How to get IP addresses using Ansible?

I have two DigitalOcean droplets with public and private IP addresses, created by Vagrant and an Ansible playbook.
I need to create AWS Route53 records for each address (two records per droplet).
How can I get the addresses into vars so I can use them in the playbook?

I made a playbook this weekend to do something very similar.
In my case, I create a droplet on DigitalOcean, save the IP address, and then add a DNS record to AWS.
I hope this helps. Note that the playbook still has some things hardcoded, such as region and size, because I have not polished it yet.
Also, this is the first time I have used Ansible with DO and AWS, and it's the first time I have used the "register" feature to save variables, so this is probably a long way from best practice.
One thing that seems ugly is my hardcoding of my venv Python interpreter.
If you are happy to use your system Python, then you don't need to worry about that. The problem is that when Ansible sees connection: local, it uses the system Python, which is a bit odd since the script is running in a venv on the same machine.
You need to pip install boto3, and install two collections:
ansible-galaxy collection install community.digitalocean
ansible-galaxy collection install community.aws
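If you prefer, the same collections can be listed in a Galaxy requirements file and installed in one step (the file name requirements.yml is just a convention; any path passed to -r works):

# requirements.yml
collections:
  - name: community.digitalocean
  - name: community.aws

ansible-galaxy collection install -r requirements.yml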
Example playbook:
---
- hosts: all
  connection: local
  become_user: tim
  vars: # local connection defaults to using the system python
    ansible_python_interpreter: /home/tim/pycharm_projects/django_api_sync/ansible/venv/bin/python3
  vars_files:
    - includes/secret_variables.yml

  tasks:
    - name: create a DigitalOcean Droplet
      community.digitalocean.digital_ocean_droplet:
        state: present
        name: "{{ droplet_name }}"
        oauth_token: "{{ digital_ocean_token }}"
        size: "s-2vcpu-2gb"
        region: SGP1
        monitoring: yes
        unique_name: yes
        image: ubuntu-20-04-x64
        wait_timeout: 500
        ssh_keys: ["{{ digital_ocean_ssh_fingerprint }}"]
      register: my_droplet

    - name: Print IP address
      ansible.builtin.debug:
        msg: Droplet IP address is {{ my_droplet.data.ip_address }}

    - name: Add A record with route53
      community.aws.route53:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        state: present
        zone: growthpath.com.au
        record: "{{ ansible_host }}"
        type: A
        ttl: 7200
        value: "{{ my_droplet.data.ip_address }}"
        wait: yes
Example inventory file:
all:
  hosts:
    test-dear1.growthpath.com.au:
      droplet_name: test-dear1
      ansible_host: test-dear1.growthpath.com.au
ansible-playbook -i inventory_aws_test.yml -v create_new_droplet.yml
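The question asks for two records per droplet (public and private). The exact shape of the module's return data varies by version, so treat this as a sketch: with recent versions of community.digitalocean, the droplet's addresses are typically listed under my_droplet.data.droplet.networks.v4, where each entry has an ip_address and a type of "public" or "private". A second Route53 task along these lines (the private.* record name is a hypothetical choice) could then add the private record:

- name: Add A record for the private address
  community.aws.route53:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    state: present
    zone: growthpath.com.au
    record: "private.{{ ansible_host }}"  # hypothetical record name
    type: A
    ttl: 7200
    value: >-
      {{ my_droplet.data.droplet.networks.v4
         | selectattr('type', 'equalto', 'private')
         | map(attribute='ip_address')
         | first }}
    wait: yes

Inspect the registered variable with debug: var=my_droplet first; if your module version returns a different structure, adjust the path accordingly.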

Related

Create EC2 instance and install Python package using Ansible playbook

I've created an Ansible playbook to create an EC2 instance and install Python so that I can connect to the server via SSH.
The playbook successfully creates an EC2 instance, but it doesn't install Python on the newly created EC2 instance; instead it installs Python on my master machine.
Can someone help me resolve this?
My Code:
- hosts: localhost
  remote_user: ubuntu
  become: yes
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        key_name: yahoo
        instance_type: t2.micro
        region: "ap-south-1"
        image: ami-0860c9429baba6ad2
        count: 1
        vpc_subnet_id: subnet-aa84fbe6
        assign_public_ip: yes
      tags:
        - creation

    - name: Going to Install Python
      apt:
        name: python
        state: present
      tags:
        - Web

    - name: Start the service
      service:
        name: python
        state: started
      tags:
        - start
As is shown in the fine manual, ec2: operations should be combined with add_host: in order to "pivot" off of localhost over to the newly provisioned instance, which one cannot add to an inventory file because it doesn't exist yet:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        ...etc etc
      register: run_instances

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: just_launched
      loop: "{{ run_instances.instances }}"

    - name: Wait for SSH to come up
      delegate_to: "{{ item.public_ip }}"
      wait_for_connection:
        delay: 60
        timeout: 320
      loop: "{{ run_instances.instances }}"

- hosts: just_launched
  # and now you can do fun things to those ec2 instances
I suspect your question about "installing python" is unrelated, but just in case: if you genuinely do need to add Python to those EC2 instances, you cannot use most Ansible modules to do it, because they're written in Python. That is what the raw: module is designed to fix:
- hosts: just_launched
  # you cannot gather facts without python, either
  gather_facts: no
  tasks:
    - raw: |
        echo watch out for idempotency issues with your playbook using raw
        yum install python
    - name: NOW gather the facts
      setup:
    - command: echo and now you have working ansible modules again
I also find your invocation of service: { name: python, state: started } suspicious, since python is not a system service, but I would guess that task simply has not run yet, given your question here.

Ansible creates a directory on the control machine even if delegate_to is set to remote when running the playbook with a local connection

I'm running an ansible-playbook configured to provision EC2 and configure the machine. I set the connection to local for the playbook because there is no machine to manage before the script runs. Once the instance is provisioned, I am supposed to create a directory on the remote server. Since the playbook runs with a local connection, I set delegate_to: {{ remote_host }} so that the directory creation is executed on the remote host, but it still creates the directory on the control machine.
- name: provision instance for Apache
  hosts: localhost
  connection: local
  remote_user: ubuntu
  gather_facts: false
  vars_files:
    - vars/env.yml
  vars:
    allow_world_readable_tmpfiles: true
    key_name: ansible-test
    region: us-east-2
    image: ami-0e82959d4ed12de3f # Ubuntu 18.04
    id: "practice-akash-ajay"
    sec_group: "{{ id }}-sec"
    remote_host: ansible-test
    remaining_days: 20
    acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
    # acme_directory: https://acme-v02.api.letsencrypt.org/directory
    cert_name: "{{ app_slug }}.{{ app_domain }}"
    intermediate_path: /etc/pki/letsencrypt/intermediate.pem
    cert:
      common_name: "{{ app_slug }}.{{ app_domain }}"
      organization_name: PearlThoughts
      email_address: "{{ letsencrypt_email }}"
      subject_alt_name:
        - "DNS:{{ app_slug }}.{{ app_domain }}"
  roles:
    - aws
  tasks:
    - name: Create certificate storage directory
      file:
        dest: "{{ item.path }}"
        mode: 0750
        state: directory
      delegate_to: "{{ remote_host }}"
      with_items:
        - path: ~/lets-seng-test
When you set connection explicitly on the play, it is used for all tasks in that play. So don't do that. Ansible will by default use a local connection for localhost unless you have explicitly changed that in your inventory (and again, don't do that).
If you remove the connection setting on your play, delegate_to might work the way you expect... but I don't think you want to do that.
If your playbook provisions a new host for you, the way to target that host with Ansible is to have a new play with that host (or its corresponding group) listed in the target hosts: for the play. Conceptually, you want:
- hosts: localhost
  tasks:
    - name: provision an AWS instance
      aws_ec2: [...]
      register: hostinfo

    - add_host:
        name: myhost
        ansible_host: "{{ hostinfo... }}"

- hosts: myhost
  tasks:
    - name: do something on the new host
      command: uptime
You probably need some logic between provisioning the host and executing tasks against it, to ensure that it is up and ready to service requests; see the sketch below.
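A minimal sketch of that in-between step, using the wait_for_connection module (the delay and timeout values here are arbitrary choices):

- hosts: myhost
  gather_facts: no
  tasks:
    - name: wait until the new host accepts connections
      wait_for_connection:
        delay: 10
        timeout: 300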
Instead of using the add_host module, a better solution is often to rely on the appropriate inventory plugin.
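For example, an amazon.aws.aws_ec2 inventory source (a YAML file whose name ends in aws_ec2.yml) can discover the instance on later runs without any add_host; the region and tag filter below are assumptions for illustration:

# inventory/demo.aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-2
filters:
  tag:Name: ansible-test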

Ansible Playbook with Variables?

Environment:
Ubuntu 16.04 in Azure vm
ansible 2.4.3.0
python version = 2.7.12 (default, Nov 20 2017, 18:23:56) [GCC 5.4.0 20160609]
azure-cli (2.0.27)
I have successfully created my first vm in azure using an Ansible Playbook.
Now, my question is this: Can I use variables in the playbook for things like admin-password, vmname etc, so that I could, for instance, put a list of 10 machines I want to create in a file with the different names and their passwords, and then loop through and feed that info through the playbook command to the playbook.yml file?
I'm open to any way of doing this, and I'm aware that Ansible may already have an easier/normal way of doing it, I just don't know enough to google what I need.
EDIT:
Evidently my question is too broad. I'll try to be more specific. I looked at the link that Konstantin Suvorov left. As I understand it, the method described does the inverse of what I'm after, by letting you assign variables to groups, and in the examples, to existing machines.
What I need is to be able to create VMs, in the normal YAML style, but to have the process loop in some way so that it will create machines 1 through 10 with all the same info, save for the machine name and the password. The examples above seem to use the variables to assign the same info to multiple machines.
How can I do this? I'm mostly a bash scripter, so what I'd do is put name:password for each machine on multiple lines of a file, loop through it, and run the ansible-playbook command once per loop to create each machine, feeding in the variables somehow. However, I have a feeling that there's a more efficient way, such that I run the command once and it creates each machine with the variables/info that are coded (somehow) into the YAML file. Is this specific enough?
FURTHER EDIT:
Thanks for putting me on the right track. I have adapted the script I used to create one VM to try to make it create two. I am getting assorted errors, none of which seem to actually point to offending code. Here follows what I'm trying:
- name: Create Azure VM
  hosts: localhost
  connection: local
  with_items:
    - { vmname: 'testvmfordb', userpassword: 'my password' }
    - { vmname: 'testvmforfe', userpassword: 'my other password' }
  tasks:
    - name: Create virtual network interface card
      azure_rm_networkinterface:
        resource_group: my_rg
        name: "{{ item.vmname }}NIC"
        virtual_network: my_rg
        subnet: default
        public_ip_name: "{{ item.vmname }}IP"
        security_group: my_firewall_rules

    - name: Create VM
      azure_rm_virtualmachine:
        resource_group: my_rg
        name: "{{ item.vmname }}"
        vm_size: Standard_DS1_v2
        location: EastUS
        admin_username: myusername
        admin_password: "{{ item.userpassword }}"
        ssh_password_enabled: true
        storage_container: vhds
        storage_blob: "{{ item.vmname }}osdisk.vhd"
        network_interfaces: "{{ item.vmname }}NIC"
        image:
          offer: UbuntuServer
          publisher: Canonical
          sku: '16.04-LTS'
          version: latest
        os_type: Linux
Thanks in advance!
I followed @KonstantinSuvorov's suggestion and separated the creation tasks into a separate file. I got this message:
TASK [include_tasks] *******************************************************************************************************************************
task path: /home/user/ansible_playbooks/azure_create_many_vms.yml:5
fatal: [localhost]: FAILED! => {
"reason": "included task files must contain a list of tasks"
}
fatal: [localhost]: FAILED! => {
"reason": "included task files must contain a list of tasks"
}
Here are the files/code:
azure_create_many_vms.yml
- name: Create Azure VM
  hosts: localhost
  connection: local
  tasks:
    - include_tasks: azure_create_many_vms_tasks.yml
      with_items:
        - { vmname: 'testvmforelastic', userpassword: 'my user password' }
        - { vmname: 'testvmforfe', userpassword: 'my user password' }
azure_create_many_vms_tasks.yml
tasks:
  - name: Create virtual network interface card
    azure_rm_networkinterface:
      resource_group: my_core_rg
      name: "{{ item.vmname }}NIC"
      virtual_network: my_core_rg
      subnet: default
      public_ip_name: "{{ item.vmname }}IP"
      security_group: my_core_firewall_rules

  - name: Create VM
    azure_rm_virtualmachine:
      resource_group: my_core_rg
      name: "{{ item.vmname }}"
      vm_size: Standard_DS1_v2
      location: EastUS
      admin_username: adminuser
      admin_password: "{{ item.userpassword }}"
      ssh_password_enabled: true
      storage_container: vhds
      storage_blob: "{{ item.vmname }}osdisk.vhd"
      network_interfaces: "{{ item.vmname }}NIC"
      image:
        offer: UbuntuServer
        publisher: Canonical
        sku: '16.04-LTS'
        version: latest
      os_type: Linux
You should remove the tasks keyword from the included file, leaving just a plain list of tasks, like this:
- name: Create virtual network interface card
  azure_rm_networkinterface:
    ...

- name: Create VM
  azure_rm_virtualmachine:
    ...

Cannot access machine after creating it through the ec2 module within the same script

I have problems with my playbook, which should create new EC2 instances through the built-in module and connect to them to set some default stuff.
I went through a lot of tutorials/posts, but none of them mentioned the same problem, therefore I'm asking here.
Everything goes well in terms of creation, but once the instances are created and I have successfully waited for SSH to come up, I get an error which says the machine is unreachable.
UNREACHABLE! => {"changed": false, "msg": "ERROR! SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
I tried to connect manually (from the terminal to the same host) and I was successful (while the playbook was waiting for the connection). I also tried to increase the timeout generally in ansible.cfg. I verified that the given hostname is valid (and it is) and also tried the public IP instead of the public DNS, but nothing helps.
Basically, my playbook looks like this:
---
- name: create ec2 instances
  hosts: local
  connection: local
  gather_facts: False
  vars:
    machines:
      - { type: "t2.micro", instance_tags: { Name: "machine1", group: "some_group" }, security_group: ["SSH"] }
  tasks:
    - name: launch new ec2 instances
      local_action: ec2
        group={{ item.security_group }}
        instance_type={{ item.type }}
        image=...
        wait=true
        region=...
        keypair=...
        count=1
        instance_tags=...
      with_items: machines
      register: ec2

    - name: wait for SSH to come up
      local_action: wait_for host={{ item.instances.0.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.results

    - name: add host into launched group
      add_host: name={{ item.instances.0.public_ip }} group=launched
      with_items: ec2.results

- name: with the newly provisioned EC2 node configure basic stuff
  hosts: launched
  sudo: yes
  remote_user: ubuntu
  gather_facts: True
  roles:
    - common
Note: in many tutorials, the results from creating EC2 instances are accessed in a different way, but that's probably a matter for a different question.
Thanks
Solved:
I don't know how, but it suddenly started to work. No clue. In case I find some new info, I will update this question.
A couple of points that may help:
I'm guessing it's a version difference, but I've never seen a 'results' key in the registered 'ec2' variable. In any case, I usually use 'tagged_instances' -- this ensures that even if the play didn't create an instance (i.e. because a matching instance already existed from a previous run-through), the variable will still return instance data you can use to add a new host to inventory.
Try adding 'search_regex: "OpenSSH"' to your 'wait_for' task to ensure that it's not trying to run before the SSH daemon is completely up.
The modified plays would look like this:
- name: wait for SSH to come up
  local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started search_regex="OpenSSH"
  with_items: ec2.tagged_instances

- name: add host into launched group
  add_host: name={{ item.public_ip }} group=launched
  with_items: ec2.tagged_instances
You also, of course, want to make sure that Ansible knows to use the specified key when SSH'ing to the remote host, either by adding 'ansible_ssh_private_key_file' to the inventory entry or by specifying '--private-key=...' on the command line; see the sketch below.
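A minimal sketch of the inventory-variable approach (the key path here is a hypothetical example); add_host accepts arbitrary host variables alongside the name:

- name: add host into launched group
  add_host:
    name: "{{ item.public_ip }}"
    group: launched
    ansible_ssh_private_key_file: ~/.ssh/my-ec2-key.pem  # hypothetical key path
  with_items: ec2.tagged_instances

The command-line equivalent would be: ansible-playbook --private-key=~/.ssh/my-ec2-key.pem playbook.yml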

Use Ansible control master's IP address in Jinja template

I would like to insert an IP address into a J2 template which is used by an Ansible playbook. That IP address is not the address of the host which is being provisioned, but the IP of the host from which the provisioning is done. Everything I have found so far covers using variables/facts related to the hosts being provisioned.
In other words: the IP I'd like to insert is the one in ['ansible_default_ipv4']['address'] when executing ansible -m setup 127.0.0.1.
I think that I could use a local playbook to write a dynamically generated template file containing the IP, but I was hoping that this might be possible “the Ansible way”.
Just use this:
{{ ansible_env["SSH_CLIENT"].split()[0] }}
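This works because the SSH daemon sets SSH_CLIENT to "client_ip client_port server_port" for the session, so the first field is the control machine's address as seen by the managed host. It only applies when the host is actually reached over SSH (not connection: local), and ansible_env requires fact gathering. A minimal sketch of a task using it:

- name: show the control machine's IP as seen by the remote host
  debug:
    msg: "{{ ansible_env['SSH_CLIENT'].split()[0] }}"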
You can force Ansible to fetch facts about the control host by running the setup module locally by using either a local_action or delegate_to. You can then either register that output and parse it or simply use set_fact to give it a useful name to use later in your template.
An example play might look something like:
tasks:
  - name: setup
    setup:
    delegate_to: 127.0.0.1

  - name: set ansible control host IP fact
    set_fact:
      ansible_control_host_address: "{{ hostvars[inventory_hostname]['ansible_eth0']['ipv4']['address'] }}"
    delegate_to: 127.0.0.1

  - name: template file with ansible control host IP
    template:
      src: /path/to/template.j2
      dest: /path/to/destination/file
And then use the ansible_control_host_address variable in your template as normal:
...
Ansible control host IP: {{ ansible_control_host_address }}
...
This is how I solved the same problem (Ansible 2.7):
- name: Get local IP address
  setup:
  delegate_to: 127.0.0.1
  delegate_facts: yes

- name: Add to hosts file
  become: yes
  lineinfile:
    line: "{{ hostvars['127.0.0.1']['ansible_default_ipv4']['address'] }} myhost"
    path: /etc/hosts
Seems to work like a charm. :-)
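Because delegate_facts: yes stores the control host's facts under hostvars['127.0.0.1'], the same lookup also works directly inside a Jinja2 template (the template content below is a hypothetical illustration):

# template.j2
control_host_ip={{ hostvars['127.0.0.1']['ansible_default_ipv4']['address'] }}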
