Finding RDS instances with Ansible - ansible

I am trying to configure my EC2 instances dynamically with Ansible. I am having a problem working out how to find my RDS instances. I can set key tags, but Ansible's ec2.py doesn't pick them up (https://github.com/ansible/ansible/issues/7564). Does anyone have any suggestions?
For instance, I want one RDS instance for production, one for staging and one just for testing.

If you mean the ansible ec2.py inventory script doesn't pick up RDS instances, then yes, I believe you're right: it will only find EC2 instances.
We have a similar setup with a separate RDS instance for the staging and production environments. The way we solved it was that any playbooks/roles that need to run against the MySQL database run against the magic host "localhost", with the RDS endpoints set in variables. We use a separate variable file per environment and load it in at the beginning of the play.
e.g.
|--vars/
| |--staging.yml
| |--production.yml
|
|--playbook.yml
Example "production.yml" file:
---
DB_SERVER: database-endpoint.cls4o6q35lol.eu-west-1.rds.amazonaws.com
DB_PORT: 3306
DB_USER: dbusername
DB_PASSWORD: dbpassword
Example playbook that creates a database
- name: Playbook name
  hosts: localhost
  vars_files:
    - vars/{{ env }}.yml
  tasks:
    - mysql_db: login_host={{ DB_SERVER }}
                login_user={{ DB_USER }}
                login_password={{ DB_PASSWORD }}
                login_port={{ DB_PORT }}
                collation=utf8_general_ci
                encoding=utf8
                name=databasename
                state=present
Then you can just specify the env variable when you run the playbook:
ansible-playbook playbook.yml --extra-vars "env=production"

The other answer is wrong now (if I'm reading the question correctly). In the same directory as your ec2.py, add an ec2.ini file containing:
ec2.ini
[ec2]
rds = true
I had a similar issue but the docs clearly state that ec2.py can be used to find other resources.
Ansible Dynamic Inventory
There are other config options in ec2.ini, including cache control and destination variables. By default, the ec2.ini file is configured for all Amazon cloud services, but you can comment out any features that aren’t applicable. For example, if you don’t have RDS or elasticache, you can set them to False
Edit: However, I'd also like to highlight that even though it states all resources are supported by default, I didn't get RDS results until I specified I wanted them in the ec2.ini file.

Related

Ansible: where to set ansible_user and ansible_ssh_pass in a role

I am creating my first role in Ansible to install packages through apt, as it is a task I do on a daily basis.
apt
├── apt_install_package.yaml
├── server.yaml
├── tasks
│   └── main.yaml
└── vars
    └── main.yaml
# file: apt_install_package.yaml
- name: apt role
  hosts: server
  gather_facts: no
  become: yes
  roles:
    - apt
# tasks
- name: install package
  apt:
    name: "{{ package }}"
    state: present
    update_cache: True
  with_items: "{{ package }}"
#vars
---
package:
  - nginx
  - supervisor
#inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
In order to install the packages I need to specify an SSH user, but I do not know where to set it.
My idea is to pass these parameters with variables, something similar to this:
ansible_user: "{{ myuser }}"
ansible_ssh_pass: "{{ mypass }}"
ansible_become_pass: "{{ mypass }}"
Any suggestions?
Regards,
As per the latest Ansible documentation on inventory sources, an SSH password can be configured using the ansible_password keyword, while an SSH user is specified with the ansible_user keyword, for any given host or host group entry.
It is worth mentioning that in order to implement SSH password login for hosts with Ansible, the sshpass program is required on the controller. Any plays will otherwise fail due to an error injecting the password while initializing connections.
Define these keywords within your existing inventory in the following manner:
#inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: <username>
      ansible_password: <password>
      ansible_become_password: <password>
The YAML inventory plugin will parse these host configuration parameters from the inventory source, and they will be appended as host/group variables used for establishing the SSH connection.
The major disadvantage in this scenario is that your authentication secrets are stored in plain text, and Ansible code is usually committed to source control repositories as IaC. That is unacceptable in production environments.
Consider using a lookup plugin such as env or file to render password secrets dynamically from the controller and prevent leakage.
#inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: "{{ lookup('env', 'MY_SECRET_ANSIBLE_USER') }}"
      ansible_password: "{{ lookup('file', '/path/to/password') }}"
      ansible_become_password: "{{ lookup('env', 'MY_SECRET_BECOME_PASSWORD') }}"
Other lookup plugins may also be of use to you depending on your use-case, and can be further researched here.
Alternatively, you could designate a default remote connection user, connection user password, and become password separately within your Ansible configuration file, usually located at /etc/ansible/ansible.cfg.
Read more about available configuration options in Ansible here.
All three mentioned configuration options can be specified under the [defaults] INI section as:
remote_user: default connection username
connection_password_file: default connection password file
become_password_file: default become password file
NOTE: connection_password_file and become_password_file must be filesystem paths to files containing the password Ansible should use to log in or escalate privileges. Storing a default password as a plain-text string within the configuration file is not supported.
Another option involves environment variables being present at the time of playbook execution. Either export them or pass them explicitly on the command line when running the playbook.
ANSIBLE_REMOTE_USER: default connection username
ANSIBLE_CONNECTION_PASSWORD_FILE: default connection password file
ANSIBLE_BECOME_PASSWORD_FILE: default become password file
Such as the following:
ANSIBLE_BECOME_PASSWORD_FILE="/path/to/become_password" ansible-playbook ...
This approach is considerably less efficient unless you update the shell profile of the user running Ansible on the controller to set these variables automatically at login, which is generally not recommended for sensitive data such as passwords.
In regards to security, SSH key-pair authentication is preferred whenever possible. However, the second best approach would be to vault the sensitive passwords using ansible-vault.
Vault allows you to encrypt data at rest, and reference it within inventory/plays without risk of compromise.
Review this link to gain understanding of available procedures and get started.
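For illustration, here is a minimal sketch of a vaulted inventory entry, reusing the example host above; the ciphertext shown is only a placeholder for what ansible-vault encrypt_string would generate:
# inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: <username>
      ansible_password: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        <ciphertext from: ansible-vault encrypt_string '<password>' --name 'ansible_password'>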

set ansible-playbook user variable dynamically based on the ec2 distros

I'm creating an Ansible playbook that goes through a group of AWS EC2 hosts and installs some basic packages. Before the playbook can execute any tasks, it needs to log in to each host (two types of distros: AWS Linux or Ubuntu) with the correct user: {{ userXXX }}. This is the part I'm not sure about: how to pass in the correct login user, which would be either ec2-user or ubuntu.
- name: setup package agent
  hosts: ec2_distros_hosts
  user: "{{ ansible_user_id }}"
  roles:
    - role: package_agent_install
I was assuming ansible_user_id would work, based on Ansible's reserved variables, but that is not the case here. I don't want to create two separate playbooks for different distros; is there an elegant solution to dynamically look up the login user and use it as user:?
Here is the failed command with an unknown user: ansible-playbook -i inventory/ec2.py agent.yml
You have several ways to accomplish your task:
1. Create ansible user with the same name on every host
If you have one, you can use user: ansible_user in your playbook.
2. Tag every host with suitable login_name
You can create a tag (e.g. login_name) for every ec2 host and specify user in it. For Ubuntu hosts – ubuntu, for AWS Linux hosts – ec2-user.
After doing so, you can use user: "{{ ec2_tag_login_name }}" in your playbook – this will take the username from the login_name tag of the host, as shown in the sketch below.
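A minimal sketch of a play using that tag (reusing the playbook from the question; ec2.py exposes instance tags as ec2_tag_<TagName> variables):
- name: setup package agent
  hosts: ec2_distros_hosts
  user: "{{ ec2_tag_login_name }}"
  roles:
    - role: package_agent_install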
3. Patch the ec2.py script for your needs
It seems there is no decent way to get the exact platform name from the AMI, but you can use something like this:
image_name = getattr(conn.get_image(image_id=getattr(instance, 'image_id')), 'name')
login_name = 'user'
if 'ubuntu' in image_name:
    login_name = 'ubuntu'
elif 'amzn' in image_name:
    login_name = 'ec2-user'
setattr(instance, 'image_name', image_name)
setattr(instance, 'login_name', login_name)
Paste this code just before self.add_instance(instance, region) in ec2.py, with the same indentation. It fetches the image name and does some guesswork to define login_name. Then you can use user: "{{ ec2_login_name }}" in your playbook.
You can set variables based on EC2 instance tags. If you tag instances with the distro name then you can set Ansible's ssh username for each distro via group_vars files.
Example group_vars file for Ubuntu relative to your playbook: group_vars/tag_Distro_Ubuntu.yml
---
ansible_user: ubuntu
Any instances tagged Distro: Ubuntu will connect with the ubuntu user. Create a separate group_vars file per distro tag to accommodate other distros.

How to create dynamic variables in ansible

The real scenario: I want to get the resource ID of an SQS queue in AWS, which is returned after the execution of a playbook, so that I can use this variable in files to configure the application.
Persisting variables from one playbook to another
Checking the documentation, modules like set_fact and register have scope only for that specific host. There are many reasons to use variables from one host on another.
Alternatives I can think of:
Using the command module and echoing the variables to a file, then loading that variable file later via the vars section or an include.
Setting environment variables and then accessing them, but this will be difficult.
So what is the solution?
If you're gathering facts, you can access hostvars via the normal jinja2 + variable lookup:
e.g.
- hosts: serverA.example.org
  gather_facts: True
  ...
  tasks:
    - set_fact:
        taco_tuesday: False
and then, if this has run, on another host:
- hosts: serverB.example.org
  ...
  tasks:
    - debug: var="{{ hostvars['serverA.example.org']['ansible_memtotal_mb'] }}"
    - debug: var="{{ hostvars['serverA.example.org']['taco_tuesday'] }}"
Keep in mind that if you have multiple Ansible control machines (where you call ansible and ansible-playbook from), you should take advantage of the fact that Ansible can store its facts/variables in a cache (currently Redis or JSON files), so that the control machines are less likely to have different hostvars. With this, you could point your control machines at a cache file in a shared folder (which has its risks: what if two control machines are running on the same host at the same time?), or set/get facts from a Redis server.
For my uses of Amazon data, I prefer to just fetch the resource each time using a tag/metadata lookup. I wrote an Ansible plugin that allows me to do this a little more easily as I prefer this to thinking about hostvars and run ordering (but your mileage may vary).
You can pass variables On The Command Line: http://docs.ansible.com/ansible/playbooks_variables.html#passing-variables-on-the-command-line
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
You can use a local connection to run one playbook, capture the variable, and apply it to another playbook:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - shell: ansible-playbook -i ...
      register: sqs_id
    - shell: ansible-playbook -i ... -e "sqs_id={{ sqs_id.stdout }}"
Also delegation might be useful in this scenario:
http://docs.ansible.com/ansible/playbooks_delegation.html#delegation
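For example, a minimal delegation sketch (host group and file path are placeholders): the lookup runs once on the controller, and the registered result is then available to every host in the play.
- hosts: app_servers
  tasks:
    - name: fetch the sqs id once from the controller
      command: cat ~/sqs_id
      delegate_to: 127.0.0.1
      run_once: true
      register: sqs_id
    - name: use the value on every remote host
      debug: msg="SQS id is {{ sqs_id.stdout }}"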
You can also store the output in a local file and use it (http://docs.ansible.com/ansible/playbooks_delegation.html#delegation):
- name: take a sqs id
  local_action: command cat ~/sqs_id
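The write half is just one more task; a minimal sketch (assuming sqs_id was registered earlier in the play): persist the value on the controller so a later play or playbook can read it back as above.
- name: save the sqs id on the controller
  local_action: copy content="{{ sqs_id.stdout }}" dest=~/sqs_id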
PS:
I don't understand why you can't write one complex playbook that includes many roles sharing the variables.
You can write "common" variables to a host_vars or group_vars this way all the servers has access to it.
Another way may be to create a custom Ansible module or lookup plugin to hide all the boilerplate code and get easy, flexible access to the variables you need.
I had a similar issue with Azure DevOps pipelines.
I created VMs with Terraform; SSH keys and the Windows username/password were generated by Terraform and stored in a Key Vault.
So I then needed to query the Key Vault before running Ansible on all the created VMs. I ended up using the Azure Python SDK to get all the secrets. I also generate an inventory file and a host_vars folder with a file for each VM.
The actual playbook is now very basic and does the job perfectly. All variables for Terraform and Ansible are in a JSON file, and the Python script is less than 30 lines.

Having trouble provisioning EC2 instances using Ansible

I'm very confused about how you are supposed to launch EC2 instances using Ansible.
I'm trying to use the ec2.py inventory scripts. I'm not sure which one is supposed to be used, because there are three installed with Ansible:
ansible/lib/ansible/module_utils/ec2.py
ansible/lib/ansible/modules/core/cloud/amazon/ec2.py
ansible/plugins/inventory/ec2.py
I thought running the one in inventory/ would make most sense, so I run it using:
ansible-playbook launch-ec2.yaml -i ec2.py
which gives me:
msg: Either region or ec2_url must be specified
So I add a region (even though I have a vpc_subnet_id specified) and I get:
msg: Region us-east-1e does not seem to be available for aws module boto.ec2. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path
I'm thinking Amazon must have recently changed EC2 so you need to use a VPC? Even when I try to launch an instance from Amazon's console, the option for "EC2 Classic" is disabled.
When I try to use the ec2.py script in cloud/amazon/ I get:
ERROR: Inventory script (/software/ansible/lib/ansible/modules/core/cloud/amazon/ec2.py) had an execution error:
There are no more details than this.
After some searching, I see that ec2.py module in /module_utils has been changed so a region doesn't need to be specified. I try to run this file but get:
ERROR: The file /software/ansible/lib/ansible/module_utils/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with chmod -x /software/ansible/lib/ansible/module_utils/ec2.py.
So as the error suggests, I remove the executable permissions for the ec2.py file, but then get the following error:
ERROR: /software/ansible/lib/ansible/module_utils/ec2.py:30: Invalid ini entry: distutils.version - need more than 1 value to unpack
Does anyone have any ideas on how to get this working? Which is the correct file to use? I'm completely lost at this point.
There are several questions in your post. I'll try to summarise them in three items:
Is it still possible to launch instances in EC2 Classic (no VPC)?
How do I create a new EC2 instance using Ansible?
How to launch the dynamic inventory file ec2.py?
1. EC2 Classic
Your options will differ depending on when you created your AWS account, the type of instance and the AMI virtualisation type used. Refs: AWS account, instance type.
If none of the above parameters restricts the usage of EC2 Classic, you should be able to create a new instance without defining any VPC.
2. Create a new EC2 instance with Ansible
Since your instance doesn't exist yet, a dynamic inventory file (ec2.py) is useless. Instruct ansible to run against your local machine instead.
Create a new inventory file, e.g. new_hosts with the following contents:
[localhost]
127.0.0.1
Then your playbook, e.g. create_instance.yml should use a local connection and hosts: localhost. See an example below:
--- # Create ec2 instance playbook
- hosts: localhost
  connection: local
  gather_facts: false
  vars_prompt:
    inst_name: "What's the name of the instance?"
  vars:
    keypair: "your_keypair"
    instance_type: "m1.small"
    image: "ami-xxxyyyy"
    group: "your_group"
    region: "us-west-2"
  tasks:
    - name: make one instance
      ec2: image={{ image }}
           instance_type={{ instance_type }}
           keypair={{ keypair }}
           instance_tags='{"Name":"{{ inst_name }}"}'
           region={{ region }}
           group={{ group }}
           wait=true
      register: ec2_info

    - name: Add instances to host group
      add_host: hostname={{ item.public_ip }} groupname=ec2hosts
      with_items: ec2_info.instances

    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2_info.instances
This play will create an EC2 instance and register its public IP in the ansible host group ec2hosts, i.e. as if you had defined it in the inventory file. This is useful if you want to provision the instance just created: just add a new play with hosts: ec2hosts.
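A minimal sketch of such a follow-up play, assuming an Amazon Linux AMI (the remote user and task are placeholders):
- hosts: ec2hosts
  remote_user: ec2-user
  tasks:
    - name: example provisioning task
      debug: msg="provision {{ inventory_hostname }} here"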
Ultimately, launch ansible as follows:
export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=<your aws access key here>
export AWS_SECRET_KEY=<your aws secret key here>
ansible-playbook -i new_hosts create_instance.yml
The purpose of the environment variable ANSIBLE_HOST_KEY_CHECKING=false is to avoid being prompted to add the ssh host key when connecting to the instance.
Note: boto needs to be installed on the machine that runs the above ansible command.
3. Use ansible's ec2 dynamic inventory
The EC2 dynamic inventory consists of two files, ec2.py and ec2.ini. In your particular case, I believe your issue is that ec2.py is unable to locate the ec2.ini file.
To solve your issue, copy ec2.py and ec2.ini to the same folder in the machine where you intend to run ansible, e.g. to /etc/ansible/.
For releases prior to Ansible 2.0 (change the branch accordingly):
cd /etc/ansible
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
chmod u+x ec2.py
For Ansible 2:
cd /etc/ansible
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
chmod u+x ec2.py
Configure ec2.ini and run ec2.py --list, which should print a JSON-formatted inventory of your hosts to stdout.

Ansible ec2 only provision required servers

I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers task does what you'd expect. It creates an ec2 instance; it also creates a host group [webservers] and adds the created instance IP to it.
The Configure {{ application_name }} servers step then configures that server, installing everything I need.
So far so good, this all does exactly what I want and everything seems to work.
Here's where I'm stuck. I want to be able to fire up an ec2 instance for different roles. Ideally I'd create a dbserver, a webserver and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:
- name: Provision webservers
  hosts: webservers
  connection: local
  roles:
    - aws

- name: Provision dbservers
  hosts: dbservers
  connection: local
  roles:
    - aws

- name: Provision memcachedservers
  hosts: memcachedservers
  connection: local
  roles:
    - aws
but those groups don't exist until after the respective step is complete, so I don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to understand how that would help me. I've also looked through countless examples of ansible ec2 provisioning projects; invariably they either provision pre-existing ec2 instances, or just create a single instance and install everything on it.
In the end I realised it made much more sense to just separate the different parts of the stack into separate playbooks, with a full-stack playbook that called each of them.
My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
  hosts: apiservers # important bit
  connection: local # important bit
  vars:
    - host_group: apiservers
    - security_group: blah
  roles:
    - aws

- name: Configure {{ application_name }} apiservers
  hosts: apiservers:!127.0.0.1 # important bit
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  roles:
    - common
    - db
    - memcached
    - web
This means that the first play of each layer's playbook adds a new host to the apiservers group, with the second play (Configure ... apiservers) then being able to exclude localhost without getting a "no hosts matching" error.
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
  hosts: all

- include: webservers.yml
- include: apiservers.yml
I'm very much a beginner with regard to ansible, so please do take this for what it is: some guy's attempt to find something that works. There may be better options, and this could violate best practice all over the place.
The ec2 module supports an "exact_count" parameter, not just a "count" parameter.
Used together with "count_tag", it will create (or terminate!) instances so that the number of instances matching the specified tags ("instance_tags") equals exact_count.
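A minimal sketch of that usage (AMI, region, security group and tag values are placeholders):
- name: ensure exactly two tagged webservers exist
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: create or terminate instances to match the desired count
      ec2:
        image: ami-xxxyyyy
        instance_type: t2.micro
        region: us-west-2
        group: webserver-sg
        exact_count: 2
        count_tag:
          Role: webserver
        instance_tags:
          Role: webserver
        wait: yes
      register: ec2_result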
