Ansible aws_ec2 plugin: how to set credentials for discovered hosts?

I need to run ad-hoc commands like "-m ping" against existing EC2 instances, but the connection requires a key pair.
How do I set the key pair for the discovered hosts, the way "aws_access_key_id" is stored in ~/.aws/credentials for boto?
I also have a problem with the inventory:
I have an "inventory" folder next to Ansible where both local hosts and the aws_ec2.yml file are stored, but ansible-inventory --list only works for the aws_ec2.yml file...

You can declare the appropriate credentials and other host-specific variables right in the inventory file, e.g.:
[ec2fleet]
35.... ansible_ssh_user=ec2-user
15.... ansible_ssh_user=ubuntu
[ec2fleet:vars]
ansible_user=deployer
ansible_ssh_private_key_file=/home/deployer/.ssh/deployer.pem
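If you are using the aws_ec2 inventory plugin instead of a static file, the plugin's config file can carry both the AWS credentials and the connection variables for the discovered hosts. A minimal sketch, assuming a "default" profile in ~/.aws/credentials and a key path you would adjust:

# inventory/aws_ec2.yml -- profile, region, and key path are placeholders
plugin: aws_ec2
regions:
  - us-east-1
# Either name a profile from ~/.aws/credentials ...
aws_profile: default
# ... or set aws_access_key / aws_secret_key here directly.
compose:
  # compose values are Jinja2 expressions, hence the nested quotes
  ansible_user: "'ec2-user'"
  ansible_ssh_private_key_file: "'~/.ssh/my-keypair.pem'"

As for the inventory folder: pointing -i at the directory itself should load both sources, provided the plugin file name ends in aws_ec2.yml and the aws_ec2 plugin is listed under enable_plugins in the [inventory] section of ansible.cfg.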

Related

Ansible: Configure global variable which "environments" to set up

I want to be able to set up different "environments" (sets of the same variables) via Ansible, e.g. production, testing, staging, etc.
I would like to:
1. define which environments to set up (setup_environments) in some global place
2. run a role r1 once for each environment in setup_environments
3. run a role r2 without setting its environment, but r2 should have access both to the setup_environments variable AND be able to load the defined environments for certain tasks.
Is this possible? Point 3 should be possible with include_vars, but I mention it to show that I really need the setup_environments variable and can't simply run Ansible multiple times, including each environment separately with -i.
Reading your requirement, it seems that you only need to structure your inventory. Let's assume your inventory looks like
inventory.ini
[environment:children]
prod
qa
dev
webserver
database
[environment:vars]
# No global variables here / yet
[prod:children]
webserver_prod
database_prod
[qa:children]
webserver_qa
database_qa
[dev:children]
webserver_dev
database_dev
# Logical services nodes
[webserver:children]
webserver_prod
webserver_qa
webserver_dev
[database:children]
database_prod
database_qa
database_dev
# List of all hosts
# [prod:children]
[webserver_prod]
web.prod.example.com
[database_prod]
primary.prod.example.com
secondary.prod.example.com
# [qa:children]
[webserver_qa]
web.qa.example.com
[database_qa]
primary.qa.example.com
secondary.qa.example.com
# [dev:children]
[webserver_dev]
web.dev.example.com
[database_dev]
primary.dev.example.com
secondary.dev.example.com
You can then familiarize yourself with the structure by running
ansible-inventory --graph dev
ansible-inventory --graph database_dev
...
It then becomes possible to apply playbooks, tasks, etc. to just certain nodes, e.g. via
ansible-playbook --limit database_dev ...
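Alternatively, the playbook itself can target the group directly (a sketch; the file name db_dev.yml is illustrative):

# db_dev.yml
- hosts: database_dev
  tasks:
    - name: Touch only the dev database nodes
      ping:

ansible-playbook db_dev.yml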
Furthermore you should introduce group_vars like
all
prod
qa
dev
webserver_prod
database_prod
...
It all depends on the structure of the inventory, but it makes everything easier afterwards.
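As a minimal sketch of what those group_vars files could contain (the variable names below are illustrative, not taken from your setup):

# group_vars/all.yml -- global defaults, e.g. the list from your question
setup_environments:
  - prod
  - qa
  - dev

# group_vars/prod.yml -- values every host under the prod group inherits
env_name: prod
db_host: primary.prod.example.com

# group_vars/dev.yml
env_name: dev
db_host: primary.dev.example.com

With setup_environments living in group_vars/all.yml, role r1 can be applied once per environment via a loop (e.g. include_role with loop: "{{ setup_environments }}"), and r2 can read the same variable without any extra wiring.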

Why does the playbook take wrong values for group variables?

I have a problem with group variables.
Example: I have two inventory groups, group_A and group_B, and files of the same names in group_vars:
inventories/
  hosts.inv
  group_vars/
    group_A
    group_B

hosts.inv:
[group_A]
server1
server2
[group_B]
server3
server4

group_vars/group_A:
var_port: 9001

group_vars/group_B:
var_port: 9002
The problem is that when I execute:
ansible-playbook playbooks/playbook.yml -i inventories/hosts.inv -l group_B
the playbook is executed against the proper scope of servers (server3, server4), but it takes variables from the group variables file group_A.
Expected result: var_port: 9002
In reality: var_port: 9001
ansible 2.4.2.0
BR, Oleg
I enabled ANSIBLE_DEBUG, and here is what I found:
2018-05-03 15:23:23,663 p=129458 u=user | 129458 1525353803.66336: Loading data from /ansible/inventories/prod/group_vars/group_B.yml
2018-05-03 15:23:23,663 p=129458 u=user | 129661 1525353803.66060: in run() - task 00505680-eccc-d94e-2b1b-0000000000f4
2018-05-03 15:23:23,664 p=129458 u=user | 129661 1525353803.66458: calling self._execute()
2018-05-03 15:23:23,665 p=129458 u=user | 129458 1525353803.66589: Loading data from /ansible/inventories/prod/group_vars/group_A.yml
On playbook execution, Ansible scans all files in the group_vars folder that define the variable var_port, and the last one loaded wins...
as you can find in another topic:
Ansible servers/groups in development/production
and in the documentation:
http://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
Note
Within any section, redefining a var will overwrite the previous instance. If multiple groups have the same variable, the last one loaded wins. If you define a variable twice in a play's vars: section, the 2nd one wins.
What is still NOT clear to me is how to manage the configuration files. In this case I would have to use unique variable names for each group, but that is not possible with roles. Or should I use include_vars when I call the playbook?
A great example of how to manage variable files in a multistage environment comes from DigitalOcean:
How to Manage Multistage Environments with Ansible
I believe that the problem here, while not explicitly stated in the original question, is that server{1,2} and server{3,4} are actually the same servers in two different groups at the same level.
I ran into this problem myself, which caused me to do some digging. I don't agree with the behavior, but it is as designed. It was even fixed with full backwards compatibility, and the pull request was still rejected:
Discussion
Pull Request
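If the hosts really do sit in both groups, one documented knob (available since Ansible 2.4, so it applies to the 2.4.2.0 above) is ansible_group_priority, which decides which same-level group wins the merge. Note that it must be set in the inventory source itself, not in group_vars/. A sketch against the inventory above:

# hosts.inv -- the group with the higher priority wins variable merges
[group_A:vars]
ansible_group_priority=1

[group_B:vars]
ansible_group_priority=10

With that in place, a host that belongs to both groups resolves var_port to 9002 from group_B, regardless of alphabetical load order.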

Ansible using common user and pass to hosts

I have a hosts file with 4 host IPs, and I always append "ansible_connection=ssh ansible_ssh_user=user ansible_ssh_pass=pass" to each host IP to make the connection work.
But it is tedious to add these so many times. Could someone please tell me where I can keep these common parameters so they apply to all my host IPs at once?
Create a file named all inside a group_vars directory next to your hosts file, with this content:
ansible_connection: ssh
ansible_ssh_user: user
ansible_ssh_pass: pass
It should work.
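A sketch of the resulting layout, with an ad-hoc ping to confirm the shared credentials are picked up (the inventory file name hosts is an assumption):

inventory/
  hosts
  group_vars/
    all

ansible all -i inventory/hosts -m ping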

Run Ansible playbook on UNIQUE user/host combination

I've been trying to introduce Ansible in our team to manage different kinds of application concerns, such as configuration files for products and applications, the distribution of maintenance scripts, ...
We don't like to work with hostnames in our team because we have 300+ of them with meaningless names. Therefore, I started out creating aliases for them in the Ansible hosts file, like:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
Meaning we have a BPM application named "app1" that is deployed on two hosts in integration testing and on two hosts in user acceptance testing. So far so good. Now I can run an Ansible playbook to (for example) set up SSH access (authorized_keys) for team members or push a maintenance script. I can run those PBs on each host separately, on all hosts in ITT or UAT, or even everywhere.
But, typically, we'll have to install the same application app1 again on an existing host but with a different purpose - say a "training" environment. My reflex would be to do this:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-t]
bpm-app1-t1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-t2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
bpm-t
But ... running PBs now becomes a mess and causes errors. Logically I have two alias names reaching the same user/host combination: bpm-app1-u1 and bpm-app1-t1. I don't mind, that's perfectly logical, but if I were to test a new maintenance script, I would first push it to bpm-app1-i1 for testing and, when OK, I would probably run the PB against bpm-all. But because of the non-unique user/host combinations for some aliases, the PB would run multiple times on the same user/host. Depending on the actions in the PB this may coincidentally work, but it may also fail horribly.
Is there no way to tell Ansible "Run on ALL - UNIQUE user/host combinations" ?
Since most tasks change something on the remote host, you could use conditionals to check for that change on the host before running.
For example, if your playbook has a task that runs a script which creates a file on the remote host, you can add a when clause to skip the task if the file exists, and check for the existence of that file with a stat task beforehand:
- name: Check whether the script has run before by looking for the file it creates
  stat:
    path: /path/to/something
  register: something

- name: Run script when the file above does not exist
  command: bash myscript.sh
  when: not something.stat.exists
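If you want to tackle the duplicate aliases head-on rather than relying on idempotent tasks, one possible approach (a sketch, assuming every alias defines ansible_user and ansible_host as in the inventory above) is to build a deduplicated group first and run the real plays against it:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Keep only the first alias for each unique user/host combination
      add_host:
        name: "{{ item }}"
        groups: bpm_unique
      loop: "{{ groups['bpm-all'] }}"
      # Add the alias only if no earlier alias in the list already
      # covers the same ansible_user/ansible_host pair
      when: >-
        groups['bpm-all'][:groups['bpm-all'].index(item)]
        | map('extract', hostvars)
        | selectattr('ansible_host', 'equalto', hostvars[item].ansible_host)
        | selectattr('ansible_user', 'equalto', hostvars[item].ansible_user)
        | list | length == 0

- hosts: bpm_unique
  tasks:
    - name: Each user/host pair is now visited exactly once
      ping: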

Ansible with multiple SSH key pairs

I am new to Ansible. I was able to test it and it works fine for my test requirement. To make the connection between the management node and the client nodes I am using an already created SSH key pair. How can I use another node with a different SSH key pair? For reference, consider 3 EC2 instances, each with a different key pair.
Good news- in a basic use case, this is fairly easy. Simply use the ansible_ssh_private_key_file parameter in your Ansible inventory.
Here are some examples purloined from my personal file:
$ cat hosts.ini
[server1]
54.1.2.3 ansible_ssh_private_key_file=~/.ssh/server1.pem
[testservers]
ec2-54-2-3-4.compute-1.amazonaws.com ansible_ssh_private_key_file=~/.ssh/aws-testserver.pem ansible_ssh_user=ubuntu
ec2-54-2-3-5.compute-1.amazonaws.com ansible_ssh_private_key_file=~/.ssh/aws-testserver.pem ansible_ssh_user=ubuntu
[piwall]
10.0.0.88 ansible_ssh_private_key_file=~/.ssh/raspberrypi.pem ansible_ssh_user=pi
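With an inventory like that, each group transparently uses its own key; for example, a quick ad-hoc check against the piwall group:

ansible piwall -i hosts.ini -m ping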
tedder42 is correct; however, there is a better way of doing it.
See ansible_ssh_private_key_file here.
I have the following in my hosts file:
# SSH Keys configuration
[all_servers:vars]
ansible_ssh_private_key_file = <YOUR PRIVATE KEY LOCATION>
# Server configuration
[all_servers:children]
elastic_servers
nginx_servers
[elastic_servers]
44.22.11.22
44.55.66.77
22.11.22.33
[nginx_servers]
22.24.123.123
233.111.222.11
If you have multiple keys configuration, you can do something like the following
[nginx:vars]
ansible_ssh_private_key_file = <YOUR PRIVATE KEY LOCATION>
[app:vars]
ansible_ssh_private_key_file = <YOUR 2nd PRIVATE KEY LOCATION>
[nginx:children]
nginx_servers
[app:children]
app_servers
[nginx_servers]
1.2.3.4
[app_servers]
5.5.5.5
6.6.6.6
That's way cleaner than tedder42's answer. This is useful if you have multiple keys for multiple servers.
Otherwise, you can set your key in the ansible.cfg file instead.
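For the ansible.cfg route, the relevant setting is private_key_file under [defaults]; the path below is a placeholder:

# ansible.cfg
[defaults]
private_key_file = ~/.ssh/default-key.pem

That key then becomes the default for every host that doesn't override it with ansible_ssh_private_key_file in the inventory.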
