How does Ansible find group_vars when working with Terraform?

I'm setting up a project using Terraform and Ansible. After Terraform creates all the needed resources, I provision the servers with Ansible using the following:
provisioner "local-exec" {
  environment = {
    ANSIBLE_SSH_RETRIES       = "30"
    ANSIBLE_HOST_KEY_CHECKING = "False"
  }
  command = "ansible-playbook ${var.ec2_ansible_playbook} --extra-vars env_name=${var.env} -u ubuntu -i ${aws_eip.main.0.public_ip},"
}
Ansible fails to find my group_vars. How do I point Ansible to the correct path where the group_vars are located? Previously, I was only using Ansible and would run something like
ansible-playbook provision.yml -i inventory/prod/hosts
Ansible had no problem finding group_vars since they sat in the same location as the hosts file. Any help is appreciated.

You can specify working_dir on the local-exec provisioner, as mentioned in the documentation.
I think by default it uses the module directory as the working directory.
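A minimal sketch, assuming the playbook, inventory and group_vars live in an ansible/ directory next to the Terraform module (that path is an assumption, adjust to your layout):

provisioner "local-exec" {
  # Run ansible-playbook from the directory that contains the playbook,
  # its inventory/ and group_vars/, so Ansible resolves them as usual.
  working_dir = "${path.module}/ansible"   # assumed layout
  environment = {
    ANSIBLE_SSH_RETRIES       = "30"
    ANSIBLE_HOST_KEY_CHECKING = "False"
  }
  command = "ansible-playbook ${var.ec2_ansible_playbook} --extra-vars env_name=${var.env} -u ubuntu -i ${aws_eip.main.0.public_ip},"
}

Alternatively, keep the group_vars directory next to the playbook file itself; Ansible also loads group_vars relative to the playbook's directory.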

Related

How to Show Ansible Host Vars and Inventory Vars

Is there a command in Ansible with which I can see all the variables for a host, such as variables from the inventory as well as from the host and group vars?
I've tried ansible-inventory, but that only provided inventory vars. The other options all seem to involve printing them from a playbook.
EDIT:
I've tried the following:
ansible -m debug -i ../inventory.yaml -a "var=hostvars[inventory_hostname]" "spine1-nxos"
But this only works when I'm inside the folder that holds my host vars. So the remaining questions are: how do I get the group vars, and how do I point the command at the folder containing the host and group vars?
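A sketch of what should work from any directory, assuming group_vars/ and host_vars/ sit next to inventory.yaml (the paths and the spine1-nxos host name are taken from the question); ansible-inventory loads those directories relative to the inventory source, not the current folder:

# Merged view of one host, including its host_vars and group_vars:
ansible-inventory -i ../inventory.yaml --host spine1-nxos --yaml

# Whole inventory, groups included, as one structure:
ansible-inventory -i ../inventory.yaml --list --yaml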

Ansible Inventory Link

I am new to Ansible and trying to learn the basics, but apparently I already fail at setting up the inventory file.
For the setup:
1) Installed Ansible via Homebrew.
2) Since no ansible.cfg was created, I created one manually at /etc/ansible/ansible.cfg:
ansible.cfg
[defaults]
inventory = /etc/ansible/hosts/;
3) A hosts file was also not there, so I created one at /etc/ansible/hosts:
hosts
Test1
Test2
When I run ansible all --list-hosts I get the error:
[WARNING]: Unable to parse /etc/ansible/hosts; as an inventory source
Since the path is correctly reflected in the error, I assume the cfg is being read correctly. Still, the target hosts file is not recognized. I have tried different paths. What do I need to change?
Remove the /; from the end of the inventory line in /etc/ansible/ansible.cfg:
cat /etc/ansible/ansible.cfg
[defaults]
inventory = /etc/ansible/hosts
Alternatively, you can pass -i /etc/ansible/hosts on the command line to tell Ansible to use this inventory file.
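To verify, something like this should then list both hosts (a quick check, using the paths from above):

# Explicit inventory path:
ansible all -i /etc/ansible/hosts --list-hosts
# Or rely on the inventory setting in ansible.cfg:
ansible all --list-hosts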

Multiple Inventories Not Working - No Hosts Found

I'm using the Ansible EC2 dynamic inventory script to access my hosts in EC2. The hosts I am looking for have a tag named sisyphus_environment with a value of development and another tag named sisyphus_project with a value of reddot, so I need the intersection of these tags.
However, I can't even seem to get filtering on one tag working properly with the EC2 inventory.
I can clearly find the instances in question using the AWS CLI:
$ aws ec2 describe-instances --filters \
    Name=tag:sisyphus_environment,Values=development \
    Name=tag:sisyphus_project,Values=reddot \
    Name=tag:sisyphus_role,Values=nginx | \
    jq '.Reservations[].Instances[] | { InstanceId: .InstanceId }'
{
  "InstanceId": "i-deadbeef"
}
{
  "InstanceId": "i-cafebabe"
}
I'm attempting to do the same thing with Ansible expressed as a static group consisting of dynamic members filtered by these tags.
My ansible.cfg:
[defaults]
retry_files_enabled = false
roles_path = roles:galaxy_roles
inventory = inventory/
[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
My EC2 inventory is directly sourced from upstream along with the ec2.ini.
My inventory directory looks like this:
$ tree inventory/
inventory/
├── ec2.ini
├── ec2.py
└── inventory.ini
0 directories, 3 files
I had to name inventory with an .ini extension, otherwise Ansible was trying to load it as YAML and failing.
My inventory/inventory.ini file looks like this:
[tag_sisyphus_project_reddot]
# blank
[reddot:children]
tag_sisyphus_project_reddot
This was done similarly to Ansible's documentation on the matter. Unfortunately, this simple thing fails:
$ inventory/ec2.py --refresh-cache
...
$ ansible reddot -m command -a 'echo true'
[WARNING]: No hosts matched, nothing to do
If I query by the tag, I get results:
$ ansible tag_sisyphus_project_reddot -m command -a 'echo true'
10.0.0.1 | SUCCESS | rc=0 >>
true
...
Is there a step I'm missing?
My Ansible version:
$ ansible --version
ansible 2.2.0.0
config file = /home/naftuli/Documents/Development/ansible-reddot/ansible.cfg
configured module search path = Default w/o overrides

How to pass Terraform output variables into Ansible as vars_files?

I am provisioning AWS infrastructure using Terraform and want to pass variables such as aws_subnet_id and aws_security_id into an Ansible playbook using a vars_file (I don't know if there is any other way though). How can I do that?
I use Terraform local_file to create an Ansible vars_file. I add a tf_ prefix to the variable names to make it clear that they originate in Terraform:
# Export Terraform variable values to an Ansible var_file
resource "local_file" "tf_ansible_vars_file_new" {
content = <<-DOC
# Ansible vars_file containing variable values from Terraform.
# Generated by Terraform mgmt configuration.
tf_environment: ${var.environment}
tf_gitlab_backup_bucket_name: ${aws_s3_bucket.gitlab_backup.bucket}
DOC
filename = "./tf_ansible_vars_file.yml"
}
Run terraform apply to create Ansible var_file tf_ansible_vars_file.yml containing Terraform variable values:
# Ansible vars_file containing variable values from Terraform.
# Generated by Terraform mgmt configuration.
tf_environment: "mgmt"
tf_gitlab_backup_bucket_name: "project-mgmt-gitlab-backup"
Add tf_ansible_vars_file.yml to your Ansible playbook:
vars_files:
- ../terraform/mgmt/tf_ansible_vars_file.yml
Now, in Ansible the variables defined in this file will contain values from Terraform.
Obviously, this means that you must run Terraform before Ansible. But it won't be so obvious to all your Ansible users. Add assertions to your Ansible playbook to help the user figure out what to do if a tf_ variable is missing:
- name: Check mandatory variables imported from Terraform
  assert:
    that:
      - tf_environment is defined
      - tf_gitlab_backup_bucket_name is defined
    fail_msg: "tf_* variable usually defined in '../terraform/mgmt/tf_ansible_vars_file.yml' is missing"
UPDATE: An earlier version of this answer used a Terraform template. Experience shows that the template file is error prone and adds unnecessary complexity, so I moved the template's contents into the content of the local_file resource.
terraform outputs are an option, or you can just use something like:
provisioner "local-exec" {
command = "ANSIBLE_HOST_KEY_CHECKING=\"False\" ansible-playbook -u ${var.ssh_user} --private-key=\"~/.ssh/id_rsa\" --extra-vars='{"aws_subnet_id": ${aws_terraform_variable_here}, "aws_security_id": ${aws_terraform_variable_here} }' -i '${azurerm_public_ip.pnic.ip_address},' ansible/deploy-with-ansible.yml"
}
Or you can do a sed replacement in a local-exec provisioner to update the vars file (a rough sketch follows below), or you can use Terraform outputs. Your preference.
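For example (the resource names, placeholder tokens, and the ansible/vars.yml path are all assumptions; -i edits in place with GNU sed):

provisioner "local-exec" {
  # Replace placeholder tokens in an existing Ansible vars file with
  # values Terraform knows about.
  command = "sed -i -e 's/__SUBNET_ID__/${aws_subnet.main.id}/' -e 's/__SECURITY_GROUP_ID__/${aws_security_group.main.id}/' ansible/vars.yml"
}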
I highly recommend this script. It works well, is maintained by Cisco, and will give you more flexibility:
https://github.com/CiscoCloud/terraform.py
Since I want to run both terraform plan and ansible --check in a pull request with CI, I've decided to go with terraform output.
Basically, this is how I run Ansible now, and it gets all the outputs from Terraform.
With the following output
// output.tf
output "tf_gh_deployment_status_token" {
value = var.GH_DEPLOYMENT_STATUS_TOKEN
sensitive = true
}
Running a slightly modified terraform output, parsed with jq:
$ terraform output --json | jq 'with_entries(.value |= .value)'
{
  "tf_gh_deployment_status_token": "<some token>"
}
This makes it great to pass via --extra-vars to Ansible:
$ ansible-playbook \
-i ./inventory/production.yaml \
./playbook.yaml \
--extra-vars "$(terraform output --json | jq 'with_entries(.value |= .value)')"
Now I can use {{ tf_gh_deployment_status_token }} anywhere in my playbook.
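For example, a minimal task using it (illustrative only; the destination path is an assumption):

- name: Write the token provided by Terraform to a file
  ansible.builtin.copy:
    content: "{{ tf_gh_deployment_status_token }}"
    dest: /etc/myapp/gh_token   # assumed destination
    mode: "0600"
  no_log: true   # the value is sensitive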
Use Terraform outputs - https://www.terraform.io/intro/getting-started/outputs.html (it is not clear whether you are using them already).
Then, using a command like terraform output ip, you can use those values in your scripts to generate or populate other files, such as inventory files or a vars_file.
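For example (a sketch; the output name and the target paths are assumptions):

# Append a single output value to a static inventory file
# (-raw needs a recent Terraform, 0.14 or later):
terraform output -raw ip >> inventory/terraform_hosts
# Or dump all outputs as JSON for a script or template to consume:
terraform output -json > tf_outputs.json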
Another option is to use Terraform templates to render your files, such as inventory files, from Terraform itself and then use them from Ansible.
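A sketch of that approach, assuming a small template file (templates/inventory.tpl is a made-up name) containing something like a [web] group with a ${web_ip} placeholder:

resource "local_file" "ansible_inventory" {
  # Render an Ansible inventory from a template using values Terraform knows.
  content = templatefile("${path.module}/templates/inventory.tpl", {
    web_ip = aws_eip.main.0.public_ip
  })
  filename = "${path.module}/inventory/terraform_hosts"
}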

How to define private SSH keys for servers in dynamic inventories

I ran into a configuration problem when coding an Ansible playbook for SSH private key files. In static Ansible inventories, I can define combinations of host servers, IP addresses, and related SSH private keys - but I have no idea how to define those with dynamic inventories.
For example:
---
- hosts: tag_Name_server1
  gather_facts: no
  roles:
    - role1
- hosts: tag_Name_server2
  gather_facts: no
  roles:
    - roles2
I use the below command to call that playbook:
ansible-playbook test.yml -i ec2.py --private-key ~/.ssh/SSHKEY.pem
My questions are:
How can I define ~/.ssh/SSHKEY.pem in Ansible files rather than on the command line?
Is there a parameter in playbooks (like gather_facts) to define which private keys should be used for which hosts?
If there is no way to define private keys in files, what should be called on the command line when different keys are used for different hosts in the same inventory?
TL;DR: Specify the key file in a group variable file, since 'tag_Name_server1' is a group.
Note: I'm assuming you're using the EC2 external inventory script. If you're using some other dynamic inventory approach, you might need to tweak this solution.
This is an issue I've been struggling with, on and off, for months, and I've finally found a solution, thanks to Brian Coca's suggestion here. The trick is to use Ansible's group variable mechanisms to automatically pass along the correct SSH key file for the machine you're working with.
The EC2 inventory script automatically sets up various groups that you can use to refer to hosts. You're using this in your playbook: in the first play, you're telling Ansible to apply 'role1' to the entire 'tag_Name_server1' group. We want to direct Ansible to use a specific SSH key for any host in the 'tag_Name_server1' group, which is where group variable files come in.
Assuming that your playbook is located in the 'my-playbooks' directory, create files for each group under the 'group_vars' directory:
my-playbooks
|-- test.yml
+-- group_vars
    |-- tag_Name_server1.yml
    +-- tag_Name_server2.yml
Now, any time you refer to these groups in a playbook, Ansible will check the appropriate files, and load any variables you've defined there.
Within each group var file, we can specify the key file to use for connecting to hosts in the group:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: /path/to/ssh/key/server1.pem
Now, when you run your playbook, it should automatically pick up the right keys!
Using environment vars for portability
I often run playbooks on many different servers (local, remote build server, etc.), so I like to parameterize things. Rather than using a fixed path, I have an environment variable called SSH_KEYDIR that points to the directory where the SSH keys are stored.
In this case, my group vars files look like this, instead:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/server1.pem"
Further Improvements
There are probably a bunch of neat ways this could be improved. For one thing, you still need to manually specify which key to use for each group. Since the EC2 inventory script includes details about the keypair used for each server, there's probably a way to get the key name directly from the script itself. In that case, you could supply the directory the keys are located in (as above) and have it choose the correct keys based on the inventory data.
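For instance, since the EC2 inventory script exposes ec2_key_name as a host variable, a group vars sketch along these lines might avoid hard-coding the key per group (untested, and it assumes your key files are named after the EC2 keypair):

# tag_Name_server1.yml
---
# Build the key path from the keypair name reported by the EC2 inventory script.
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/{{ ec2_key_name }}.pem"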
The best solution I could find for this problem is to specify the private key file in ansible.cfg (I usually keep it in the same folder as the playbook):
[defaults]
inventory=ec2.py
vault_password_file = ~/.vault_pass.txt
host_key_checking = False
private_key_file = /Users/eric/.ssh/secret_key_rsa
Note that this still sets the private key globally for all hosts in the playbook.
Note: you have to specify the full path to the key file - ~user/.ssh/some_key_rsa is silently ignored.
You can simply define the key to use directly when running the command:
# -vvvv: super verbose output incl. SSH details
# -i: the server to target (keep the trailing comma!)
# --private-key: the key to use
# -e: needed if `python` is not available on the target
# --check: dry run
ansible-playbook \
  -vvvv \
  -i "000.000.0.000," \
  --private-key=~/.ssh/id_rsa_ansible \
  -e 'ansible_python_interpreter=/usr/bin/python3' \
  --check \
  deploy.yml
Copy/paste:
ansible-playbook -vvvv --private-key=/Users/you/.ssh/your_key deploy.yml
I'm using the following configuration:
# site.yml
- name: Example play
  hosts: all
  remote_user: ansible
  become: yes
  become_method: sudo
  vars:
    ansible_ssh_private_key_file: "/home/ansible/.ssh/id_rsa"
I had a similar issue and solved it with a patch to ec2.py plus some configuration parameters added to ec2.ini. The patch takes the value of ec2_key_name, prefixes it with ssh_key_path, appends ssh_key_suffix, and writes out ansible_ssh_private_key_file as this value.
The following variables have to be added to ec2.ini in a new 'ssh' section (this is optional if the defaults match your environment):
[ssh]
# Set the path and suffix for the ssh keys
ssh_key_path = ~/.ssh
ssh_key_suffix = .pem
Here is the patch for ec2.py:
204a205,206
> 'ssh_key_path': '~/.ssh',
> 'ssh_key_suffix': '.pem',
422a425,428
> # SSH key setup
> self.ssh_key_path = os.path.expanduser(config.get('ssh', 'ssh_key_path'))
> self.ssh_key_suffix = config.get('ssh', 'ssh_key_suffix')
>
1490a1497
> instance_vars["ansible_ssh_private_key_file"] = os.path.join(self.ssh_key_path, instance_vars["ec2_key_name"] + self.ssh_key_suffix)
