The Terraform module provided by Ansible works well for creating AWS resources with an S3 backend config for the state file, but I am not able to get the terraform plan output using this module.
We want the output to list something like:
Plan: 1 to add, 0 to change, 0 to destroy.
and give details of the resources to be created/destroyed/changed.
I have tried the task below in Ansible, but it does not generate the output as expected.
Below is the Ansible task for creating the plan:
- name: "create file"
shell: "touch {{playbook_dir}}/tfplan && ls -larth ../terraform/{{role_name}} "
- name: "Run terraform project with plan file"
terraform:
state: planned
backend_config:
bucket: "{{bootstrap_prefix}}-{{aws_account_type}}-{{caller_facts.account}}"
region: "{{ bootstrap_aws_region }}"
kms_key_id: "{{ kms_id.stdout }}"
encrypt: true
workspace_key_prefix: "{{ app_parent }}-{{ app_name }}"
key: "terraform.tfstate"
force_init: true
project_path: "../terraform/{{role_name}}"
plan_file: "{{playbook_dir}}/tfplan"
variables:
app_name: "{{ app_name }}"
workspace: "{{ app_env }}"
Output of the above Ansible task:
ok: [localhost] => {
    "changed": false,
    "command": "/usr/local/bin/terraform -lock=true /root/project/ansible/tfplan",
    "invocation": {
        "module_args": {
            "backend_config": {
                "bucket": "XXXXXXXX2440728499",
                "encrypt": true,
                "key": "terraform.tfstate",
                "kms_key_id": "XXXXXXXX",
                "region": "XXXXXXXX",
                "workspace_key_prefix": "XXXXXX"
            },
            "binary_path": null,
            "force_init": true,
            "lock": true,
            "lock_timeout": null,
            "plan_file": "/root/project/ansible/tfplan",
            "project_path": "../terraform/applications",
            "purge_workspace": false,
            "state": "planned",
            "state_file": null,
            "targets": [],
            "variables": {
                "app_name": "application"
            },
            "variables_file": null,
            "workspace": "uat"
        }
    },
    "outputs": {},
    "state": "planned",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "",
    "stdout_lines": [],
    "workspace": "uat"
}
It works fine with state: present (terraform apply), but I want it to work with state: planned (terraform plan).
The current Ansible documentation says: "To just run a terraform plan, use check mode."
In addition to state: planned, run the task in check mode; note that check_mode is a task-level keyword, not a terraform module parameter:
- name: "Run terraform project with plan file"
  terraform:
    state: planned
  check_mode: true
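To actually see the "Plan: X to add, ..." summary, a possible follow-up (a sketch: tf_plan is an arbitrary variable name, and it assumes the module's stdout return value carries the terraform plan output when run this way) is to register the result and print its stdout_lines:
- name: "Run terraform project with plan file"
  terraform:
    # backend_config, variables, etc. as in the question
    state: planned
    force_init: true
    project_path: "../terraform/{{ role_name }}"
    plan_file: "{{ playbook_dir }}/tfplan"
  check_mode: true
  register: tf_plan            # arbitrary name for the registered result

- name: Show the plan summary
  debug:
    var: tf_plan.stdout_lines  # expected to contain lines like "Plan: 1 to add, 0 to change, 0 to destroy."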
I am creating users with the iam module, and I am using the access_key_state: create option.
However, I want my playbook to output the Access Key and the Secret Access Key for each user.
playbook.yml:
---
- name: "Starting the tasks: Creates IAM Policy, group, Role and User"
  hosts: localhost
  connection: local
  gather_facts: False
  vars_files:
    - vars/aws-credentials.yml
  tasks:
    - include: tasks/create-user.yml
      tags: user
    - include: tasks/create-group.yml
      tags: group
tasks/create-user.yml:
---
# Create the IAM users with Console and API access
- name: Create new IAM users with API keys and console access
  iam:
    iam_type: user
    name: "{{ item }}"
    state: present
    password: "{{ lookup('password', 'passwordfile chars=ascii_letters') }}"
    access_key_state: create
    update_password: on_create
  no_log: true
  register: credentials
  loop:
    - johna
    - mariab
    - carlosc

- name: test
  debug:
    msg: "{{ credentials.results }}"
The debug message "{{ credentials.results }}" gives me the Access Key, but not the Secret Access Key:
{
    "ansible_loop_var": "item",
    "changed": true,
    "created_keys": [],
    "failed": false,
    "groups": null,
    "invocation": {
        "module_args": {
            "access_key_ids": null,
            "access_key_state": "create",
            "aws_access_key": null,
            "aws_secret_key": null,
            "debug_botocore_endpoint_logs": false,
            "ec2_url": null,
            "groups": null,
            "iam_type": "user",
            "key_count": 1,
            "name": "carol.v",
            "new_name": null,
            "new_path": null,
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "path": "/",
            "profile": null,
            "region": null,
            "security_token": null,
            "state": "present",
            "trust_policy": null,
            "trust_policy_filepath": null,
            "update_password": "always",
            "validate_certs": true
        }
    },
    "item": "carlosc",
    "keys": {
        "AK_________FV": "Active"
    },
    "user_meta": {
        "access_keys": [
            {
                "access_key_id": "AK_________FV",
                "status": "Active"
            }
        ]
    },
    "user_name": "carlosc"
}
How to get the Secret Access Key for each user?
Update 09 May 2020, for future reference:
Bad news; it appears they are purposefully throwing the secret_access_key in the trash: https://github.com/ansible/ansible/blob/v2.9.7/lib/ansible/modules/cloud/amazon/iam.py#L238-L241
It appears the only way around that is to set key_count: 0 in your iam: task and then use the AWS CLI or a custom Ansible module to make that same iam.create_access_key call yourself and preserve the result:
- name: create access key for {{ item }}
  command: aws iam create-access-key --user-name {{ item }}
  environment:
    AWS_REGION: '{{ the_region_goes_here }}'
    AWS_ACCESS_KEY_ID: '{{ whatever_you_called_your_access_key }}'
    AWS_SECRET_ACCESS_KEY: '{{ your_aws_secret_access_key_name_here }}'
  register: user_keys
  with_items:
    - johna
    - mariab
    - carlosc
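The secret then has to be pulled out of each registered result. A minimal sketch (assuming the AWS CLI's JSON output for create-access-key, which nests AccessKeyId and SecretAccessKey under AccessKey):
- name: Show the freshly created keys (illustration only; treat this output as secret)
  debug:
    msg: "{{ (item.stdout | from_json).AccessKey }}"   # holds AccessKeyId and SecretAccessKey
  loop: "{{ user_keys.results }}"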
Feel free to file an issue, although you'll likely have to file it against the new amazon.aws collection, since that iam.py is no longer present in the devel branch.
You can use community.aws.iam:
- name: Create IAM User with API keys
  community.aws.iam:
    iam_type: user
    name: some_dummy_user
    state: present
    access_key_state: create
  register: new_user

- debug:
    var: new_user
You'll be able to get your access and secret keys at:
new_user.user_meta.access_keys[0].access_key_id
new_user.user_meta.access_keys[0].secret_access_key
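For example, a small sketch that prints both values from the registered result, using the same paths as above (only do this while testing, since it exposes the secret):
- name: Show the generated credentials (testing only; this prints secrets)
  debug:
    msg:
      access_key_id: "{{ new_user.user_meta.access_keys[0].access_key_id }}"
      secret_access_key: "{{ new_user.user_meta.access_keys[0].secret_access_key }}"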
I have mine getting loaded into Secrets Manager and will eventually have them rotated with a lambda function.
I'm getting the weird error below when trying to add a disk to RHVM. I have checked the documentation and all parameters seem legit to me. I need an extra pair of eyes to validate this case. Thank you in advance.
The error message is as follows:
"msg": "Unsupported parameters for (ovirt_disk) module: activate Supported parameters include: auth, bootable, description, download_image_path, fetch_nested, force, format, id, image_provider, interface, logical_unit, name, nested_attributes, openstack_volume_type, poll_interval, profile, quota_id, shareable, size, sparse, sparsify, state, storage_domain, storage_domains, timeout, upload_image_path, vm_id, vm_name, wait"
}
Parameters in the role are as follows:
"module_args": {
"vm_name": "Jxyxyxyxy01",
"activate": true,
"storage_domain": "Data-xxx-Txxx",
"description": "Created using Jira ticket CR-329",
"format": "cow",
"auth": {
"timeout": 0,
"url": "https://xxxxxx.com/ovirt-engine/api",
"insecure": true,
"kerberos": false,
"compress": true,
"headers": null,
"token": "xxcddsvsdvdsvsdvdEFl0910KES84qL8Ff5NReA",
"ca_file": null
},
"state": "present",
"sparse": true,
"interface": "virtio_scsi",
"wait": true,
"size": "20GiB",
"name": "Jxyxyxyxy01_123"
}
The playbook is as follows:
- name: Create New Disk size of {{ disk_size }} on {{ hostname }} using storage domain {{ vm_storage_domain }}
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    description: "Created using Jira ticket {{ issueKey }}"
    storage_domain: "{{ vm_storage_domain }}"
    name: "{{ hostname }}_123"        # name of the disk
    vm_name: "{{ hostname }}"         # name of the virtual machine
    interface: "virtio_scsi"
    size: "{{ disk_size }}GiB"
    sparse: yes
    format: cow
    activate: yes
    wait: yes
    state: present
  register: vm_disk_results
The activate parameter was added in Ansible 2.8.
Upgrade your Ansible installation or drop that parameter.
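If the playbook has to keep working on control nodes older than 2.8, one possible workaround (a sketch using the built-in ansible_version variable and the omit placeholder) is to pass activate only when the running Ansible supports it:
- name: Create New Disk on {{ hostname }}
  ovirt_disk:
    auth: "{{ ovirt_auth }}"
    storage_domain: "{{ vm_storage_domain }}"
    name: "{{ hostname }}_123"
    vm_name: "{{ hostname }}"
    interface: "virtio_scsi"
    size: "{{ disk_size }}GiB"
    sparse: yes
    format: cow
    # only send "activate" on Ansible >= 2.8, otherwise omit the parameter entirely
    # (the "version" test itself needs Ansible >= 2.5; older releases call it version_compare)
    activate: "{{ true if ansible_version.full is version('2.8', '>=') else omit }}"
    state: present
  register: vm_disk_results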
I am using the postgresql_db module that is part of Ansible, for example with something like:
- name: Database
  postgresql_db:
    name: "{{ vars[item + '_database_name_version'] }}"
    login_host: "{{ vars[item + '_database_host'] }}"
    login_password: "{{ vars[item + '_database_admin_password'] }}"
    login_user: "{{ vars[item + '_database_admin_username'] }}"
    port: "{{ vars[item + '_database_port'] }}"
    state: restore
    target: "{{ backup_restore[item]['db_tar'] }}"
  when: backup_restore[item]['db_tar'] is defined
  with_items: '{{ backup_restore }}'
  register: db_restore
When I debug the output of db_restore I see:
TASK [backup : db_restore] *****************************************************
ok: [myapp] => {
    "db_restore": {
        "changed": true,
        "msg": "All items completed",
        "results": [
            {
                "ansible_loop_var": "item",
                "changed": true,
                "cmd": "cmd: ****",
                "failed": false,
                "invocation": {
                    "module_args": {
                        "ca_cert": null,
                        "conn_limit": "",
                        "db": "myapp_0_1_0",
                        "encoding": "",
                        "lc_collate": "",
                        "lc_ctype": "",
                        "login_host": "1.1.1.2",
                        "login_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                        "login_unix_socket": "",
                        "login_user": "ansible",
                        "maintenance_db": "postgres",
                        "name": "myapp_0_1_0",
                        "owner": "",
                        "port": 5432,
                        "session_role": null,
                        "ssl_mode": "prefer",
                        "state": "restore",
                        "target": "/backup/tmp/myapp-myapp/myapp_daily/databases/PostgreSQL.sql.gz",
                        "target_opts": "",
                        "template": ""
                    }
                },
                "item": "myapp",
                "msg": "",
                "rc": 0,
                "stderr": "ERROR: relation \"myapp_table\" already exists\n",
                "stderr_lines": [
                    "ERROR: relation \"myapp_table\" already exists"
                ]
            }
        ]
    }
}
Although there is an error during execution of this module, as visible in stderr, the Ansible postgresql_db module still returns "failed": false. So it looks like it is ignoring any errors that occur while running the commands to restore the database.
Now I want to add a task to check db_restore for stderr attribute and if present, raise an error so that the user is made aware of the problems.
How can I raise an error? Is there an error module?
Yes, there is a module to raise errors; it is called fail (https://docs.ansible.com/ansible/latest/modules/fail_module.html).
# Example playbook using fail and when together
- fail:
    msg: The system may not be provisioned according to the CMDB status.
  when: cmdb_status != "to-be-staged"
Another possible way is to use failed_when.
- name: Fail task when the command error output prints FAILED
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"
I would recommend reading the Ansible page on error handling (https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html).
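Applied to the registered db_restore from the question, a sketch along these lines loops over the per-item results and fails as soon as any of them produced output on stderr (treating any stderr output as an error is an assumption; tighten the condition as needed):
- name: Fail if any database restore reported errors
  fail:
    msg: "Restore of {{ item.item }} reported errors: {{ item.stderr }}"
  when: item.stderr is defined and item.stderr | length > 0
  loop: "{{ db_restore.results }}"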
I strongly agree with @Stefan Wegener's answer, but want to add another possibility.
I'm curious whether your problem only occurs because of the loop you wrapped around your task.
Another possible solution would be to add the any_errors_fatal: true clause on play level like described in the Ansible documentation:
- hosts: somehosts
  any_errors_fatal: true
  roles:
    - myrole
See https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html#aborting-the-play
for reference.
I want to provision more than one EC2 instance with Ansible, but don't really know how to correctly assign a different Name tag to each instance.
I tried with:
---
....
instance_tags:
  Name: tag-{{ item }}
register: ec2
with_items:
  - 1
  - 2
But then when I want to check if ssh is open:
- name: Check ssh port to be open
  wait_for:
    host: "{{ item.public_ip }}"
    port: 22
    delay: 60
    timeout: 240
  with_items: "{{ ec2.instances }}"
I receive this error:
'dict object' has no attribute 'instances'
Is there a possibility to resolve this issue?
I use Ansible version 2.4.
Please see how output is registered when running a module in a loop: Using register with a loop.
ec2 in your example is a list, not a dictionary, so ec2.instances does not exist.
Use the debug module to display the actual variable values, pay attention to [ ] and { }, and fix your code appropriately.
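Because the provisioning task ran in a loop, each iteration's result sits in ec2.results and the created instances live one level below that. A possible sketch for the SSH check (with_subelements walks the instances list inside each per-item result; this assumes the ec2 module's usual return format):
- name: Check ssh port to be open on every created instance
  wait_for:
    host: "{{ item.1.public_ip }}"
    port: 22
    delay: 60
    timeout: 240
  with_subelements:
    - "{{ ec2.results }}"
    - instances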
If you're using the ec2 module from Ansible, then you will see that ec2.instances does exist, but it's an array of dictionaries, with each entry being an instance that you provisioned, so you have to access the members from them.
ok: [localhost] => {
    "ec2": {
        "changed": true,
        "failed": false,
        "instance_ids": [
            "i-020dfd4ea1872f"
        ],
        "instances": [
            {
                "ami_launch_index": "0",
                "architecture": "x86_64",
                "block_device_mapping": {
                    "/dev/xvda": {
                        "delete_on_termination": true,
                        "status": "attached",
                        "volume_id": "vol-00415be2e41d58564"
                    }
                },
                "dns_name": "",
                "ebs_optimized": false,
                "groups": {
                    "sg-fcef86": "default"
                },
                "hypervisor": "xen",
                "id": "i-020dfd4ea182f",
                "image_id": "ami-e6899e",
                "instance_type": "t2.micro",
                "kernel": null,
                "key_name": "st",
                "launch_time": "2017-10-23T21:28:06.000Z",
                "placement": "us-west-2b",
                "private_dns_name": "ip-10-",
                "private_ip": "10.212",
                "public_dns_name": "",
                "public_ip": null,
                "ramdisk": null,
                "region": "us-west-2",
                "root_device_name": "/dev/xvda",
                "root_device_type": "ebs",
                "state": "running",
                "state_code": 16,
                "tags": {},
                "tenancy": "default",
                "virtualization_type": "hvm"
            }
        ],
        "tagged_instances": []
    }
}
Your with_items would work:
instance_tags:
  Name: "{{ item }}"
with_items:
  - Machine1
  - Machine2
But going back to why your ec2.instances doesn't exist: it's because of what I responded to up top.
Make sure to do a:
- debug: var=ec2
to view your variable.
Here is my playbook
- name: Add multiple users
  user:
    name: "{{ item[0].name }}"
    comment: "{{ item[0].comment }}"
    uid: "{{ item[0].uid }}"
    groups: "{{ item[0].groups }}"
    shell: /bin/bash
  with_nested:
    - "{{ name }}"
    - "{{ comment }}"
    - "{{ uid }}"
    - "{{ groups }}"
Here is my vars file
---
name:
  - test1
  - test2
comment:
  - "comment1"
  - "comment2"
uid:
  - 150
  - 151
groups: "sudo, admin"
I'm not sure what is causing this; any ideas? I believe I may need to use with_subelements instead of with_nested. Am I on the right track there?
UPDATE:
I changed my code but am now experiencing the following. Updated code and error message:
- name: Add new group if it doesn't exist already
  group:
    name: "{{ group }}"
  when: group is defined

- name: Add multiple users
  user:
    name: "{{ item.0 }}"
    comment: "{{ item.1 }}"
    uid: "{{ item.2 }}"
    group: "{{ group }}"
    groups: "{{ groups }}"
    append: yes
  with_together:
    - "{{ name }}"
    - "{{ comment }}"
    - "{{ uid }}"
    - "{{ group }}"
And variable file:
name:
  - test1
  - test2
comment:
  - "comment1"
  - "comment2"
uid:
  - 150
  - 151
group: sudo
groups:
  - admin
  - test
However, now I am receiving this error.
failed: [127.0.0.1] => (item=[u'test1', u'comment1', 150, u'sudo']) => {"failed": true, "invocation": {"module_args": {"append": true, "comment": "comment1", "createhome": true, "expires": null, "force": false, "generate_ssh_key": null, "group": "sudo", "groups": "{'ungrouped': ['127.0.0.1'], 'all': ['127.0.0.1']}", "home": null, "login_class": null, "move_home": false, "name": "test1", "non_unique": false, "password": null, "remove": false, "shell": null, "skeleton": null, "ssh_key_bits": "2048", "ssh_key_comment": "ansible-generated on ubuntu-512mb-sfo1-01", "ssh_key_file": null, "ssh_key_passphrase": null, "ssh_key_type": "rsa", "state": "present", "system": false, "uid": "150", "update_password": "always"}, "module_name": "user"}, "item": ["test1", "comment1", 150, "sudo"], "msg": "Group 'all': ['127.0.0.1']} does not exist"}
failed: [127.0.0.1] => (item=[u'test2', u'comment2', 151, None]) => {"failed": true, "invocation": {"module_args": {"append": true, "comment": "comment2", "createhome": true, "expires": null, "force": false, "generate_ssh_key": null, "group": "sudo", "groups": "{'ungrouped': ['127.0.0.1'], 'all': ['127.0.0.1']}", "home": null, "login_class": null, "move_home": false, "name": "test2", "non_unique": false, "password": null, "remove": false, "shell": null, "skeleton": null, "ssh_key_bits": "2048", "ssh_key_comment": "ansible-generated on ubuntu-512mb-sfo1-01", "ssh_key_file": null, "ssh_key_passphrase": null, "ssh_key_type": "rsa", "state": "present", "system": false, "uid": "151", "update_password": "always"}, "module_name": "user"}, "item": ["test2", "comment2", 151, null], "msg": "Group 'all': ['127.0.0.1']} does not exist"}
The problem is conflicting variable names. groups is a reserved variable that holds the groups from the inventory, and all is an automatically generated group which holds all the hosts of your inventory.
From the docs:
Even if you didn’t define them yourself, Ansible provides a few variables for you automatically. The most important of these are hostvars, group_names, and groups. Users should not use these names themselves as they are reserved. environment is also reserved.
and
groups is a list of all the groups (and hosts) in the inventory. This can be used to enumerate all hosts within a group.
Simply rename your variable and it should work. In general, it's a good idea to prefix all variables of a role with the role name. This becomes more important if you use 3rd-party roles, e.g. from Ansible Galaxy, just to avoid conflicts. So instead of groups you could use myrole_groups and be quite sure there will never be conflicts.
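For instance, a minimal sketch of the renamed variables and the matching task (the user_ prefix and these particular names are just examples; any non-reserved names work):
# vars file: the reserved name "groups" is replaced by user_extra_groups
user_names:
  - test1
  - test2
user_comments:
  - "comment1"
  - "comment2"
user_uids:
  - 150
  - 151
user_primary_group: sudo
user_extra_groups:
  - admin
  - test

# task
- name: Add multiple users
  user:
    name: "{{ item.0 }}"
    comment: "{{ item.1 }}"
    uid: "{{ item.2 }}"
    group: "{{ user_primary_group }}"
    groups: "{{ user_extra_groups }}"
    append: yes
  with_together:
    - "{{ user_names }}"
    - "{{ user_comments }}"
    - "{{ user_uids }}"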