Disable ec2 instance termination protection with Ansible - amazon-ec2

I'm currently enabling termination_protection for all instances created with Ansible to prevent accidental termination from the console etc. Now I want to be able to terminate specific instances with Ansible but I can't figure out how to disable termination protection on them.
This is what I thought would do the trick:
- name: Disable termination protection
  ec2:
    instance_ids: "{{ instance_ids }}"
    region: "{{ aws_region }}"
    termination_protection: no
However, I get this error message when running it:
fatal: [localhost]: FAILED! => {
    "changed": false,
    "failed": true,
    "msg": "image parameter is required for new instance"
}
It looks like Ansible is interpreting my script as an instance creation request.
Is there a way to change termination protection with another module? The only other way I can think of is to use the aws cli through a shell task in Ansible, but that is a bit hacky.
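For reference, that shell-based fallback would look roughly like this (a sketch only; it assumes the aws cli is installed and credentialed on the control host, and uses the CLI's --no-disable-api-termination flag to turn protection off):

- name: Disable termination protection via the aws cli (fallback sketch)
  command: >
    aws ec2 modify-instance-attribute
    --region {{ aws_region }}
    --instance-id {{ item }}
    --no-disable-api-termination
  delegate_to: localhost
  with_items: "{{ instance_ids }}"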

Let's take a look at the source code.
The running and stopped states call startstop_instances().
The restarted state calls restart_instances().
Both of these functions honor the source_dest_check and termination_protection attribute values.
So you can call:
- ec2:
    instance_ids: "{{ instance_ids }}"
    state: restarted
    region: "{{ aws_region }}"
    termination_protection: no
if you don't mind your servers being restarted.
Or query the current states with ec2_remote_facts and call the ec2 module with those states as a parameter – this will change termination_protection but keep the instances' states untouched.
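A rough sketch of that second approach (untested; it assumes ec2_remote_facts returns an instances list whose items carry id and state fields, which may vary by Ansible version):

- name: look up the current state of each instance
  ec2_remote_facts:
    region: "{{ aws_region }}"
    filters:
      instance-id: "{{ instance_ids }}"
  register: remote_facts

- name: disable termination protection without changing instance state
  ec2:
    instance_ids:
      - "{{ item.id }}"
    region: "{{ aws_region }}"
    state: "{{ item.state }}"   # pass the current state back, so nothing is started or stopped
    termination_protection: no
  with_items: "{{ remote_facts.instances }}"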

Related

How can I disable Cross region replication in oracle cloud using ansible playbook?

I came across a situation where, when deleting an image with CRR enabled, you must first disable CRR before you can successfully delete the image. I cannot come up with an Ansible role or task to do this.
So far I've come up with this:
- name: Get information of all boot volume attachments in a compartment and availability domain
  oci_boot_volume_attachment_facts:
    compartment_id: "{{ COMPARTMENT_ID }}"
    availability_domain: "{{ Availability_Domain }}"
    instance_id: "{{ matching_id_instance }}"
  register: boot_volume_data

- name: Update boot_volume
  oci_blockstorage_boot_volume:
    # required
    boot_volume_id: "{{ item.boot_volume_id }}"
    boot_volume_replicas: []
      - # required
        availability_domain: "{{ Availability_Domain }}"
  with_items: "{{ boot_volume_data.boot_volume_attachments }}"
  when:
    - item.instance_id == "{{ matching_id_instance }}"
I'm unable to test it because there is difficulty in setting up connectivity from my Ubuntu machine to Oracle Cloud, as I don't have the required permission to add a public key in Oracle Cloud for my user. For connectivity I followed this: https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
I also came across a cli command to do so in oracle: https://docs.oracle.com/en-us/iaas/Content/Block/Concepts/volumereplication.htm#To_disable_boot_replication
So I would like these Ansible tasks to be verified and corrected for any errors, or to learn a particular way in Ansible to disable CRR in Oracle Cloud.
Error message I'm getting at "Update boot_volume" task:
"msg": "Updating resource failed with exception: Parameters are invalid or incorrectly formatted. Update volume requires at least one parameter to update."}
There is no need to pass the availability_domain when disabling CRR. You can refer to the Ansible task below.
- name: Disable CRR for boot_volume
  oci_blockstorage_boot_volume:
    boot_volume_id: "{{ boot_volume_id }}"
    boot_volume_replicas: []
  register: result
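If you want to drive this from the attachment facts gathered in the question, a sketch of the loop might look like the following (untested; it assumes the registered boot_volume_data.boot_volume_attachments items expose boot_volume_id and instance_id, as in the question):

- name: Disable CRR for every boot volume attached to the matching instance
  oci_blockstorage_boot_volume:
    boot_volume_id: "{{ item.boot_volume_id }}"
    boot_volume_replicas: []   # an empty list removes the cross-region replicas
  with_items: "{{ boot_volume_data.boot_volume_attachments }}"
  when: item.instance_id == matching_id_instance   # compare variables directly, no Jinja braces inside when
  register: crr_results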

How to run plays using Ansible AWX on all hosts in a group using localhost

I'm trying to create a playbook that creates EC2 snapshots of some AWS Windows servers. I'm having a lot of trouble understanding how to get the correct directives in place to make sure things run where they should. What I need to do is:
run the AWS commands locally, i.e. on the AWX host (as this is preferable to having to configure credentials on every server)
run the commands against each host in the inventory
I've done this in the past with a different group of Linux servers with no issue. But the fact that I'm having these issues makes me think that's not working as I think it is (I'm pretty new to all things Ansible/AWX).
The first step is I need to identify instances that are usually turned off and turn them on, then to take snapshots, then to turn them off again if they are usually turned off. So this is my main.yml:
---
- name: start the instance if the default state is stopped
  import_playbook: start-instance.yml
  when: default_state is defined and default_state == 'stopped'

- name: run the snapshot script
  import_playbook: instance-snapshot.yml

- name: stop the instance if the default state is stopped
  import_playbook: stop-instance.yml
  when: default_state is defined and default_state == 'stopped'
And this is start-instance.yml
---
- name: make sure instances are running
  hosts: all
  gather_facts: false
  connection: local
  tasks:
    - name: start the instances
      ec2:
        instance_id: "{{ instance_id }}"
        region: "{{ aws_region }}"
        state: running
        wait: true
      register: ec2

    - name: pause for 120 seconds to allow the instance to start
      pause: seconds=120
When I run this, I get the following error:
fatal: [myhost,mydomain.com]: UNREACHABLE! => {
    "changed": false,
    "msg": "ssl: HTTPSConnectionPool(host='myhost,mydomain.com', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f99045e6bd0>, 'Connection to myhost,mydomain.com timed out. (connect timeout=30)'))",
    "unreachable": true
}
I've learnt enough to know that this means it is, indeed, trying to connect to each Windows host, which is not what I want. However, I thought connection: local resolved this, as it seemed to with another template I have which uses an identical playbook.
If I change start-instance.yml to instead say "connection: localhost", then the play runs, but it skips the steps because it determines that no hosts meet the condition (i.e. default_state is defined and default_state == 'stopped'). This tells me the play is being run from localhost, but it is also being run against localhost instead of against the list of hosts in the AWX inventory.
My hosts have variables defined in AWX such as instance ID, region and default state. I know that in normal Ansible use we would have the instance IDs in the playbook, and that's how Ansible would know which AWS instances to start up, but in this case I need it to get that information from AWX.
Is there any way to do this, i.e. run the tasks (start the instances, pause for 120 seconds) against all hosts in my AWX inventory, using localhost?
In the end I used delegate_to for this. I changed the code to this:
---
- name: make sure instances are running
  hosts: all
  gather_facts: false
  tasks:
    - name: start the instances
      delegate_to: localhost
      ec2:
        instance_id: "{{ instance_id }}"
        region: "{{ aws_region }}"
        state: running
        wait: true
      register: ec2

    - name: pause for 120 seconds to allow the instance to start
      pause: seconds=120
And AWX correctly used localhost to run the tasks I added the delegation to.
It is worth noting that I got stuck for a while before realising my template needed two sets of credentials: one IAM user with the correct permissions to run the AWS commands, and one set of machine credentials for the Windows instances.
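The snapshot play (instance-snapshot.yml) can follow the same delegation pattern. A minimal sketch, assuming the classic ec2_snapshot module and the per-host instance_id / aws_region variables defined in AWX; the device name is a placeholder and needs to match the actual root volume:

---
- name: take EC2 snapshots of all hosts in the inventory
  hosts: all
  gather_facts: false
  tasks:
    - name: snapshot the root volume
      delegate_to: localhost   # run the AWS API call on the AWX host
      ec2_snapshot:
        region: "{{ aws_region }}"
        instance_id: "{{ instance_id }}"
        device_name: /dev/sda1   # placeholder; adjust to the instance's root device
        description: "AWX snapshot of {{ inventory_hostname }}"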

Resize an EC2 instance type with Ansible

I want to resize an EC2 instance type from Ansible.
This is my code:
- name: resize the instance
  ec2:
    aws_access_key: "{{ aws_access_key_var }}"
    aws_secret_key: "{{ aws_secret_key_var }}"
    region: "{{ region }}"
    instance_ids:
      - "{{ instance_id }}"
    instance_type: "{\"Value\": \"t2.small\"}"
    wait: True
  register: ec2_result_file
But I get this error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "image parameter is required for new instance"}
When I try with the command line, all is good:
aws ec2 modify-instance-attribute --region reg --instance-id i-xx --instance-type "{\"Value\": \"t2.small\"}"
Regards,
How to arrive at the solution:
Ansible tells you it wants to create "a new instance", but you provided the ID of an existing instance.
Go to the docs for the ec2 module and check the argument in which you provided the ID of the current instance:
instance_ids: list of instance ids, currently used for states: absent, running, stopped
Check what state you specified – you did not, so it's the default.
Check in the same docs what the default for the state argument is: it is present.
present is not listed in the instance_ids description, so instance_ids is completely ignored.
Ansible thinks you really wanted to create a new instance.
Solution:
Add state: running to the ec2 module arguments.
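Applied to the task in the question, that looks roughly like the following sketch. Note that the ec2 module takes a plain type name for instance_type rather than the JSON document the CLI expects; whether an already running instance is resized in place may still depend on your Ansible version, so treat this as a starting point:

- name: resize the instance
  ec2:
    aws_access_key: "{{ aws_access_key_var }}"
    aws_secret_key: "{{ aws_secret_key_var }}"
    region: "{{ region }}"
    instance_ids:
      - "{{ instance_id }}"
    instance_type: t2.small   # plain type name, not the CLI's {"Value": ...} form
    state: running            # tells the module to act on the existing instances
    wait: True
  register: ec2_result_file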

How to change an existing AWS VPC resource with Ansible?

I have just created a vpc on AWS with ansible like so:
- name: Ensure VPC is present
  ec2_vpc:
    state: present
    region: "{{ ec2_region }}"
    cidr_block: "{{ ec2_subnet }}"
    subnets:
      - cidr: "{{ ec2_subnet }}"
    route_tables:
      - subnets:
          - "{{ ec2_subnet }}"
        routes:
          - dest: 0.0.0.0/0
            gw: igw
    internet_gateway: yes # we want our instances to connect to the internet
    wait: yes # wait for the VPC to be in state 'available' before returning
    resource_tags: { Name: "my_project", environment: "production", tier: "DB" } # we tag this VPC so it will be easier to find later
Note that this task is called from another playbook that takes care of filling the variables.
Also note that I am aware the ec2_vpc module is deprecated and I should update this piece of code soon.
But I think those points are not relevant to the question.
You can see in the resource_tags that I have a name for the project. When I first ran this playbook, I did not have a name.
The VPC got created successfully the first time, but then I wanted to add a name to it. So I tried adding it in the play, without knowing exactly how Ansible would know I wanted to update the existing VPC.
Indeed, Ansible created a new VPC and did not update the existing one.
So the question is: how do I update the already existing VPC instead of creating a new resource?
After a conversation on the Ansible IRC (can't remember the usernames, sorry) it appears this specific module is not good at updating resources.
It looks like the answer lies in the newer ec2_vpc_net module, specifically with the multi_ok option, which by default won't duplicate VPCs that have the same name and CIDR block.
docs: http://docs.ansible.com/ansible/ec2_vpc_net_module.html#ec2-vpc-net
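A minimal sketch of the equivalent ec2_vpc_net task, reusing the variables from the question (in the newer module family the subnets, route tables and internet gateway are managed by separate modules such as ec2_vpc_subnet, ec2_vpc_route_table and ec2_vpc_igw):

- name: Ensure VPC is present and updated in place on later runs
  ec2_vpc_net:
    name: my_project
    cidr_block: "{{ ec2_subnet }}"
    region: "{{ ec2_region }}"
    state: present
    tags: { environment: "production", tier: "DB" }
    # multi_ok defaults to no, so a VPC with the same name and CIDR is updated rather than duplicated
  register: vpc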

Cannot access machine after creating it through the ec2 module within the same script

I have problems with my playbook, which should create new EC2 instances through the built-in module and then connect to them to set up some default stuff.
I went through a lot of tutorials/posts, but none of them mentioned the same problem, therefore I'm asking here.
Everything in terms of creation goes well, but once the instances are created, and I have successfully waited for SSH to come up, I get an error which says the machine is unreachable.
UNREACHABLE! => {"changed": false, "msg": "ERROR! SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
I tried to connect manually (from the terminal, to the same host) and I was successful (while the playbook was waiting for the connection). I also tried to increase the timeout generally in ansible.cfg. I verified that the given hostname is valid (and it is) and also tried the public IP instead of the public DNS, but nothing helps.
Basically, my playbook looks like this:
---
- name: create ec2 instances
  hosts: local
  connection: local
  gather_facts: False
  vars:
    machines:
      - { type: "t2.micro", instance_tags: { Name: "machine1", group: "some_group" }, security_group: ["SSH"] }
  tasks:
    - name: launch new ec2 instances
      local_action: ec2
        group={{ item.security_group }}
        instance_type={{ item.type }}
        image=...
        wait=true
        region=...
        keypair=...
        count=1
        instance_tags=...
      with_items: machines
      register: ec2

    - name: wait for SSH to come up
      local_action: wait_for host={{ item.instances.0.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.results

    - name: add host into launched group
      add_host: name={{ item.instances.0.public_ip }} group=launched
      with_items: ec2.results

- name: with the newly provisioned EC2 node configure basic stuff
  hosts: launched
  sudo: yes
  remote_user: ubuntu
  gather_facts: True
  roles:
    - common
Note: in many tutorials the results from creating EC2 instances are accessed in a different way, but that's probably a matter for a different question.
Thanks
Solved:
I don't know how, but it suddenly started to work. No clue. In case I find some new info, I will update this question.
A couple of points that may help:
I'm guessing it's a version difference, but I've never seen a 'results' key in the registered 'ec2' variable. In any case, I usually use 'tagged_instances'. This ensures that even if the play didn't create an instance (i.e. because a matching instance already existed from a previous run-through), the variable will still return instance data you can use to add a new host to the inventory.
Try adding search_regex: "OpenSSH" to your wait_for task to ensure that it's not trying to run before the SSH daemon is completely up.
The modified plays would look like this:
- name: wait for SSH to come up
  local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started search_regex="OpenSSH"
  with_items: ec2.tagged_instances

- name: add host into launched group
  add_host: name={{ item.public_ip }} group=launched
  with_items: ec2.tagged_instances
You also, of course, want to make sure that Ansible knows to use the specified key when SSHing to the remote host, either by adding ansible_ssh_private_key_file to the inventory entry or by specifying --private-key=... on the command line.
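For example, the key can be attached at the moment the host is added to the in-memory inventory (a sketch; the key path is a placeholder):

- name: add host into launched group with the right SSH key
  add_host:
    name: "{{ item.public_ip }}"
    groups: launched
    ansible_ssh_private_key_file: /path/to/keypair.pem   # placeholder; point this at the keypair used above
  with_items: ec2.tagged_instances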
