Attach an existing boot disk to a Google Cloud instance with Ansible

I'm creating an instance in Google Cloud with Ansible, but when I try to attach an existing disk to the new Compute Engine instance, I can't attach it or add it to the instance.
- name: Launch instances
  gce:
    instance_names: mongo
    machine_type: "n1-standard-1"
    image: "debian-9"
    service_account_email: "xxxx#xxxx.iam.gserviceaccount.com"
    credentials_file: "gcp-credentials.json"
    project_id: "learning"
    disk_size: 10
    disk_auto_delete: false
    preemptible: true
    tags: "mongo-server"
  register: gce

- name: Wait for SSH for instances
  wait_for:
    delay: 1
    host: "{{ item.public_ip }}"
    port: 22
    state: started
    timeout: 30
  with_items: "{{ gce.instance_data }}"
The error I have is:
The error was: libcloud.common.google.ResourceExistsError: {'domain': 'global', 'message': "The resource 'projects/xxx-xxx/zones/us-central1-a/disks/mongo' already exists", 'reason': 'alreadyExists'}
Is there any way to configure this with Ansible? Right now I'm doing it with external scripts.

Existing disks can be provided as a list under the 'disks' attribute; the first entry needs to be the boot disk.
https://docs.ansible.com/ansible/2.6/modules/gce_module.html
- gce:
    instance_names: my-test-instance
    zone: us-central1-a
    machine_type: n1-standard-1
    state: present
    metadata: '{"db":"postgres", "group":"qa", "id":500}'
    tags:
      - http-server
      - my-other-tag
    disks:
      - name: disk-2
        mode: READ_WRITE
      - name: disk-3
        mode: READ_ONLY
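Applied to the playbook in the question, a minimal sketch could look like the following. It assumes the disk named mongo already exists in the target zone and should serve as the boot disk, so image and disk_size are dropped here:

- name: Launch instance with an existing boot disk
  gce:
    instance_names: mongo
    machine_type: "n1-standard-1"
    service_account_email: "xxxx#xxxx.iam.gserviceaccount.com"
    credentials_file: "gcp-credentials.json"
    project_id: "learning"
    zone: us-central1-a
    # First entry in 'disks' is attached as the boot disk; it must already exist.
    disks:
      - name: mongo
        mode: READ_WRITE
    disk_auto_delete: false
    preemptible: true
    tags: "mongo-server"
  register: gce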

Related

Launch an instance in AWS and add it to a hosts file in Ansible (not dynamic inventory)

I have a lot of inventories in Ansible, and I am not looking for a dynamic inventory solution.
When I create an instance in AWS from Ansible, I need to run some tasks on that new server, but I don't know the best way to do it.
tasks:
  - ec2:
      aws_secret_key: "{{ ec2_secret_key }}"
      aws_access_key: "{{ ec2_access_key }}"
      region: us-west-2
      key_name: xxxxxxxxxx
      instance_type: t2.medium
      image: ami-xxxxxxxxxxxxxxxxxx
      wait: yes
      wait_timeout: 500
      volumes:
        - device_name: /dev/xvda
          volume_type: gp3
          volume_size: 20
          delete_on_termination: yes
      vpc_subnet_id: subnet-xxxxxxxxxxxx
      assign_public_ip: no
      instance_tags:
        Name: new-instances
      count: 1
  - name: Find the IP of the newly created instance
    ec2_instance_facts:
      filters:
        "tag:Name": new-instances
    register: ec2_instance_info

  - set_fact:
      msg: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }}"

  - debug: var=msg
I can currently display the IP of the new instance, but I need to create a new hosts file entry with that IP, since I have to run several tasks on the instance after creating it.
Any help doing this?
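One way to handle this without a dynamic inventory is the add_host module, which puts the new IP into the in-memory inventory so a later play in the same run can target it. A rough sketch building on the registered ec2_instance_info above (the group name new_instances and the inventory file path are made up for illustration):

- name: Add the new instance to the in-memory inventory
  add_host:
    name: "{{ item.private_ip_address }}"
    groups: new_instances
  with_items: "{{ ec2_instance_info.instances }}"

- name: Optionally persist the IP to a static inventory file as well
  lineinfile:
    path: ./inventory/new_instances
    line: "{{ item.private_ip_address }}"
    create: yes
  with_items: "{{ ec2_instance_info.instances }}"
  delegate_to: localhost

A later play with hosts: new_instances can then run the remaining tasks against the new server in the same run.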

Specifying a subnetwork for Ansible google.cloud.compute_instance

I have tried every combination I can conceive of to deploy a Google Compute instance into a particular subnet (subnetX) in a network (networkY).
- name: create an instance
  google.cloud.gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'true'
        boot: 'true'
        source: "{{ disk }}"
      - auto_delete: 'true'
        interface: NVME
        type: SCRATCH
        initialize_params:
          disk_type: local-ssd
    labels:
      environment: production
    network_interfaces: # <<< does not work. API request is made without a network_interface
      - network:
          selfLink: "https://blah/blah/blah/networkY"
        subnetwork:
          selfLink: "https://blah/blah/blah/subnetworkX"
    zone: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
When using a subnetwork, you should not specify a network.
To be more precise, this is the workaround for this problem:
---
- name: create a network
  gcp_compute_network:
    name: ansible-network
    auto_create_subnetworks: yes
    project: "{{ lookup('env','GCP_PROJECT') }}"
    state: present
  register: network

- name: Get Network URL
  set_fact:
    network_url: "{{ network | json_query(jmesquery) }}"
  vars:
    jmesquery: "{selfLink: selfLink}"

- name: create a firewall
  gcp_compute_firewall:
    name: ansible-firewall
    network: "{{ network_url }}"
    allowed:
      - ip_protocol: tcp
        ports: ['80', '22']
    target_tags:
      - apache-http-server
    source_ranges: ['0.0.0.0/0']
    project: "{{ lookup('env','GCP_PROJECT') }}"
    state: present
  register: firewall
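With the network in place, the instance itself only needs the subnetwork reference in network_interfaces. A hedged sketch reusing the placeholders from the question, with no network: key alongside subnetwork::

- name: create an instance in a specific subnetwork
  google.cloud.gcp_compute_instance:
    name: test-instance
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'true'
        boot: 'true'
        source: "{{ disk }}"
    network_interfaces:
      # Only the subnetwork is referenced; the network is implied by it.
      - subnetwork:
          selfLink: "https://blah/blah/blah/subnetworkX"
    zone: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present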

Make a line in an Ansible playbook optional

I have built a playbook to build a virtual server on an F5. I want a line to execute only if someone enters the corresponding variable. In this case, the default_persistence_profile: line uses the variable "{{ persistenceProfile }}". Sometimes the developers don't want persistence applied to their app, but sometimes they do. I have found that when I make the variable optional and don't select a persistence profile, the task errors out. See the playbook below:
- name: Build the Virtual Server
  bigip_virtual_server:
    state: present
    partition: Common
    name: "{{ vsName }}"
    destination: "{{ vsIpAddress }}"
    port: "{{ vsPort }}"
    pool: "{{ poolName }}"
    default_persistence_profile: "{{ persistenceProfile }}"
    ip_protocol: tcp
    snat: automap
    description: "{{ vsDescription }}"
    profiles:
      - tcp
      - http
      - name: "{{ clientsslName }}"
        context: client-side
      - name: default-server-ssl
        context: server-side
Ansible has a mechanism for omitting parameters, using the default filter together with the special omit variable, like this:
- name: Build the Virtual Server
  bigip_virtual_server:
    state: present
    partition: Common
    name: "{{ vsName }}"
    destination: "{{ vsIpAddress }}"
    port: "{{ vsPort }}"
    pool: "{{ poolName }}"
    default_persistence_profile: "{{ persistenceProfile|default(omit) }}"
    ip_protocol: tcp
    snat: automap
    description: "{{ vsDescription }}"
    profiles:
      - tcp
      - http
      - name: "{{ clientsslName }}"
        context: client-side
      - name: default-server-ssl
        context: server-side
If persistenceProfile is unset, the default_persistence_profile parameter is simply not passed to the bigip_virtual_server module.
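For completeness, the developers would then supply the variable only when they actually want persistence, for example as an extra var at run time (the playbook name and profile value here are just placeholders):

ansible-playbook build_virtual_server.yml -e "persistenceProfile=cookie"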

How to set a condition on an Ansible task with when?

I want to install one of the Filebeat prospectors only if the redis service is running on the host. For that I create a default list of prospectors (redis, aerospike, postgres) with {{ item.id }}. But now I want to put an expression in "when:" that will install the prospector for redis only if it is running. How can I do that?
- name: Configure Filebeat prospectors
  template: src=filebeat_conf.yml.j2 dest=/etc/filebeat/conf.d/{{ item.id }}.yml
  notify: restart filebeat
  with_items: prospectors
  when: { " service: " }
Split your task in two: collect facts, act depending on facts.
In your case, you can list installed services first, then configure only existing ones.
For example:
- hosts: test-host
  vars:
    services:
      - name: apache2
        id: 1
      - name: nginx
        id: 2
      - name: openvpn
        id: 3
  tasks:
    - name: get list of services
      shell: "service --status-all 2>&1 | awk {'print $4'}"
      args:
        warn: false
      register: services_list

    - name: process only existing services
      debug: msg="service {{ item.name }} with id={{ item.id }} exists"
      with_items: "{{ services }}"
      when: item.name in services_list.stdout_lines
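For the Redis case in the question, the same idea can also be expressed with the service_facts module instead of parsing shell output. A minimal sketch, assuming the unit name redis-server.service (which varies by distribution) and that each entry in the prospectors list carries an id:

- name: collect service facts
  service_facts:

- name: Configure the redis prospector only if redis is running
  template:
    src: filebeat_conf.yml.j2
    dest: "/etc/filebeat/conf.d/{{ item.id }}.yml"
  notify: restart filebeat
  with_items: "{{ prospectors }}"
  when:
    - item.id == 'redis'
    - "'redis-server.service' in ansible_facts.services"
    - ansible_facts.services['redis-server.service'].state == 'running'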

Connect EC2 instance to Target Group using Ansible

I have been working for a while registering EC2 instances to ELBs using Ansible. But now I'm starting to use ALBs, and I need to connect my instances to Target Groups, which in turn are connected to the ALB. Is there an Ansible module that allows me to register an instance to an AWS Target Group?
Since Ansible does not support registration of instances to target groups, I had to use the AWS CLI tool. With the following command you can register an instance to a target group:
aws elbv2 register-targets --target-group-arn arn:aws:elasticloadbalancing:us-east-1:your-target-group --targets Id=i-your-instance
So I just call this command from Ansible and it's done.
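Invoked from a playbook, that could look roughly like this (the target group ARN and instance ID are the same placeholders as in the command above):

- name: Register the instance in the target group via the AWS CLI
  command: >
    aws elbv2 register-targets
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:your-target-group
    --targets Id=i-your-instance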
Use elb_target:
- name: Gather facts for all new proxy instances
  ec2_instance_facts:
    filters:
      "tag:Name": "{{ ec2_tag_proxy }}"
  register: ec2_proxy

- elb_target_group:
    name: uat-target-proxy
    protocol: http
    port: 80
    vpc_id: vpc-4e6e8112
    deregistration_delay_timeout: 60
    stickiness_enabled: True
    stickiness_lb_cookie_duration: 86400
    health_check_path: /
    successful_response_codes: "200"
    health_check_interval: "20"
    state: present

- elb_target:
    target_group_name: uat-target-proxy
    target_id: "{{ item.instance_id }}"
    target_port: 80
    state: present
  with_items: "{{ ec2_proxy.instances }}"
  when: ec2_proxy.instances|length > 0
Try the following configuration:
- name: creating target group
  local_action:
    module: elb_target_group
    region: us-east-2
    vpc_id: yourvpcid
    name: create-targetgrp-delete
    protocol: https
    port: 443
    health_check_path: /
    successful_response_codes: "200,250-260"
    state: present
    targets:
      - Id: ec2isntanceid
        Port: 443
    wait_timeout: 200
  register: tgp
