Specifying a subnetwork for Ansible google.cloud.compute_instance - ansible

I have tried every combination I can conceive of to deploy a Google Compute instance into a particular subnet (subnetworkX) in a network (networkY).
- name: create an instance
  google.cloud.gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'true'
        boot: 'true'
        source: "{{ disk }}"
      - auto_delete: 'true'
        interface: NVME
        type: SCRATCH
        initialize_params:
          disk_type: local-ssd
    labels:
      environment: production
    network_interfaces: # <<< does not work. API request is made without a network_interface
      - network:
          selfLink: "https://blah/blah/blah/networkY"
        subnetwork:
          selfLink: "https://blah/blah/blah/subnetworkX"
    zone: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present

When using a subnetwork, you should not specify a network.

To be more precise, this is the workaround for this problem:
---
- name: create a network
  gcp_compute_network:
    name: ansible-network
    auto_create_subnetworks: yes
    project: "{{ lookup('env','GCP_PROJECT') }}"
    state: present
  register: network

- name: Get Network URL
  set_fact:
    network_url: "{{ network | json_query(jmesquery) }}"
  vars:
    jmesquery: "{selfLink: selfLink}"

- name: create a firewall
  gcp_compute_firewall:
    name: ansible-firewall
    network: "{{ network_url }}"
    allowed:
      - ip_protocol: tcp
        ports: ['80', '22']
    target_tags:
      - apache-http-server
    source_ranges: ['0.0.0.0/0']
    project: "{{ lookup('env','GCP_PROJECT') }}"
    state: present
  register: firewall
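
For the original task, that means dropping the network key and passing only the subnetwork reference. A minimal sketch, with the selfLink placeholder taken from the question and only the boot disk kept for brevity:

- name: create an instance in subnetworkX
  google.cloud.gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'true'
        boot: 'true'
        source: "{{ disk }}"
    network_interfaces:
      - subnetwork:   # no sibling network key
          selfLink: "https://blah/blah/blah/subnetworkX"
    zone: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present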

Related

Ansible how to remove groups value by key

I have a play where I collect the available host names before running a task. My play code:
---
- name: check reachable side A hosts
  hosts: ????ha???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{ inventory_hostname }}. Skipping OS Install"
When I print this using
- debug:
    msg: "group: {{ groups['reachable_other_pairs'] }}"
I get the following result:
"this group : ['testha1', 'testha2', 'testha3']"
Now if I call the same play again with different hosts, grouping with the same key, the new host names are appended to the existing values, like below:
- name: check reachable side B hosts
  hosts: ????hb???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{ inventory_hostname }}. Skipping OS Install"
If I print reachable_other_pairs I get the following result:
"msg": " new group: ['testhb1', 'testhb2', 'testhb3', 'testha1', 'testha2', 'testha3']"
All I want is the first three entries, ['testhb1', 'testhb2', 'testhb3'].
Can someone let me know how to achieve this?
Add this as a task just before your block. It will refresh your inventory and clean up all groups that are not in there:
- meta: refresh_inventory
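
In context, a sketch of where the task sits in the side B play (the surrounding tasks are unchanged from the question):

  tasks:
    - meta: refresh_inventory   # drops the in-memory reachable_other_pairs group left over from side A
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        # ... rest of the block and rescue as above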

Make a line in an Ansible playbook optional

I have built a playbook to build a virtual server in F5. I want one line to execute only if someone supplies a variable. In this case the default_persistence_profile: line takes the variable "{{ persistenceProfile }}". Sometimes the developers don't want persistence applied to their app, but sometimes they do. I have found that when I make the variable optional in the run task and don't select a persistence profile, the task errors out. See the playbook below:
- name: Build the Virtual Server
  bigip_virtual_server:
    state: present
    partition: Common
    name: "{{ vsName }}"
    destination: "{{ vsIpAddress }}"
    port: "{{ vsPort }}"
    pool: "{{ poolName }}"
    default_persistence_profile: "{{ persistenceProfile }}"
    ip_protocol: tcp
    snat: automap
    description: "{{ vsDescription }}"
    profiles:
      - tcp
      - http
      - name: "{{ clientsslName }}"
        context: client-side
      - name: default-server-ssl
        context: server-side
Ansible has a mechanism for omitting parameters: the default filter combined with the special omit variable, like this:
- name: Build the Virtual Server
  bigip_virtual_server:
    state: present
    partition: Common
    name: "{{ vsName }}"
    destination: "{{ vsIpAddress }}"
    port: "{{ vsPort }}"
    pool: "{{ poolName }}"
    default_persistence_profile: "{{ persistenceProfile|default(omit) }}"
    ip_protocol: tcp
    snat: automap
    description: "{{ vsDescription }}"
    profiles:
      - tcp
      - http
      - name: "{{ clientsslName }}"
        context: client-side
      - name: default-server-ssl
        context: server-side
If persistenceProfile is unset, the default_persistence_profile parameter is simply not passed to the bigip_virtual_server module.
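
The same pattern applies to any module parameter. A minimal, hypothetical sketch using the stock user module (extra_groups is an assumed variable name):

- name: Create a user, adding groups only when requested
  user:
    name: appuser
    groups: "{{ extra_groups | default(omit) }}"  # parameter dropped entirely if extra_groups is undefined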

"Unsupported parameters for (ecs_taskdefinition) module: cpu,launch_type,memory Supported parameters include:

getting "Unsupported parameters for (ecs_taskdefinition) module: cpu,launch_type,memory Supported parameters include: arn,aws_access_key,aws_secret_key,containers,ec2_url,family,network_mode,profile,region,revision,security_token,state,task_role_arn,validate_certs,volumes"}
ecs_taskdefinition:
  family: "{{ taskfamily_name }}"
  # task_role_arn: "{{ ecs_role.arn }}"
  # execution_role_arn: "{{ ecs_role.arn }}"
  containers:
    - name: "{{ container_name }}"
      essential: true
      image: "{{ image_var }}"
      portMappings:
        - containerPort: "{{ container_port }}"
          hostPort: "{{ container_port }}"
      environment:
        - name: xyz_ENV
          value: "{{ env }}"
  launch_type: FARGATE
  cpu: "{{ cpu_var }}"
  memory: "{{ memory_var }}"
  state: present
  network_mode: awsvpc
The cpu, launch_type, and memory parameters were added in Ansible 2.7. You have to upgrade your Ansible version or drop those parameters.
Ref: https://docs.ansible.com/ansible/latest/modules/ecs_taskdefinition_module.html
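
If upgrading isn't an option, a sketch of the same task with the 2.7-only parameters dropped (all variable names are from the question); note that without launch_type, cpu, and memory the definition can no longer target Fargate:

ecs_taskdefinition:
  family: "{{ taskfamily_name }}"
  containers:
    - name: "{{ container_name }}"
      essential: true
      image: "{{ image_var }}"
      portMappings:
        - containerPort: "{{ container_port }}"
          hostPort: "{{ container_port }}"
      environment:
        - name: xyz_ENV
          value: "{{ env }}"
  state: present
  network_mode: awsvpc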

Ansible Variable in lookup syntax

I am currently using a k8s lookup to search for resources with a certain tag attached to them (in this case branch). This branch is a variable that changes regularly. The problem is that I can't seem to find the correct syntax for adding a variable into the lookup since it is itself using the Jinja syntax.
This works:
- name: delete the replicaset
  k8s:
    state: absent
    api_version: v1
    kind: ReplicaSet
    namespace: default
    name: "{{ replicaset.metadata.name }}"
    kubeconfig: /var/lib/awx/.kube/config
  vars:
    replicaset: "{{ lookup('k8s', kind='ReplicaSet', namespace='default', label_selector='branch=testing') }}"
However, when trying to use the branch variable, nothing I try seems to work. Here is one example that does not work:
- name: delete the replicaset
  k8s:
    state: absent
    api_version: v1
    kind: ReplicaSet
    namespace: default
    name: "{{ replicaset.metadata.name }}"
    kubeconfig: /var/lib/awx/.kube/config
  vars:
    replicaset: "{{ lookup('k8s', kind='ReplicaSet', namespace='default', label_selector='branch={{ branch }}') }}"
You can either add a helper variable:
- name: delete the replicaset
  k8s:
    state: absent
    api_version: v1
    kind: ReplicaSet
    namespace: default
    name: "{{ replicaset.metadata.name }}"
    kubeconfig: /var/lib/awx/.kube/config
  vars:
    replicaset: "{{ lookup('k8s', kind='ReplicaSet', namespace='default', label_selector=my_selector) }}"
    my_selector: branch={{ branch }}
or use Jinja2 string concatenation:
replicaset: "{{ lookup('k8s', kind='ReplicaSet', namespace='default', label_selector='branch=' + branch) }}"

Attach boot disk if exist to Gcloud instance with Ansible

I'm creating an instance in Google Cloud with Ansible, but when I want to attach an existing disk to the new compute engine instance, I can't attach it or add it to the instance.
- name: Launch instances
  gce:
    instance_names: mongo
    machine_type: "n1-standard-1"
    image: "debian-9"
    service_account_email: "xxxx@xxxx.iam.gserviceaccount.com"
    credentials_file: "gcp-credentials.json"
    project_id: "learning"
    disk_size: 10
    disk_auto_delete: false
    preemptible: true
    tags: "mongo-server"
  register: gce

- name: Wait for SSH for instances
  wait_for:
    delay: 1
    host: "{{ item.public_ip }}"
    port: 22
    state: started
    timeout: 30
  with_items: "{{ gce.instance_data }}"
The error I have is:
The error was: libcloud.common.google.ResourceExistsError: {'domain': 'global', 'message': "The resource 'projects/xxx-xxx/zones/us-central1-a/disks/mongo' already exists", 'reason': 'alreadyExists'}
Is there any way to configure this option with Ansible? To do that now, I'm using external scripts.
Existing disks can be provided as a list under the disks attribute; the first entry needs to be the boot disk.
https://docs.ansible.com/ansible/2.6/modules/gce_module.html
- gce:
    instance_names: my-test-instance
    zone: us-central1-a
    machine_type: n1-standard-1
    state: present
    metadata: '{"db":"postgres", "group":"qa", "id":500}'
    tags:
      - http-server
      - my-other-tag
    disks:
      - name: disk-2
        mode: READ_WRITE
      - name: disk-3
        mode: READ_ONLY
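
Applied to the question's play, a sketch that boots from the existing mongo disk instead of creating a new one (the disk must already exist in the target zone; image is dropped because the boot disk is not being created):

- name: Launch instance from the existing disk
  gce:
    instance_names: mongo
    zone: us-central1-a
    machine_type: "n1-standard-1"
    service_account_email: "xxxx@xxxx.iam.gserviceaccount.com"
    credentials_file: "gcp-credentials.json"
    project_id: "learning"
    disks:
      - name: mongo       # existing disk; the first entry becomes the boot disk
        mode: READ_WRITE
    preemptible: true
    tags: "mongo-server"
  register: gce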
