Get AWS RDS Aurora cluster endpoint using Ansible

I need to be able to get the cluster endpoint for an existing AWS RDS Aurora cluster using Ansible by providing the "DB identifier" of the cluster.
When using community.aws.rds_instance_info in my playbook and referencing the DB instance identifier of the writer instance:
---
- name: Test
  hosts: localhost
  connection: local
  tasks:
    - name: Get RDS Aurora cluster
      community.aws.rds_instance_info:
        db_instance_identifier: "test-cluster-1" # the writer instance of the aurora db cluster
      register: rds_aurora_cluster
It returns that instance as expected.
But if I use the cluster endpoint (test-cluster) it does not return any instances, or any cluster-level information:
ok: [localhost] => {
    "changed": false,
    "instances": [],
    "invocation": {
        "module_args": {
            "aws_access_key": "<omitted>",
            "aws_ca_bundle": null,
            "aws_config": null,
            "aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "db_instance_identifier": "test-cluster",
            "debug_botocore_endpoint_logs": false,
            "ec2_url": null,
            "filters": null,
            "profile": null,
            "region": "us-east-1",
            "security_token": null,
            "validate_certs": true
        }
    }
}
I've also tried using the amazon.aws.aws_rds module from the amazon.aws collection, which has an include_clusters parameter:
---
- name: Test
  hosts: localhost
  connection: local
  vars:
    collections:
      - amazon.aws
  tasks:
    - name: Get RDS Aurora cluster
      aws_rds:
        db_instance_identifier: "test-cluster"
        include_clusters: true
      register: rds_aurora_cluster
When I run that playbook I get:
ERROR! couldn't resolve module/action 'aws_rds'. This often indicates a misspelling, missing collection, or incorrect module path.
The error appears to be in '/Users/username/Desktop/test/test.yml': line 23, column 7, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Get RDS Aurora cluster
^ here
I've confirmed that the latest version of the collection is installed:
❯ ansible-galaxy collection list
# /usr/local/Cellar/ansible/4.4.0/libexec/lib/python3.9/site-packages/ansible_collections
Collection Version
----------------------------- -------
amazon.aws 2.0.0
And I have verified the package:
❯ ansible-galaxy collection verify amazon.aws
Downloading https://galaxy.ansible.com/download/amazon-aws-2.0.0.tar.gz to /Users/username/.ansible/tmp/ansible-local-8367rrxw073b/tmpejakna6g/amazon-aws-2.0.0-_y4d1bqj
Verifying 'amazon.aws:2.0.0'.
Installed collection found at '/usr/local/Cellar/ansible/4.4.0/libexec/lib/python3.9/site-packages/ansible_collections/amazon/aws'
MANIFEST.json hash: 1286503f7bcc6dd26aecf9bec4d055e8f0d2e355522f97b620522a5aa754cb9e
Successfully verified that checksums for 'amazon.aws:2.0.0' match the remote collection.

I've also tried using the amazon.aws.aws_rds module from the amazon.aws collection, which has an include_clusters parameter:
You'll observe from the documentation you linked to that aws_rds is an inventory plugin, not a module. It's unfortunate that the page has a copy-paste error at the top alleging that one can use it in a playbook, but the Examples section shows the correct usage: put that YAML in a file named WHATEVER.aws_rds.yaml and then confirm the selection by running ansible-inventory -i ./WHATEVER.aws_rds.yaml --list
Based solely upon some use of grep -r, it seems that that inventory plugin, or command: aws rds describe-db-clusters ..., are the only two provided mechanisms that are Aurora-aware.
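For a playbook-only route, a minimal sketch of that CLI mechanism (assuming the aws CLI is installed and credentialed on the control host; the cluster identifier and region are taken from your output) could look like:

- name: Describe the Aurora cluster via the AWS CLI
  ansible.builtin.command:
    cmd: aws rds describe-db-clusters --db-cluster-identifier test-cluster --region us-east-1
  register: describe_clusters
  changed_when: false   # read-only query, never reports a change

- name: Extract the cluster endpoint from the CLI output
  ansible.builtin.set_fact:
    aurora_endpoint: "{{ (describe_clusters.stdout | from_json).DBClusters[0].Endpoint }}"

The inventory-plugin mechanism is shown next.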
Working example:
test.aws_rds.yml inventory file:
plugin: aws_rds
regions:
- us-east-1
include_clusters: true
test.yml playbook, executed with ansible-playbook test.yml -i ./test.aws_rds.yml:
---
- name: Test
  hosts: localhost
  connection: local
  tasks:
    - name: test
      ansible.builtin.shell:
        cmd: echo {{ hostvars['test-cluster'].endpoint }}
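As a side note, later releases of the AWS collections added a dedicated cluster info module. If your installed version ships it (an assumption; the amazon.aws 2.0.0 listed above does not, and the exact return keys are worth checking against the module docs), the lookup reduces to something like:

- name: Get RDS Aurora cluster
  community.aws.rds_cluster_info:
    db_cluster_identifier: test-cluster
  register: rds_aurora_cluster

- name: Show the cluster endpoint
  debug:
    msg: "{{ rds_aurora_cluster.clusters[0].endpoint }}"  # return key names assumed; verify against the module docs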

Related

Ansible URI not working in Gitlab CI pipeline in first run

I am currently facing a strange issue and I am not able to guess what causes it.
I wrote small Ansible playbooks to test whether the Kafka schema-registry and connectors are running by calling their APIs.
I can run those playbooks on my local machine successfully. However, when running them in the GitLab CI pipeline (I'm using the same local machine as the GitLab runner), the connect_test always breaks with the following error:
fatal: [xx.xxx.x.x]: FAILED! => {"changed": false, "elapsed": 0, "msg": "Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://localhost:8083"}
The strange thing is that this failed job works when I click the retry button in the CI pipeline.
Does anyone have an idea about this issue? I would appreciate your help.
schema_test.yml
---
- name: Test schema-registry
  hosts: SCHEMA
  become: yes
  become_user: root
  tasks:
    - name: list schemas
      uri:
        url: http://localhost:8081/subjects
      register: schema
    - debug:
        msg: "{{ schema }}"
connect_test.yml
---
- name: Test connect
  hosts: CONNECT
  become: yes
  become_user: root
  tasks:
    - name: check connect
      uri:
        url: http://localhost:8083
      register: connect
    - debug:
        msg: "{{ connect }}"
.gitlab-ci.yml
test-connect:
  stage: test
  script:
    - ansible-playbook connect_test.yml
  tags:
    - gitlab-runner

test-schema:
  stage: test
  script:
    - ansible-playbook schema_test.yml
  tags:
    - gitlab-runner
Update:
I replaced the uri module with shell. As a result, I see the same behaviour: the initial run of the pipeline fails and retrying the job fixes it.
Maybe you are restarting the services in a previous job; bear in mind that Kafka Connect generally needs more time to become available after a restart. Try pausing Ansible for a minute or so after you restart the service.
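Alternatively, instead of a fixed pause, you can let the check itself wait for the service; a minimal sketch for the connect task, assuming the REST API eventually answers with 200 on port 8083:

- name: check connect (retry until the REST API answers)
  uri:
    url: http://localhost:8083
    status_code: 200
  register: connect
  until: connect.status == 200
  retries: 30   # up to 30 attempts
  delay: 10     # 10 seconds between attempts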

azure_rm_dnszones cannot find resourcegroup

I'm trying to create a public DNS zone in Azure using the azure_rm_dnszone module, and it just doesn't work. I'm running ansible-playbook 2.9.3 on Ubuntu 18.04.3 LTS. After doing az login successfully, I run the playbook below. The resource group poc-rg_publicdns I created manually.
This playbook needs to use proxy servers, so I've set up the proxies as environment variables using export:
export http_proxy=http://<ip>:3128
export https_proxy=http://<ip>:3128
Playbook code:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Creating zone kasperstesting123.dk
      azure_rm_dnszone:
        resource_group: poc-rg_publicdns
        name: kasperstesting123.dk
        type: public
I'm getting:
TASK [Creating zone kasperstesting123.dk] ****************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error retrieving resource group poc-rg_publicdns - Resource group 'poc-rg_publicdns' could not be found."}
Alright, it turns out that, having multiple subscriptions, my default active subscription was not the one I was trying to create the objects in. To switch subscriptions, use az account set: https://learn.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az-account-set
az account set -s <subid>
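If you would rather not change the CLI default, the Azure modules also accept the subscription explicitly; a minimal sketch, assuming the remaining credentials still come from az login and <subid> is the subscription that owns the resource group:

- name: Creating zone kasperstesting123.dk in an explicit subscription
  azure_rm_dnszone:
    subscription_id: "<subid>"   # the subscription containing poc-rg_publicdns
    resource_group: poc-rg_publicdns
    name: kasperstesting123.dk
    type: public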

SSH-less LXC containers using Ansible

I am new to Ansible, and I am trying to use Ansible on some LXC containers.
My problem is that I don't want to install ssh on my containers. So, here is what I tried:
I tried to use this connection plugin, but it seems that it does not work with Ansible 2.
After understanding that chifflier's connection plugin doesn't work, I tried to use the connection plugin from OpenStack.
After some failed attempts I dived into the code, and I understood that the plugin doesn't have the information that the host I am talking to is a container (because the code never reaches this point).
My current setup:
{Ansible host} ---|ssh|--- {vm} ---|ansible connection plugin|--- {container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
my group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
  hosts: containers
  gather_facts: true
  tasks:
    - name: testfile
      copy:
        content: "Test"
        dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
    "unreachable": true
}
Ansible version is: 2.3.1.0
So what am I doing wrong? Any tips?
Thanks in advance!
Update 1:
Based on Eric's answer I am now using this connection plug-in.
I updated my inventory and it now looks like:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "192.168.28.12 is not running"
}
Which is weird, because 192.168.28.12 is the VM and the container is called mailserver. Also, I verified that the container is running.
And why does it say that 192.168.28.12 is a local LXC dir?
Update 2:
I removed my group_vars, my ansible.cfg, and the connection plugin from the playbook, and I got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc in your inventory or group vars.
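For example, a minimal group_vars sketch (the file name is assumed to match your containers group, and the containers must live on the node where Ansible runs, which is the plugin's requirement):

# group_vars/containers.yml
ansible_connection: lxc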
I'm trying something similar.
I want to configure a host over ssh using ansible and run lxc containers on the host, which are also configured using ansible:
ansible control node --ssh--> host-a --lxc-attach--> container-a
The issue with the lxc connection plugin is that it only works for local LXC containers; there is no way to get it working through ssh.
At the moment the only way seems to be a direct ssh connection, or an ssh connection through the first host:
ansible control node --ssh--> container-a
or
ansible control node --ssh--> host-a --ssh--> container-a
Both require sshd installed in the container. But the second way doesn't need port forwarding or multiple ip addresses.
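For the second variant, a minimal inventory sketch, assuming the container runs sshd on an address reachable from host-a (the address below is hypothetical) and that your OpenSSH client supports ProxyJump:

[containers]
container-a ansible_host=10.0.3.10   ; hypothetical container address

[containers:vars]
ansible_ssh_common_args='-o ProxyJump=user@host-a'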
Did you get a working solution?

Ansible etcd lookup plugin issue

I have etcd running on the Ansible control machine (local). I can get and put the values as shown below, but Ansible won't fetch the values. Any thoughts?
I can also get the value using curl.
I have this simple playbook:
#!/usr/bin/env ansible-playbook
---
- name: simple ansible playbook ping
  hosts: all
  gather_facts: false
  tasks:
    - name: look up value in etcd
      debug: msg="{{ lookup('etcd', 'weather') }}"
And running this playbook doesn't fetch values from etcd:
TASK: [look up value in etcd] *************************************************
ok: [app1.test.com] => {
    "msg": ""
}
ok: [app2.test.com] => {
    "msg": ""
}
Currently (31.05.2016) the Ansible etcd lookup plugin only supports calls to the v1 API and is not compatible with newer etcd instances that publish the v2 API endpoint.
Here is the issue.
You can use my quickly patched etcd2.py lookup plugin.
Place it into a lookup_plugins subdirectory next to your playbook (or into Ansible's global lookup_plugins path).
Use lookup('etcd2', 'weather') in your playbook.
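A minimal sketch of the layout and usage, assuming the patched etcd2.py keeps the stock plugin's habit of reading the server address from the ANSIBLE_ETCD_URL environment variable:

# layout next to the playbook:
#   playbook.yml
#   lookup_plugins/etcd2.py

- name: look up value in etcd via the v2 API
  debug: msg="{{ lookup('etcd2', 'weather') }}"

and run it with the URL exported on the control machine, e.g. ANSIBLE_ETCD_URL=http://127.0.0.1:2379 ansible-playbook playbook.yml.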

How to use ansible_ssh_port in playbook task

My situation is that my database server's SSH is not on the default port 22, so I am trying to execute a query over port 3838. Below is the code:
tasks:
  - name: passive | Get MasterDB IP
    mysql_replication: mode=getslave
    register: slaveInfo
  - name: Active | Get Variable Details
    mysql_variables: variable=hostname ansible_ssh_port=3838
    delegate_to: "{{ slaveInfo.Master_Host }}"
    register: activeDbHostname
Ansible version is 1.7.2.
TASK: [Active | Get Variable Details] *****************************************
<192.168.0.110> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.0.110
fatal: [example1.com -> 192.168.0.110] => {'msg': 'FAILED: [Errno 101] Network is unreachable', 'failed': True}
FATAL: all hosts have already failed -- aborting
It connects over the default port 22 rather than port 3838. Please share your thoughts if I am going wrong somewhere.
You can specify the value for ansible_ssh_port in a number of places. But you will likely want to use a dynamic inventory script.
e.g. from the hosts file:
[db-slave]
10.0.0.20 ansible_ssh_port=3838
e.g. as a variable in host_vars:
---
# host_vars/10.0.0.20/default.yml
ansible_ssh_port: 3838
e.g. in a dynamic inventory, you may use a combination of group_vars and tagging the instances:
---
# group_vars/db-slaves/default.yml
ansible_ssh_port: 3838
Use gce.py, ec2.py, or some other dynamic inventory script and group your instances in the hosts file:
[tag_db-slaves]
; this is automatically filled by ec2.py or gce.py
[db-slaves:children]
tag_db-slaves
Of course this will mean you need to tag the instances when you fire them up. You can find several dynamic inventory scripts in the ansible repository.
If your mysqld is running on a docker instance in the same host, I would recommend you create a custom dynamic inventory with some form of service discovery, such as using consul, etcd, zookeeper, or some custom solution using a key-value store such as redis. You can find an introduction to dynamic inventories in the ansible documentation.
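If the master's address is only known at runtime, as in the question where it comes from slaveInfo, one more option is to register it as an in-memory host with the right port before delegating; a sketch, assuming the registered Master_Host value is an address you can actually SSH to (behaviour on very old Ansible releases such as 1.7.2 may differ):

- name: Register master DB host with its non-default SSH port
  add_host:
    name: "{{ slaveInfo.Master_Host }}"
    ansible_ssh_port: 3838   # becomes a host var for subsequent connections

- name: Active | Get Variable Details
  mysql_variables: variable=hostname
  delegate_to: "{{ slaveInfo.Master_Host }}"
  register: activeDbHostname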
