I'm trying to create a public DNS zone in Azure using the azure_rm_dnszone module, and it just doesn't work. I'm running ansible-playbook 2.9.3 on Ubuntu 18.04.3 LTS, after a successful az login. The resource group poc-rg_publicdns I created manually.
This playbook needs to go through a proxy server, so I've set up the proxies as environment variables using export:
export http_proxy=http://<ip>:3128
export https_proxy=http://<ip>:3128
Playbook code:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Creating zone kasperstesting123.dk
      azure_rm_dnszone:
        resource_group: poc-rg_publicdns
        name: kasperstesting123.dk
        type: public
I'm getting:
TASK [Creating zone kasperstesting123.dk] ****************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error retrieving resource group poc-rg_publicdns - Resource group 'poc-rg_publicdns' could not be found."}
Alright... it turns out that, having multiple subscriptions, my default active subscription was not the one I was trying to create the objects in. To switch subscriptions, use the az account set command: https://learn.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az-account-set
az account set -s <subid>
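Alternatively, the azure_rm_* modules accept a subscription_id parameter among their credential options, so a task can be pinned to the right subscription without changing the CLI default. A minimal sketch (target_subscription_id is my placeholder variable):

- name: Creating zone kasperstesting123.dk under an explicit subscription
  azure_rm_dnszone:
    resource_group: poc-rg_publicdns
    name: kasperstesting123.dk
    type: public
    subscription_id: "{{ target_subscription_id }}"  # placeholder; use the target subscription's ID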
I need to be able to get the cluster endpoint for an existing AWS RDS Aurora cluster using Ansible by providing the "DB identifier" of the cluster.
When using community.aws.rds_instance_info in my playbook and referencing the DB instance identifier of the writer instance:
---
- name: Test
  hosts: localhost
  connection: local
  tasks:
    - name: Get RDS Aurora cluster
      community.aws.rds_instance_info:
        db_instance_identifier: "test-cluster-1" # the writer instance of the aurora db cluster
      register: rds_aurora_cluster
It returns that instance as expected.
But if I use the cluster identifier (test-cluster) it does not return any instances, or any cluster-level information:
ok: [localhost] => {
    "changed": false,
    "instances": [],
    "invocation": {
        "module_args": {
            "aws_access_key": "<omitted>",
            "aws_ca_bundle": null,
            "aws_config": null,
            "aws_secret_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "db_instance_identifier": "test-cluster",
            "debug_botocore_endpoint_logs": false,
            "ec2_url": null,
            "filters": null,
            "profile": null,
            "region": "us-east-1",
            "security_token": null,
            "validate_certs": true
        }
    }
}
I've also tried using the aws_rds module from the amazon.aws collection, which has an include_clusters parameter:
---
- name: Test
  hosts: localhost
  connection: local
  vars:
    collections:
      - amazon.aws
  tasks:
    - name: Get RDS Aurora cluster
      aws_rds:
        db_instance_identifier: "test-cluster"
        include_clusters: true
      register: rds_aurora_cluster
When I run that playbook I get:
ERROR! couldn't resolve module/action 'aws_rds'. This often indicates a misspelling, missing collection, or incorrect module path.
The error appears to be in '/Users/username/Desktop/test/test.yml': line 23, column 7, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Get RDS Aurora cluster
^ here
I've confirmed that the latest version of the collection is installed:
❯ ansible-galaxy collection list
# /usr/local/Cellar/ansible/4.4.0/libexec/lib/python3.9/site-packages/ansible_collections
Collection                    Version
----------------------------- -------
amazon.aws                    2.0.0
And I have verified the package:
❯ ansible-galaxy collection verify amazon.aws
Downloading https://galaxy.ansible.com/download/amazon-aws-2.0.0.tar.gz to /Users/username/.ansible/tmp/ansible-local-8367rrxw073b/tmpejakna6g/amazon-aws-2.0.0-_y4d1bqj
Verifying 'amazon.aws:2.0.0'.
Installed collection found at '/usr/local/Cellar/ansible/4.4.0/libexec/lib/python3.9/site-packages/ansible_collections/amazon/aws'
MANIFEST.json hash: 1286503f7bcc6dd26aecf9bec4d055e8f0d2e355522f97b620522a5aa754cb9e
Successfully verified that checksums for 'amazon.aws:2.0.0' match the remote collection.
One will observe from the documentation you linked to that aws_rds is an inventory plugin, not a module; it's unfortunate that the page has a copy-paste error at the top alleging that one can use it in a playbook, but the examples section shows the correct usage: put that YAML in a file named WHATEVER.aws_rds.yaml and then confirm the selection by running ansible-inventory -i ./WHATEVER.aws_rds.yaml --list
Based solely upon some use of grep -r, it seems that the inventory plugin and command: aws rds describe-db-clusters ... are the only two provided mechanisms that are Aurora-aware.
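If you'd rather not add an inventory file, that CLI route can be wrapped in a task; a sketch (the cluster identifier and the --query expression are assumptions for illustration):

    - name: Get the Aurora cluster endpoint via the AWS CLI
      ansible.builtin.command:
        cmd: aws rds describe-db-clusters --db-cluster-identifier test-cluster --query 'DBClusters[0].Endpoint' --output text
      register: cluster_endpoint
      changed_when: false  # read-only lookup, never reports a change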
Working example:
test.aws_rds.yml inventory file:
plugin: aws_rds
regions:
  - us-east-1
include_clusters: true
test.yml playbook, executed with ansible-playbook test.yml -i ./test.aws_rds.yml:
---
- name: Test
  hosts: localhost
  connection: local
  tasks:
    - name: test
      ansible.builtin.shell:
        cmd: echo {{ hostvars['test-cluster'].endpoint }}
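A debug task is arguably tidier than shelling out to echo for this; an equivalent sketch for the same playbook:

    - name: Show the cluster endpoint
      ansible.builtin.debug:
        msg: "{{ hostvars['test-cluster'].endpoint }}"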
I would like to use an Ansible playbook to set up identical configurations for two different users on my localhost (i.e., admin and brian). Currently, the "common" role installs programs that are accessible by both users. In addition, I have settings that are user-specific (e.g., desktop wallpaper). When I run my playbook, the user-specific settings are updated for one user but not the other. For example, if I run my playbook, the wallpaper for brian is changed but the wallpaper for admin is left untouched.
I am aware of become_user, but do not want to use that for every task that I run. Is it possible to define the hosts file or playbook in such a way that I can simply specify the users on localhost I want the playbook to run against?
I have tried the approach from "Is there anyway to run multiple Ansible playbooks as multiple users more efficiently?" at the per-role level, but am getting the following error:
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/usr/bin/python2: can't open file '/home/brian/.ansible/tmp/ansible-tmp-1525409723.54-208533437554058/apt.py': [Errno 13] Permission denied\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 2}
site.yml
---
- name: ansible master playbook
  hosts: localhost
  connection: local
  roles:
    - role: common
roles/common/tasks/main.yml
---
- import_tasks: gsettings.yml
roles/common/tasks/gsettings.yml
---
- name: Use 12 hr. clock format
  dconf:
    key: "/org/gnome/desktop/interface/clock-format"
    value: "'12h'"
In Ansible you have the option to launch the playbook as:
ansible-playbook playbooks/playbook.yml --user user
Please note that specifying a user can sometimes conflict with a user defined in /etc/ansible/hosts.
(From the Ansible documentation)
My solution was to simply log into each user on my local machine and run my Ansible playbooks locally. An underlying issue with using the dconf module to change gsettings appears to be that the D-Bus session address for the other user is not set, so the gsettings changes for the other user do not stick. See the related questions below.
https://askubuntu.com/questions/655238/as-root-i-can-use-su-to-make-dconf-changes-for-another-user-how-do-i-actually
http://docs.ansible.com/ansible/latest/modules/dconf_module.html
Access another user's D-Bus session
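For reference, the workaround those links describe can be expressed directly in a task: run it as the target user and point it at that user's session bus. A sketch, assuming a systemd user session and that brian's numeric UID is 1001 (both are assumptions of mine):

- name: Use 12 hr. clock format for brian
  dconf:
    key: "/org/gnome/desktop/interface/clock-format"
    value: "'12h'"
  become: true
  become_user: brian
  environment:
    # assumes systemd user sessions; 1001 is brian's UID in this sketch
    DBUS_SESSION_BUS_ADDRESS: "unix:path=/run/user/1001/bus"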
I'm trying to develop an Ansible playbook to generate a VM. I wrote a myvm role that contains the tasks that orchestrate vmware_guest. These tasks include a delegate_to: localhost, which vmware_guest requires.
Then I added my VM-to-be to the inventory, appending the following to hosts:
[myvms]
myvm1
and extended site.yml with:
- hosts: myvms
  roles:
    - myvm
Now, when I run:
ansible-playbook site.yml -i hosts --limit myvm1
it fails with:
fatal: [myvm1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Connection reset by 192.168.10.13 port 22\r\n", "unreachable": true}
It seems Ansible tries to connect to the VM's IP before it ever reads the role that creates the VM (where the tasks delegate to localhost). Adding delegate_to to site.yml fails, however.
How can I fix my Ansible scripts to properly generate the VM for me?
Add gather_facts: false to the play.
- hosts: myvms
  gather_facts: false
  roles:
    - myvm
By default, Ansible connects to the target machines and runs a script that collects data (facts) before any tasks run; gather_facts: false skips that initial connection, so the play can reach the role that creates the VM.
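If you need facts from the VM later in the same run, a second play can connect and gather them once the role has created the machine and it is reachable; a sketch (the ping task is only a placeholder):

- hosts: myvms
  gather_facts: false
  roles:
    - myvm

- hosts: myvms
  gather_facts: true
  tasks:
    - name: Continue configuring the freshly created VM
      ping: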
I am new to Ansible, and I am trying to use it on some LXC containers.
My problem is that I don't want to install SSH on my containers.
What I tried:
I tried to use this connection plugin, but it seems that it does not work with Ansible 2.
After understanding that chifflier's connection plugin doesn't work, I tried to use the connection plugin from OpenStack.
After some failed attempts I dived into the code, and I understood that the plugin never gets the information that the host it is talking to is a container (the code never reaches that point).
My current setup:
{Ansible host}---|ssh|---{vm}---|ansible connection plugin|---{container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
My group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
hosts: containers
gather_facts: true
tasks:
- name: testfile
copy:
content: "Test"
dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
"unreachable": true
}
Ansible version is: 2.3.1.0
So what am I doing wrong? any tips?
Thanks in advance!
Update 1:
Based on Eric's answer I am now using this connection plugin.
I updated my inventory and it now looks like:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "192.168.28.12 is not running"
}
Which is weird, because 192.168.28.12 is the VM and the container is called mailserver. I also verified that the container is running. And why does it say that 192.168.28.12 is a local LXC dir?
Update 2:
I removed my group_vars, my ansible.cfg, and the connection plugin from the playbook, and I got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc in your inventory or group vars.
I'm trying something similar.
I want to configure a host over ssh using ansible and run lxc containers on the host, which are also configured using ansible:
ansible control node --ssh--> host-a --lxc-attach--> container-a
The issue with the lxc connection plugin is that it only works for local LXC containers; there is no way to get it working through SSH.
At the moment the only way seems to be a direct SSH connection or an SSH connection through the first host:
ansible control node --ssh--> container-a
or
ansible control node --ssh--> host-a --ssh--> container-a
Both require sshd installed in the container, but the second way doesn't need port forwarding or multiple IP addresses.
Did you get a working solution?
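For what it's worth, the second pattern is commonly wired up with an SSH jump host via ansible_ssh_common_args; a sketch, assuming sshd runs in the container and OpenSSH 7.3+ on the control node (the IP and user name are placeholders):

[containers]
container-a ansible_host=10.0.3.10

[containers:vars]
# hop through host-a to reach the container's sshd directly
ansible_ssh_common_args='-o ProxyJump=user@host-a'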
I'm very confused on how you are supposed to launch EC2 instances using Ansible.
I'm trying to use the ec2.py inventory script. I'm not sure which one is supposed to be used, because there are three installed with Ansible:
ansible/lib/ansible/module_utils/ec2.py
ansible/lib/ansible/modules/core/cloud/amazon/ec2.py
ansible/plugins/inventory/ec2.py
I thought running the one in inventory/ would make most sense, so I run it using:
ansible-playbook launch-ec2.yaml -i ec2.py
which gives me:
msg: Either region or ec2_url must be specified
So I add a region (even though I have a vpc_subnet_id specified) and I get:
msg: Region us-east-1e does not seem to be available for aws module boto.ec2. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path
I'm thinking Amazon must have recently changed ec2 so you need to use a VPC? Even when I try and launch an instance from Amazon's console, the option for "EC2 Classic" is disabled.
When I try and use the ec2.py script in cloud/amazon/ I get:
ERROR: Inventory script (/software/ansible/lib/ansible/modules/core/cloud/amazon/ec2.py) had an execution error:
There are no more details than this.
After some searching, I see that the ec2.py module in /module_utils has been changed so a region doesn't need to be specified. I try to run this file but get:
ERROR: The file /software/ansible/lib/ansible/module_utils/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with chmod -x /software/ansible/lib/ansible/module_utils/ec2.py.
So as the error suggests, I remove the executable permissions for the ec2.py file, but then get the following error:
ERROR: /software/ansible/lib/ansible/module_utils/ec2.py:30: Invalid ini entry: distutils.version - need more than 1 value to unpack
Does anyone have any ideas on how to get this working? What is the correct file to be using? I'm completely lost at this point on how to get this working.
There are several questions in your post. I'll try to summarise them in three items:
Is it still possible to launch instances in EC2 Classic (no VPC)?
How do I create a new EC2 instance using Ansible?
How to launch the dynamic inventory file ec2.py?
1. EC2 Classic
Your options will differ depending on when you created your AWS account, the type of instance, and the AMI virtualisation type used. Refs: aws account, instance type.
If none of the above parameters restricts the usage of EC2 Classic, you should be able to create a new instance without defining any VPC.
2. Create a new EC2 instance with Ansible
Since your instance doesn't exist yet, a dynamic inventory file (ec2.py) is useless. Instruct Ansible to run on your local machine instead.
Create a new inventory file, e.g. new_hosts with the following contents:
[localhost]
127.0.0.1
Then your playbook, e.g. create_instance.yml should use a local connection and hosts: localhost. See an example below:
--- # Create ec2 instance playbook
- hosts: localhost
  connection: local
  gather_facts: false
  vars_prompt:
    inst_name: "What's the name of the instance?"
  vars:
    keypair: "your_keypair"
    instance_type: "m1.small"
    image: "ami-xxxyyyy"
    group: "your_group"
    region: "us-west-2"
  tasks:
    - name: make one instance
      ec2: image={{ image }}
           instance_type={{ instance_type }}
           keypair={{ keypair }}
           instance_tags='{"Name":"{{ inst_name }}"}'
           region={{ region }}
           group={{ group }}
           wait=true
      register: ec2_info

    - name: Add instances to host group
      add_host: hostname={{ item.public_ip }} groupname=ec2hosts
      with_items: ec2_info.instances

    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2_info.instances
This play will create an EC2 instance and register its public IP in an in-memory host group called ec2hosts, i.e. as if you had defined it in the inventory file. This is useful if you want to provision the instance just created: simply add a new play with hosts: ec2hosts.
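For instance, a minimal follow-up play could look like this (remote_user depends on your AMI; ec2-user is an assumption):

- hosts: ec2hosts
  remote_user: ec2-user
  tasks:
    - name: Check connectivity to the new instance
      ping: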
Ultimately, launch ansible as follows:
export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=<your aws access key here>
export AWS_SECRET_KEY=<your aws secret key here>
ansible-playbook -i new_hosts create_instance.yml
The purpose of the environment variable ANSIBLE_HOST_KEY_CHECKING=false is to avoid being prompted to add the ssh host key when connecting to the instance.
Note: boto needs to be installed on the machine that runs the above ansible command.
3. Use ansible's ec2 dynamic inventory
The EC2 dynamic inventory consists of two files, ec2.py and ec2.ini. In your particular case, I believe the issue is that ec2.py is unable to locate the ec2.ini file.
To solve your issue, copy ec2.py and ec2.ini to the same folder in the machine where you intend to run ansible, e.g. to /etc/ansible/.
For releases before Ansible 2.0 (change the branch accordingly):
cd /etc/ansible
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
chmod u+x ec2.py
For Ansible 2:
cd /etc/ansible
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
chmod u+x ec2.py
Configure ec2.ini and run ec2.py, which should print a JSON-formatted list of hosts to stdout.
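For example, assuming your AWS credentials are exported in the environment, you can smoke-test the inventory before using it in a playbook:

cd /etc/ansible
./ec2.py --list                  # dump discovered hosts and groups as JSON
ansible -i ec2.py all -m ping    # ping every instance the script discovers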