So, I am a complete noob on Ansible and YAML and I'm trying to learn a bit, but this is driving me crazy so far... I am using Ansible Tower. What I'm trying to do is replace some text in the ntp.conf file of certain servers and update them with a new server. My playbook looks like this:
---
- hosts: range_1
  tasks:
    - name: ntp change
      become_user: ansible
      blockinfile:
        content: |
          server Server1 iburst
          server Server2 iburst
        dest: /etc/ntp.conf
        insertafter: "Please consider joining the pool"
        marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
    - name: restart ntp
      service: name=ntpd state=restarted
But I am getting the following play recap:
PLAY RECAP
Host_1 : ok=1 changed=0 unreachable=0 failed=0
Host_2 : ok=1 changed=0 unreachable=0 failed=0
Host_3 : ok=1 changed=0 unreachable=0 failed=0
Ansible runs and does not exit with an error. However, it is not making any changes to the systems (I assumed so because of changed=0). I did log on to those systems and no changes have been applied.
I have checked and the syntax is correct, but I'm not sure what I'm missing. I really need to understand how I can add two servers to ntp.conf and, if a server entry has some wrong info, delete it and add just those 2 servers. Any help or guidance will be very much appreciated.
For anybody who might be reading this: when running a playbook from Ansible or Ansible Tower, set the verbose output to 2. When I ran it that way, it showed me the error:
FAILED! => {"changed": false, "failed": true, "msg": "The destination
directory (/etc) is not writable by the current user."}
It was fixed by adding the become: yes line to my playbook, which now looks like this:
---
- hosts: range_1
  tasks:
    - name: ntp change
      become: yes
      blockinfile:
        content: |
          server Server1 iburst
          server Server2 iburst
        dest: /etc/ntp.conf
        insertafter: "Please consider joining the pool"
        marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
    - name: restart ntp
      become: yes
      service: name=ntpd state=restarted
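To also cover the second part of the question (dropping any wrong server entries so that only these two remain), one possible refinement is to delete the other server lines with lineinfile before the blockinfile task and restart ntpd through a handler, so the service only restarts when something actually changed. This is an untested sketch; the Server1/Server2 names come from the question, and the regexp assumes the unwanted entries all start with "server":
---
- hosts: range_1
  become: yes
  tasks:
    - name: Remove server entries other than the two managed ones
      lineinfile:
        path: /etc/ntp.conf
        # Python regex: drop any "server ..." line that is not Server1 or Server2
        regexp: '^server\s+(?!(Server1|Server2)\b)'
        state: absent
      notify: restart ntp
    - name: Add the two new servers
      blockinfile:
        path: /etc/ntp.conf
        insertafter: "Please consider joining the pool"
        # ntp.conf comments start with '#', so a '#'-style marker is safer here
        marker: "# {mark} ANSIBLE MANAGED BLOCK"
        block: |
          server Server1 iburst
          server Server2 iburst
      notify: restart ntp
  handlers:
    - name: restart ntp
      service:
        name: ntpd
        state: restarted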
I'm using Ansible to create 2 AWS EC2 instances and update them.
The playbook succeeds in creating the instances and checking that SSH is available, but it can't find the hosts using my tag.
Ansible playbook:
---
- name: "Create 2 webservers"
  hosts: localhost
  tasks:
    - name: "Start instances with a public IP address"
      ec2_instance:
        name: "Ansible host"
        key_name: "vockey"
        instance_type: t2.micro
        security_group: sg-017a67d545bce2047
        image_id: "ami-0ed9277fb7eb570c9"
        region: "us-east-1"
        state: running
        exact_count: 2
        tags:
          Role: Webservers
      register: result
    - name: "Wait for SSH to come up"
      wait_for:
        host: "{{ item.public_ip_address }}"
        port: 22
        sleep: 10
      loop: "{{ result.instances }}"

- name: "Update the webServers"
  hosts: tag_Role_Webservers
  become: yes
  tasks:
    - name: "Upgrade all packages"
      yum:
        name: '*'
        state: latest
The warning:
[WARNING]: Could not match supplied host pattern, ignoring: tag_Role_Webservers
PLAY [Update the webServers] ******************************************************************************************************************************************************************
skipping: no hosts matched
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
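This one is left unanswered above, so for context: the tag_Role_Webservers group only exists if a dynamic inventory (for example the amazon.aws.aws_ec2 inventory plugin with keyed_groups on tags) is loaded, and even then the inventory is not refreshed in the middle of a run. One hedged workaround, reusing the result variable already registered in the first play, is to add the new instances to an in-memory group with add_host and target that group in the second play. Untested sketch; the ec2-user login name is an assumption for this AMI:
    # Extra task appended to the first play, after the wait_for task
    - name: "Add new instances to an in-memory group"
      add_host:
        name: "{{ item.public_ip_address }}"
        groups: new_webservers
        ansible_user: ec2-user   # assumption: default user for the chosen AMI
      loop: "{{ result.instances }}"

# The second play then targets the in-memory group instead of the tag group
- name: "Update the webServers"
  hosts: new_webservers
  become: yes
  tasks:
    - name: "Upgrade all packages"
      yum:
        name: '*'
        state: latest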
I am developing a collection containing a number of playbooks for complex system configurations. However, I'd like to simplify their usage to a single module plugin named 'install'. The Python code of the module will decide which playbook to call to perform the actual task. Say:
- name: Some playbook written by the end user
  hosts: all
  tasks:
    ...
    - install:
        name: docker
        version: latest
        state: present
    ...
Based on the specified name, version and state, the Python code of the install module will programmatically invoke the adequate playbook(s) to perform the installation of the latest release of Docker. These playbooks are also included in the collection.
If task lists are a better fit than playbooks, then task lists it is.
In fact, I don't mind the precise implementation, as long as:
Everything is packed into a collection
The end user does it all with one 'install' task as depicted above
Is that even possible?
Yes. It's possible. For example, given the variable in a file
shell> cat foo_bar_install.yml
foo_bar_install:
  name: docker
  version: latest
  state: present
Create the playbooks in the collection foo.bar
shell> tree collections/ansible_collections/foo/bar/
collections/ansible_collections/foo/bar/
├── galaxy.yml
└── playbooks
├── install_docker.yml
└── install.yml
1 directory, 3 files
shell> cat collections/ansible_collections/foo/bar/playbooks/install_docker.yml
- name: Install docker
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: |
          Playbook foo.bar.install_docker.yml
          version: {{ version|d('UNDEF') }}
          state: {{ state|d('UNDEF') }}
shell> cat collections/ansible_collections/foo/bar/playbooks/install.yml
- import_playbook: "foo.bar.install_{{ foo_bar_install.name }}.yml"
  vars:
    version: "{{ foo_bar_install.version }}"
    state: "{{ foo_bar_install.state }}"
Given the inventory
shell> cat hosts
host1
host2
Run the playbook foo.bar.install.yml and provide the extra variables in the file foo_bar_install.yml
shell> ansible-playbook foo.bar.install.yml -e @foo_bar_install.yml
PLAY [Install docker] ************************************************************************
TASK [debug] *********************************************************************************
ok: [host1] =>
  msg: |-
    Playbook foo.bar.install_docker.yml
    version: latest
    state: present
ok: [host2] =>
  msg: |-
    Playbook foo.bar.install_docker.yml
    version: latest
    state: present
PLAY RECAP ***********************************************************************************
host1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
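If you later add another target, the dispatch scales by dropping another playbook next to install_docker.yml and changing name in the variable file. A hypothetical install_nginx.yml would follow the same pattern:
shell> cat collections/ansible_collections/foo/bar/playbooks/install_nginx.yml
- name: Install nginx
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: |
          Playbook foo.bar.install_nginx.yml
          version: {{ version|d('UNDEF') }}
          state: {{ state|d('UNDEF') }}
Running the same ansible-playbook foo.bar.install.yml -e @foo_bar_install.yml with name: nginx in the variable file would then import this playbook instead.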
I've been wanting to try out Ansible modules available for Netbox [1].
However, I find myself stuck right at the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point out where I am going wrong?
Note: The NetBox URL is an https://url setup with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API has changed its instantiation (ssl_verify is now replaced by requests session parameters).
I had to force ansible-galaxy to update to the latest netbox collection with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
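If you want to confirm which version actually got installed (Ansible 2.10 or later), you can list the collection:
ansible-galaxy collection list netbox.netbox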
All playbooks using API modules like netbox (and the same goes for gcp or aws) must use as their host not the target device but the host that will execute the playbook and call the API. Most of the time this is localhost, but it can also be a dedicated node like a bastion.
You can see in the example on the documentation you linked that it uses hosts: localhost.
Hence I think your playbook should be
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
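A possible invocation, with placeholder values for the two variables the play expects:
ansible-playbook setup-vlans.yml \
    -e netbox_url=https://netbox.example.com \
    -e netbox_token=0123456789abcdef0123456789abcdef01234567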
I'm trying to run an ansible playbook against multiple hosts that are running containers using the same name. There are 3 hosts each running a container called "web". I'm trying to use the docker connection.
I'm using the typical pattern of a hosts file, which works fine for running Ansible modules on the hosts.
- name: Ping
  ping:

- name: Add web container to inventory
  add_host:
    name: web
    ansible_connection: docker
    ansible_docker_extra_args: "-H=tcp://{{ ansible_host }}:2375"
    ansible_user: root
  changed_when: false

- name: Remove old logging directory
  delegate_to: web
  file:
    state: absent
    path: /var/log/old_logs
It only works against the first host in the hosts file
PLAY [all]
TASK [Gathering Facts]
ok: [web1]
ok: [web2]
ok: [web3]
TASK [web-playbook : Ping]
ok: [web1]
ok: [web2]
ok: [web3]
TASK [web-playbook : Add sensor container to inventory]
ok: [web1]
PLAY RECAP
web1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web2 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web3 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I've tried setting name to web_{{ ansible_host }} to make it unique between hosts, but it then tries to connect to web_web1. I've been running the command sudo docker exec web rm -rf /var/log/old_logs, which of course works, but I'd like to be able to use the Ansible modules directly in the docker containers.
The result you get is absolutely expected. Quoting the add_host documentation:
This module bypasses the play host loop and only runs once for all the hosts in the play, if you need it to iterate use a with-loop construct.
i.e. you cannot rely on the host loop for add_host and need to make a loop yourself.
Moreover, you definitely need different names (i.e. inventory_hostname) for your dynamically created hosts, but since all your docker containers have the same name, their ansible_host should be the same.
Assuming all your docker host machines are in the group dockerhosts, the following playbook should do the job. I'm currently not in a situation where I can test this myself, so you may have to adjust a bit. Let me know if it helped you and if I have to edit my answer.
Note that even though the add_host task will not naturally loop, I kept the hosts of your original group in the first play so that facts are gathered and correctly populated in the hostvars magic variable.
---
- name: Create dynamic inventory for docker containers
  hosts: dockerhosts
  tasks:
    - name: Add web container to inventory
      add_host:
        name: "web_{{ item }}"
        groups:
          - dockercontainers
        ansible_connection: docker
        ansible_host: web
        ansible_docker_extra_args: "-H=tcp://{{ hostvars[item].ansible_host }}:2375"
        ansible_user: root
      loop: "{{ groups['dockerhosts'] }}"

- name: Play needed commands on containers
  hosts: dockercontainers
  tasks:
    - name: Remove old logging directory
      file:
        state: absent
        path: /var/log/old_logs
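For completeness, the dockerhosts group this playbook assumes could be a plain INI inventory along these lines (the addresses are made up):
[dockerhosts]
web1 ansible_host=192.168.1.11
web2 ansible_host=192.168.1.12
web3 ansible_host=192.168.1.13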
I am trying to add pool members to a BIG-IP pool using bigip_pool_member.
Tested on Ansible versions 2.5 and 2.6.
Result: it ALWAYS returns changed, even when it is not making any changes.
Invocation command:
ansible-playbook -i test_inventory add_pool_members.yaml --extra-vars '{"hostgroup": "test-bigip"}'
I am wondering if anyone has insights into what could be going on?
The contents of the playbook are as follows:
---
- hosts: "{{ hostgroup }}"
  gather_facts: no
  tasks:
    - name: Add servers to connection pool
      bigip_pool_member:
        user: username
        password: password
        server: "{{ inventory_hostname }}"
        validate_certs: no
        state: present
        partition: test
        pool: testpool
        host: 14.34.45.X
        name: test-server
        port: 80
        description: test
      delegate_to: localhost
Run Result
PLAY [f5-test] *****************************************************************************
TASK [Add servers to connection pool ] *****************************************************
changed: [f5-test -> localhost]
PLAY RECAP *********************************************************************************
f5-test : ok=1 changed=1 unreachable=0 failed=0
This could be related to this known bug in the module.
When running playbook with bigip_pool_member module with state: present against live device, each run results in change being made when in reality there's no need for a change.
I'm neither an F5 nor a network expert, but from what I understand this happens if you set a monitor on your pool.
There is already a pull request with fixes related to correctly reporting the state of a down machine. Check whether it applies to you; otherwise I would suggest adding a detailed comment on the bug.
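If you want to attach evidence to that bug report, re-running your exact invocation with extra verbosity shows which parameters the module believes have changed on each run:
ansible-playbook -i test_inventory add_pool_members.yaml --extra-vars '{"hostgroup": "test-bigip"}' -vvv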