I'm using Ansible to create two AWS EC2 instances and update them.
The playbook successfully creates the instances and checks that SSH is available,
but it can't find the hosts using my tag.
Ansible playbook:
---
- name: "Create 2 webservers"
  hosts: localhost
  tasks:
    - name: "Start instances with a public IP address"
      ec2_instance:
        name: "Ansible host"
        key_name: "vockey"
        instance_type: t2.micro
        security_group: sg-017a67d545bce2047
        image_id: "ami-0ed9277fb7eb570c9"
        region: "us-east-1"
        state: running
        exact_count: 2
        tags:
          Role: Webservers
      register: result

    - name: "Wait for SSH to come up"
      wait_for:
        host: "{{ item.public_ip_address }}"
        port: 22
        sleep: 10
      loop: "{{ result.instances }}"

- name: "Update the webServers"
  hosts: tag_Role_Webservers
  become: yes
  tasks:
    - name: "Upgrade all packages"
      yum:
        name: '*'
        state: latest
The warning:
[WARNING]: Could not match supplied host pattern, ignoring: tag_Role_Webservers
PLAY [Update the webServers] ******************************************************************************************************************************************************************
skipping: no hosts matched
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
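No answer is recorded for this question here, but note that a group like tag_Role_Webservers only exists if a dynamic inventory (e.g. the amazon.aws.aws_ec2 inventory plugin) supplies it, and the inventory is not refreshed between plays of a single run anyway. A minimal sketch of one common workaround, assuming no dynamic inventory: add the freshly created instances to an in-memory group right after the wait_for task (the group name new_webservers is my own choice):

```yaml
    - name: "Add the new instances to an in-memory group"
      add_host:
        name: "{{ item.public_ip_address }}"
        groups: new_webservers
      loop: "{{ result.instances }}"

# ...then target that in-memory group in the second play instead of the tag pattern:
- name: "Update the webservers"
  hosts: new_webservers
  become: yes
  tasks:
    - name: "Upgrade all packages"
      yum:
        name: '*'
        state: latest
```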
Related
I am new to Ansible. I am trying to get info from an ESXi host about the VMs installed on it; no vCenter is available.
I have created the playbook and the connection is working, however I am getting an error in the debug task. Any help is much appreciated. Thanks
Playbook:
- hosts: esxi
  gather_facts: false
  become: false
  tasks:
    - name: Gather all registered virtual machines
      community.vmware.vmware_vm_info:
        hostname: '{{ hostname }}'
        username: '{{ ansible_user }}'
        password: '{{ password }}'
        validate_certs: no
      delegate_to: localhost
      register: vminfo

    - debug:
        msg: "{{ item.guest_name }}, {{ item.ip_address }}"
        with_items:
          - "{{ vminfo.virtual_machines }}"
The Error is:
TASK [debug] *************************************************************************************************************************
fatal: [192.168.233.202]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'item' is undefined\n\nThe error appears to be in '/home/sciclunam/task4/playbook11.yaml': line 14, column 8, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
PLAY RECAP ***************************************************************************************************************************
192.168.233.202 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Correct indentation is essential in Ansible.
Your with_items is not correctly indented: it must sit at task level, aligned with the module name (debug), not inside the module's options.
Please notice the indentation of with_items in my sample.
Here is my sample
# Create Directory Structure
- name: Create directory application
  file:
    path: "{{ item }}"
    state: directory
    mode: 0755
  with_items:
    - "/opt/project1"
    - "/opt/project1/script"
    - "/opt/project1/application"
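Applied to the playbook in the question above, the fix would be to move with_items out of the debug module's options so it sits at task level:

```yaml
    - debug:
        msg: "{{ item.guest_name }}, {{ item.ip_address }}"
      with_items:
        - "{{ vminfo.virtual_machines }}"
```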
I've been wanting to try out the Ansible modules available for NetBox [1].
However, I find myself stuck right at the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point me where I am going wrong?
Note: The NetBox URL is an https:// URL set up with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API has changed its instantiation (ssl_verify is now replaced by requests session parameters).
I had to force ansible-galaxy to update to the latest netbox collection with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
All playbooks using API modules like NetBox's (the same goes for GCP or AWS) must use as host not the target, but the host that will execute the playbook to call the API. Most of the time this is localhost, but it can also be a dedicated node like a bastion.
You can see in the example in the documentation you linked that it uses hosts: localhost.
Hence I think your playbook should be
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
I'm trying to write a role with tasks that filter out several EC2 instances, add them to the inventory, and then stop a PHP service on them.
This is how far I've gotten; the add_host idea I'm copying from here: http://docs.catalystcloud.io/tutorials/ansible-create-x-servers-using-in-memory-inventory.html
My service task does not appear to run on the target instances, but instead on the host specified in the playbook that runs this role.
---
- name: Collect ec2 data
  connection: local
  ec2_remote_facts:
    region: "us-east-1"
    filters:
      "tag:Name": MY_TAG
  register: ec2_info

- name: "Add the ec2 hosts to a group"
  add_host:
    name: "{{ item.id }}"
    groups: foobar
    ansible_user: root
  with_items: "{{ ec2_info.instances }}"

- name: Stop the service
  hosts: foobar
  become: yes
  gather_facts: false
  service: name=yii-queue#1 state=stopped enabled=yes
UPDATE: when I try baptistemm's suggestion, I get this:
PLAY [MAIN_HOST_NAME] ***************************
TASK [ec2-manage : Collect ec2 data]
*******************************************
ok: [MAIN_HOST_NAME]
TASK [ec2-manage : Add the hosts to a new group]
*******************************************************
PLAY RECAP **********************************************************
MAIN_HOST_NAME : ok=1 changed=0 unreachable=0 failed=0
UPDATE #2 - yes, the ec2_remote_facts task does return instances (using the real tag value, not the fake one I put in this post). Also, I have seen ec2_instance_facts, but I ran into some issues with that (boto3 is required, although I have a workaround there; still, I'm trying to fix the current issue first).
If you want to run tasks on a group of targets, you need to define a new play after you have created the in-memory list of targets with add_host (as mentioned in your link). So you need something like this:
… # omit the code before
- name: "Add the ec2 hosts to a group"
  add_host:
    name: "{{ item.id }}"
    groups: foobar
    ansible_user: root
  with_items: "{{ ec2_info.instances }}"

- hosts: foobar
  gather_facts: false
  become: yes
  tasks:
    - name: Stop the service
      service: name=yii-queue#1 state=stopped enabled=yes
I am getting an error while trying to retrieve key values from the Consul KV store.
Our key values are stored under the config/app-name/ folder; there are many keys. I want to retrieve all the key values from Consul using Ansible.
But I am getting the following error:
PLAY [Adding host to inventory] **********************************************************************************************************************************************************
TASK [Adding new host to inventory] ******************************************************************************************************************************************************
changed: [localhost]
PLAY [Testing consul kv] *****************************************************************************************************************************************************************
TASK [show the lookups] ******************************************************************************************************************************************************************
fatal: [server1]: FAILED! => {"failed": true, "msg": "{{lookup('consul_kv','config/app-name/')}}: An unhandled exception occurred while running the lookup plugin 'consul_kv'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Error locating 'config/app-name/' in kv store. Error was 500 No known Consul servers"}
PLAY RECAP *******************************************************************************************************************************************************************************
server1 : ok=0 changed=0 unreachable=0 failed=1
localhost : ok=1 changed=1 unreachable=0 failed=0
Here is the code I am trying:
---
- name: Adding host to inventory
  hosts: localhost
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"

- name: Testing consul kv
  hosts: "{{ target }}"
  vars:
    kv_info: "{{ lookup('consul_kv', 'config/app-name/') }}"
  become: yes
  tasks:
    - name: show the lookups
      debug: msg="{{ kv_info }}"
Removing and adding a folder work well, but getting the key values from the Consul cluster throws the error. Please suggest a better way here.
- name: remove folder from the store
  consul_kv:
    key: 'config/app-name/'
    value: 'removing'
    recurse: true
    state: absent

- name: add folder to the store
  consul_kv:
    key: 'config/app-name/'
    value: 'adding'
I tried this, but still get the same error:
---
- name: Adding host to inventory
  hosts: localhost
  environment:
    ANSIBLE_CONSUL_URL: "http://consul-1.abcaa.com"
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"
    - name: show the lookups
      debug: kv_info= "{{lookup('consul_kv','config/app-name/')}}"
All lookup plugins in Ansible are always evaluated on localhost, see docs:
Note:
Lookups occur on the local computer, not on the remote computer.
I guess you expect kv_info to be populated by fetching from Consul on the {{ target }} server.
But this lookup is actually executed on your Ansible control host (localhost), and if you have no ANSIBLE_CONSUL_URL set there, you get the "No known Consul servers" error.
When you use the consul_kv module (to create/delete folders), it is executed on the {{ target }} host, in contrast to the consul_kv lookup plugin.
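A minimal sketch of how the lookup could then be made to work, assuming the control host can reach the Consul server directly (the host and port below are illustrative, derived from the question's ANSIBLE_CONSUL_URL; host, port and recurse are documented options of the consul_kv lookup plugin):

```yaml
---
- name: Testing consul kv on the control host
  hosts: localhost
  tasks:
    - name: show the lookups
      debug:
        msg: "{{ lookup('consul_kv', 'config/app-name/', host='consul-1.abcaa.com', port=8500, recurse=true) }}"
```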
So, I am a complete noob at Ansible and YAML, and I'm trying to learn a bit, but this is driving me crazy so far... I am using Ansible Tower. What I'm trying to do is replace some text in the ntp.conf file of certain servers and update them with a new server. My playbook looks like this:
---
- hosts: range_1
  tasks:
    - name: ntp change
      become_user: ansible
      blockinfile:
        content: |
          server Server1 iburst
          server Server2 iburst
        dest: /etc/ntp.conf
        insertafter: "Please consider joining the pool"
        marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
    - name: restart ntp
      service: name=ntpd state=restarted
But all I am getting is this:
PLAY RECAP
Host_1 : ok=1 changed=0 unreachable=0 failed=0
Host_2 : ok=1 changed=0 unreachable=0 failed=0
Host_3 : ok=1 changed=0 unreachable=0 failed=0
Ansible runs and does not exit with an error. However, it is not making any changes to the systems (I assumed so because of changed=0); I logged on to those systems and indeed no changes had been applied.
I have checked the syntax and it is correct, but I'm not sure what I'm missing. I really need to understand how I can add two servers to ntp.conf and, if a server has some wrong info, delete it and add just those two servers. Any help or guidance will be very much appreciated.
For anybody that might be reading: when running a playbook from Ansible or Ansible Tower, set the verbose output to 2. When I ran it, it showed me the error:
FAILED! => {"changed": false, "failed": true, "msg": "The destination
directory (/etc) is not writable by the current user."}
It was fixed by adding the become: yes line to my playbook, which now looks like this:
---
- hosts: range_1
  tasks:
    - name: ntp change
      become: yes
      blockinfile:
        content: |
          server Server1 iburst
          server Server2 iburst
        dest: /etc/ntp.conf
        insertafter: "Please consider joining the pool"
        marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
    - name: restart ntp
      become: yes
      service: name=ntpd state=restarted
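As a side note, this playbook restarts ntpd on every run, even when nothing changed. The idiomatic pattern is a handler triggered by notify, so the restart only happens when blockinfile actually modifies the file. A sketch under the same assumptions as the playbook above (I have also switched the marker to ntp.conf's # comment syntax, since an HTML-style <!-- --> marker would not be a valid comment in that file):

```yaml
---
- hosts: range_1
  become: yes
  tasks:
    - name: ntp change
      blockinfile:
        content: |
          server Server1 iburst
          server Server2 iburst
        dest: /etc/ntp.conf
        insertafter: "Please consider joining the pool"
        marker: "# {mark} ANSIBLE MANAGED BLOCK"
      notify: restart ntp
  handlers:
    - name: restart ntp
      service:
        name: ntpd
        state: restarted
```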