Write a task in main.yml to stop and start the "ssh" service using the service module in Ansible.
---
- hosts: localhost
  become: true
  become_method: sudo
  tasks:
    - name: stop service
      service:
        name: ssh
        state: stopped
    - name: start service
      service:
        name: ssh
        state: started
When I run it, it gives the error below:
[WARNING]: Unable to parse /projects/challenge/localhost as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
ERROR! unexpected parameter type in action: <class 'ansible.parsing.yaml.objects.AnsibleSequence'>
The error appears to be in '/projects/challenge/fresco_module/tasks/main.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be
---
- hosts: localhost
^ here
First, you should be able to SSH to localhost.
You can try:
ssh user@localhost date
You can create a hosts file, name it hosts, and add the following content to it:
[localhost]
localhost
[localhost:vars]
ansible_ssh_user=user
ansible_ssh_pass=pass
ansible_sudo_pass=sudopass
And run the playbook as
ansible-playbook -i hosts main.yml
Using the command module, I was able to stop and start the service; sudo service ssh stop and sudo service ssh start served my purpose.
I was not able to do so with the service module, and I still don't know why.
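For reference, a minimal sketch of what those command-module tasks might look like; the task names are illustrative, and become: true stands in for the sudo in the commands above:
- name: stop ssh with the command module
  command: service ssh stop
  become: true
- name: start ssh with the command module
  command: service ssh start
  become: true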
Resolved at my end by using the complete path for the .yml file:
ansible-playbook -i /etc/ansible/hosts myfirstplaybook.yml
At my job there is a playbook, developed in the following way, that is executed by Ansible Tower.
This is the file that Ansible Tower executes; it calls the playbook.
report.yaml:
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - name: "Execute"
      include_role:
        name: 'fusion'
main.yaml from the fusion role:
- name: "hc fusion"
include_tasks: "hc_fusion.yaml"
hc_fusion.yaml from the fusion role:
- name: "FUSION"
shell: ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars 'fusion_ip_ha={{item.ip}} fusion_user={{item.username}} fusion_pass={{item.password}} fecha="{{fecha.stdout}}" fusion_ansible_become_user={{item.ansible_become_user}} fusion_ansible_become_pass={{item.ansible_become_pass}}'
fusion.yaml from the fusion role:
- hosts: localhost
  vars:
    ansible_become_user: "{{fusion_ansible_become_user}}"
    ansible_become_pass: "{{fusion_ansible_become_pass}}"
  tasks:
    - name: Validate
      ignore_unreachable: yes
      shell: service had status
      delegate_to: "{{fusion_user}}@{{fusion_ip_ha}}"
      become: True
      become_method: su
This is a summary of the entire run.
It worked previously, but now it throws the following error:
stdout: PLAY [localhost] \nTASK [Validate] [1;31mfatal: [localhost -> gandalf@10.66.173.14]: UNREACHABLE! => {\"changed\": false, \"msg\": \"Failed to connect to the host via ssh: Warning: Permanently added '10.66.173.14' (RSA) to the list of known hosts.\ngandalf@10.66.173.14: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password), \"skip_reason\": \"Host localhost is unreachable\"
When I execute ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars XXXXXXXX from the command line as the awx user, it works.
I also validated the connection from the server where Ansible Tower runs to the host I want to reach, and the ssh command lets me connect without asking for a password as the awx user.
fusion.yaml does not explicitly specify a connection plugin, so the default ssh connection type is used. For localhost this approach usually brings a number of related problems (SSH keys, known_hosts, loopback interfaces, etc.). If you need to run tasks on localhost, you should set the connection plugin to local, just like in your report.yaml playbook.
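A minimal sketch of that change at the top of fusion.yaml; only the connection line is new, the rest is as in the question:
- hosts: localhost
  connection: local
  vars:
    ansible_become_user: "{{fusion_ansible_become_user}}"
    ansible_become_pass: "{{fusion_ansible_become_pass}}"
  tasks:
    # ... same Validate task as in the question ...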
Additionally, as Zeitounator mentioned, running one Ansible playbook from another with the shell module is really bad practice. Please avoid it. Ansible has a number of mechanisms for code re-use (includes, imports, roles, etc.); see the sketch below.
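For example, instead of shelling out to ansible-playbook, the per-item variables could be passed straight into the included task file. This is only a sketch of the idea; fusion_targets is a hypothetical list variable with the same fields used in the original shell command:
- name: "hc fusion"
  include_tasks: "hc_fusion.yaml"
  vars:
    fusion_ip_ha: "{{ item.ip }}"
    fusion_user: "{{ item.username }}"
    fusion_pass: "{{ item.password }}"
    fusion_ansible_become_user: "{{ item.ansible_become_user }}"
    fusion_ansible_become_pass: "{{ item.ansible_become_pass }}"
  loop: "{{ fusion_targets }}"  # hypothetical list of fusion hosts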
How can I run a local command on an Ansible control server, if that control server does not have an SSH daemon running?
If I run the following playbook:
- name: Test commands
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Test local action
      local_action: command echo "hello world"
I get the following error:
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host localhost port 22: Connection refused", "unreachable": true}
It seems that local_action is the same as delegate_to: 127.0.0.1, so Ansible tries to SSH to localhost. However, there is no SSH daemon running on the local controller host (only on the remote machines).
So my immediate question is how to run a specific command from Ansible, without Ansible first trying to SSH to localhost.
Crucial addition, not in the original question:
My host_vars contained the following line:
ansible_connection: ssh
how to run a specific command from Ansible, without Ansible first trying to SSH to localhost.
connection: local is sufficient to make the tasks run on the controller without using SSH.
Try,
- name: Test commands
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Test local action
      command: echo "hello world"
I'll answer with the details myself; perhaps it will be useful to someone:
In my case:
ansible_connection was set to ssh in the host_vars.
ansible_host was set to localhost by local_action.
Combined, this led to an SSH connection to localhost, which failed.
Further considerations:
delegate_to and local_action set ansible_host and ansible_connection, but any setting in host_vars, the playbook, or the task overrides them.
connection: local only sets ansible_connection (ansible_host is unmodified), but any setting of ansible_connection in host_vars, the playbook, or the task overrides it.
So my solution was to either remove ansible_connection from the host_vars, or set the ansible_connection variable on the task.
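A minimal sketch of the second option, setting ansible_connection on the task itself so it wins over the host_vars value (the task shown is just the echo example from above):
- name: Test local action
  command: echo "hello world"
  vars:
    ansible_connection: local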
That looks wrong to me.
- name: import profiles of VMs
  connection: local
  hosts: localhost
  gather_facts: false
  tasks:
    - name: list files
      find:
        paths: .
        recurse: no
      delegate_to: localhost
It is still asking me for the SSH password:
❯ ansible-playbook playbooks/import_vm_profiles.yml -i localhost, -k
[WARNING]: Unable to parse the plugin filter file /Users/fredericclement/devops/ansible_refactored/etc/Plugin_filters.yml as module_blacklist is not a list. Skipping.
SSH password:
I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running on VirtualBox. I run a playbook that works when I run it from the command line with "ansible-playbook", but it fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Ansible Tower, but I can't find it.
I get this warning no matter what changes I make to the inventory (hosts) file.
$ ansible-playbook 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin
PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
[WARNING]: Could not match supplied host pattern, ignoring: bigip
PLAY [Sample pool playbook] ****************************************************
skipping: no hosts matched
I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file.
Here's my hosts file:
192.168.68.253
192.168.68.254
192.168.1.165
[centos]
dad2 ansible_ssh_host=192.168.1.165
[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
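For reference, a sketch of what the same groups might look like in the YAML inventory format mentioned above; this hosts.yml is an illustration, not taken from the question:
all:
  hosts:
    192.168.68.253:
    192.168.68.254:
    192.168.1.165:
  children:
    centos:
      hosts:
        dad2:
          ansible_ssh_host: 192.168.1.165
    bigip:
      hosts:
        bigip1:
          ansible_host: 192.168.68.254
        bigip2:
          ansible_host: 192.168.68.253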
Here's my playbook:
---
- name: Sample pool playbook
  hosts: bigip
  connection: local
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{inventory_hostname}}'
        validate_certs: no
I replaced hosts: bigip with hosts: all and, in Tower, specified the inventory as bigip, which contains only the two hosts I want to change. This seems to provide the output I am looking for.
For the ansible-playbook command line, I added --limit bigip, and this also seems to provide the output I am looking for.
So things appear to be working; I just don't know whether this is best practice.
If you get the error below while running a playbook with the command
ansible-playbook -i test-project/inventory.txt playbook.yml
{"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.24.10 port 22: Connection timed out", "unreachable": true}
The solution is to add, in the file ansible.cfg:
[defaults]
inventory=/etc/ansible/hosts
I think you need to remove the connection: local.
You have specified in hosts: bigip that you want these tasks to run only on hosts in the bigip group. You then specify connection: local, which causes the tasks to run on the controller node (i.e. localhost) rather than on the nodes in the bigip group. Localhost is not a member of the bigip group, and so none of the tasks in the play will trigger.
Check for special characters in the absolute path of the hosts file or playbook. In case you copied the path directly from PuTTY, try copying and pasting it from Notepad or another editor.
For me the issue was the format of the /etc/ansible/hosts file. You should use the :children suffix in order to use groups of groups like this:
[dev1]
dev_1 ansible_ssh_host=192.168.1.55 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[dev2]
dev_2 ansible_ssh_host=192.168.1.68 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[devs:children]
dev1
dev2
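A play can then target the parent group; a minimal sketch (the ping task is only an illustration):
- hosts: devs
  gather_facts: false
  tasks:
    - name: check connectivity to all dev hosts
      ping: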
I'm trying to develop an Ansible playbook to generate a VM. I wrote a myvm role that contains the tasks that orchestrate vmware_guest. Those tasks contain delegate_to: localhost, which vmware_guest requires.
Then I added my soon-to-be VM to the hosts file with the following entry:
[myvms]
myvm1
and extended site.yml with:
- hosts: myvms
  roles:
    - myvm
Now, when I run:
ansible-playbook site.yml -i hosts --limit myvm1
it fails with:
fatal: [myvm1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Connection reset by 192.168.10.13 port 22\r\n", "unreachable": true}
It seems Ansible tries to connect to the VM's IP before it even reads the role that creates the VM and delegates to localhost. Adding delegate_to to site.yml fails as well, however.
How can I fix my Ansible scripts to properly generate the VM for me?
Add gather_facts: false to the play.
- hosts: myvms
  gather_facts: false
  roles:
    - myvm
By default, Ansible connects to the target machines and runs a script that collects data (facts). Disabling fact gathering prevents that initial connection, so the play can reach the role's tasks, which delegate to localhost.
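For context, a minimal sketch of what the delegated task inside the myvm role might look like; the vcenter_* variables and the module arguments are illustrative assumptions, not taken from the question:
- name: create the VM via vCenter (runs on the controller)
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"    # hypothetical connection details
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    name: "{{ inventory_hostname }}"
    state: poweredon
  delegate_to: localhost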
I'm running a simple ansible playbook and getting an error:
ERROR: parse error: playbooks must be formatted as a YAML list, got type 'str'
---
- hosts: all
  tasks:
    - name: Get server availability by pinging it
      ping:
    - name: Get server hostname
      command: hostname
Not sure where the problem is. Ansible v1.9.6
Answer from the comments: the -i flag was missing in ansible-playbook hostname.yml inventory, so the inventory file was being parsed as a second playbook.
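Presumably the corrected invocation, using the file names from that comment, would be:
ansible-playbook -i inventory hostname.yml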