Ansible 2.9 on Ubuntu 18.
I'm getting the following error from Ansible when trying to change the state of a service with the 'systemd' module.
failed: [host.domain.com] (item=service_1) => {"ansible_loop_var": "item", "changed": false, "item": "service_1", "module_stderr": "Shared connection to host.domain.com closed.\r\n", "module_stdout": "\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
- name: Stop services
  ansible.builtin.systemd:
    name: "{{ serv }}"
    state: stopped
  with_servs:
    - service_1
    - service_2
    - service_3
  become: yes
The code above works fine with an account that has full sudo access (same as root privileges).
It fails as shown above with an account that has limited sudo access (sudo access to specific commands only, such as /bin/systemctl * service_1*, /bin/systemctl * service_2*, /bin/systemctl * service_3*).
Which sudo permissions are needed to run ansible.builtin.systemd? I'm trying to find out what command Ansible sends to the device so I can check whether I gave the account the right permissions, but I have had no success finding that yet (any hints?).
First, change your with_servs to loop:; it is much easier to write playbooks with loop. Set become as a global setting at the play level instead of on each task. A workaround would be to execute the command via the command module, but that is not recommended because it will run the command in every situation, even when the service is already stopped.
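For reference, a minimal sketch of that rewrite, with loop and play-level become (the hosts pattern is just a placeholder; the service names are taken from the question):
- hosts: all
  become: yes
  tasks:
    - name: Stop services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
      loop:
        - service_1
        - service_2
        - service_3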
I have a GitLab pipeline that uses a GitLab runner to deploy from. From the runner, I run Ansible to reach out and configure one of our servers.
In my pipeline step where I run ansible-playbook, I have the following setup:
deploy:
  image: registry.com/ansible
  stage: deploy
  script:
    - ansible-playbook server.yml --inventory=hosts.yml
This reaches out to my host and begins to deploy, but hits a snag on the first task that has a "become: yes" statement in it. It fails with the following error:
TASK [mytask : taskOne] ************
task path: my/file/location/path.yml
fatal: [server01]: FAILED! => {
"changed": false,
"module_stderr": "/bin/sh: sudo: command not found\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
I can log in to my server (server01) and run sudo without issues. Any thoughts on what could be causing this? Thanks.
My guess is that you are not using the same user in the GitLab pipeline as for your sudo test on the machine. In that case you should become that user on the host and run the sudo command yourself to troubleshoot it. It looks like a matter of PATH rather than something related to the sudoers configuration (which is the more common problem).
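For example, a quick check along those lines, assuming the runner connects as a user named deploy (adjust to whatever remote_user your play actually uses):
ssh deploy@server01                       # log in as the same user Ansible connects with
echo $PATH                                # is the directory containing sudo on the PATH?
command -v sudo || ls -l /usr/bin/sudo    # is sudo installed at all?
sudo -l                                   # what is this user allowed to run via sudo?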
As a workaround (it will not solve the sudo problem) you could try to use an alternate become_method such as su; more detail in the docs.
- name: Run a command with become su
  command: somecommand
  become: yes
  become_method: su
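If su should apply to more than a single task, the same setting can also be pushed down as an inventory variable instead of being repeated on every task; a minimal sketch in a YAML inventory (ansible_become_method is the standard variable name, the group layout is just an example):
all:
  vars:
    ansible_become_method: su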
I have the following logic that I would like to implement with Ansible:
Before updating some operating system packages, I want to check some other remote dependencies, which involves querying some endpoints and deciding whether the next version is good or not.
The script new_version_available returns 0 if there is something new and 1 if there isn't.
To avoid installing unnecessary packages in production, or opening unnecessary ports in my firewall in the DMZ, I would like to run this script locally on my host and, if it succeeds, run the next task remotely.
tasks:
  - name: Check if there is new version available
    command: "{{playbook_dir}}/new_version_available"
    delegate_to: 127.0.0.1
    register: new_version_available
    ignore_errors: False

  - name: Install our package
    command:
      cmd: '/usr/bin/our_installer update'
      warn: False
    when: new_version_available is succeeded
Which gives me the following error:
fatal: [localhost -> 127.0.0.1]: FAILED! => {"changed": false, "cmd": "/home/foo/ansible-deploy-bar/new_version_available", "msg": "[Errno 2] No such file or directory", "rc": 2}
That means that my command cannot be found; however, my script exists and I have permission to access it.
My development environment, where I'm testing the playbook, is running in a virtual machine behind NAT, with guest port 22 forwarded to host port 2222, so to log in to my VM I run ssh root@localhost -p 2222. My inventory looks like:
foo:
  hosts:
    localhost:2222
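For comparison, a minimal sketch of how that NAT forward is usually written in a YAML inventory with the standard ansible_host/ansible_port variables (the alias vm is just an example name):
foo:
  hosts:
    vm:
      ansible_host: 127.0.0.1
      ansible_port: 2222
      ansible_user: root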
My question is:
What would be the Ansible way to achieve what I want, i.e. run a command locally, register the result, and use it as a condition in a task? Or should I run the command and pass the result to Ansible as an environment variable?
I'm using this documentation as a reference: https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html
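For context, the general pattern being asked about (run a check locally, register the result, gate the remote task on it) looks roughly like the sketch below. It assumes the script sits next to the playbook on the controller, uses delegate_to: localhost with run_once so the check runs only once, and enables ignore_errors so a non-zero exit simply skips the install instead of failing the play:
- hosts: foo
  tasks:
    - name: Check if there is a new version available (runs on the controller)
      command: "{{ playbook_dir }}/new_version_available"
      delegate_to: localhost
      run_once: true
      register: new_version_available
      ignore_errors: true

    - name: Install our package (runs on the remote host)
      command: /usr/bin/our_installer update
      when: new_version_available is succeeded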
I have a system with a rotating, token-based sudo password that changes every 30 seconds or so. I'm using Ansible with privilege escalation. I know I can pass the --ask-su-pass option, but the problem I've run into is that the password is one thing at the time it is asked for, yet it has changed by the time Ansible actually attempts to sudo.
Is there a way to get prompted at the time of sudo-ing?
Ansible Script
---
- name: Install & Setup Docker
  hosts: ece
  become: true
  tasks:
    - name: Install Required Packages
      yum:
        name:
          - "yum-utils"
          - "device-mapper-persistent-data"
          - "lvm2"
        state: latest
      register: yum_result

    - debug:
        var: yum_result
        verbosity: 2
What happens when I run:
PLAY [Install & Setup Docker]
TASK [Gathering Facts] *********************************************************
fatal: [xxxx]: FAILED! => {"changed": false, "module_stderr": "Shared connection to xxxx closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
to retry, use: --limit
So obviously I need a password for sudo. The problem is that the sudo password is a rotating password, so it changes every 30 seconds or so.
I can use the --ask-su-pass flag to enter a sudo password, but by the time it is first used I get:
fatal: [xxxx]: FAILED! => {"msg": "Incorrect sudo password"}
because the password has changed since I entered it. I'm trying to figure out whether I can get a prompt at the point where Ansible is actually entering the sudo password on the remote system.
I am trying to run a simple playbook against OpenStack in the admin tenant using Ansible Tower, both running on localhost. Here is the playbook:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Security Group
      os_security_group:
        state: present
        name: example
I have done the following configuration in Tower (screenshots omitted): credentials, the job template, and the test inventory.
With this configuration, I am getting this error:
TASK [Security Group] **********************************************************
13:35:48
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Any idea what this could be? It looks like a credential problem.
Untick Enable Privilege Escalation - it's not necessary. Your OpenStack privilege/authorisation will be tied to your OpenStack credentials (admin in this case), not the user running the Ansible task.
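In other words, the play itself needs no privilege escalation; authentication stays entirely on the OpenStack side. A minimal sketch, assuming the Tower credential is exposed as the usual OS_* environment variables (the auth block below just makes that explicit; os_security_group can also pick the variables up on its own):
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Security Group
      os_security_group:
        auth:
          auth_url: "{{ lookup('env', 'OS_AUTH_URL') }}"
          username: "{{ lookup('env', 'OS_USERNAME') }}"
          password: "{{ lookup('env', 'OS_PASSWORD') }}"
          project_name: "{{ lookup('env', 'OS_PROJECT_NAME') }}"
        state: present
        name: example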
I've been having some trouble with restarting the SSH daemon with Ansible.
I'm using the latest software as of May 11 2015 (Ansible 1.9.1 / Vagrant 1.7.2 / VirtualBox 4.3.26 / Host: OS X 10.10.1 / Guest: ubuntu/trusty64)
tl;dr: There appears to be something wrong with the way I'm invoking the service syntax.
Problem With Original Use Case (Handler)
Playbook
- hosts: all
  remote_user: vagrant
  tasks:
    ...
    - name: Forbid SSH root login
      sudo: yes
      lineinfile: dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present
      notify:
        - restart ssh
    ...
  handlers:
    - name: restart ssh
      sudo: yes
      service: name=ssh state=restarted
Output
NOTIFIED: [restart ssh]
failed: [default] => {"failed": true}
FATAL: all hosts have already failed -- aborting
The nginx handler completed successfully with nearly identical syntax.
Task Also Fails
Playbook
- name: Restart SSH server
  sudo: yes
  service: name=ssh state=restarted
Same output as the handler use case.
Ad Hoc Command Also Fails
Shell
> ansible all -i ansible_inventory -u vagrant -k -m service -a "name=ssh state=restarted"
Inventory
127.0.0.1:8022
Output
127.0.0.1 | FAILED >> {
"failed": true,
"msg": ""
}
Shell command in box works
When I SSH in and run the usual command, everything works fine.
> vagrant ssh
> sudo service ssh restart
ssh stop/waiting
ssh start/running, process 7899
> echo $?
0
Command task also works
Output
TASK: [Restart SSH server] ****************************************************
changed: [default] => {"changed": true, "cmd": ["service", "ssh", "restart"], "delta": "0:00:00.060220", "end": "2015-05-11 07:59:25.310183", "rc": 0, "start": "2015-05-11 07:59:25.249963", "stderr": "", "stdout": "ssh stop/waiting\nssh start/running, process 8553", "warnings": ["Consider using service module rather than running service"]}
As we can see in the warning, we're supposed to use the service module, but I'm still not sure where the snag is.
As the comments above state, this is an Ansible issue that will apparently be fixed in the 2.0 release.
I just changed my handler to use the command module and moved on:
- name: restart sshd
  command: service ssh restart
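For what it's worth, once on Ansible 2.0+ the original handler form should work again; a minimal sketch using the current become syntax instead of the deprecated sudo keyword:
handlers:
  - name: restart ssh
    become: yes
    service:
      name: ssh
      state: restarted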