Run playbook against OpenStack with Ansible Tower

I am trying to run a simple playbook against OpenStack in the admin tenant using Ansible Tower, both running on localhost. Here is the playbook:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Security Group
      os_security_group:
        state: present
        name: example
I have configured the Credentials, the Template, and the Inventory (test) in Tower; the screenshots are omitted here.
With this configuration, I am getting this error:
TASK [Security Group] **********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Any idea what this could be? It looks like a credential problem.

Untick Enable Privilege Escalation - it's not necessary. Your OpenStack privilege/authorisation will be tied to your OpenStack credentials (admin in this case), not the user running the Ansible task.
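For reference, a minimal sketch of supplying the OpenStack credentials to the module itself through a clouds.yaml file, so the task needs no privilege escalation at all (the cloud name and auth values below are illustrative assumptions, not taken from the question):

# clouds.yaml, placed e.g. in ~/.config/openstack/
clouds:
  mycloud:
    auth:
      auth_url: https://openstack.example.com:5000/v3
      username: admin
      password: secret
      project_name: admin
      user_domain_name: Default
      project_domain_name: Default

# task referencing the named cloud
- name: Security Group
  os_security_group:
    cloud: mycloud
    state: present
    name: example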

Related

Ansible SSH key mismatch

I wrote the following Ansible playbook:
---
- name: Create VLAN
  hosts: exos_device
  connection: ansible.netcommon.network_cli
  vars:
    ansible_user: admin
    ansible_password: password
    ansible_network_os: community.network.exos
  tasks:
    - name: Create VLAN 4050
      community.network.exos_config:
        lines:
          - create vlan TESTVLAN tag 4050
        match: exact
        save_when: always
Here I'm trying to create a new VLAN on an Extreme Networks switch (ExtremeXOS version 16.2.5.4). But when I execute it, I keep getting the following error:
fatal: [10.12.2.10]: FAILED! => {
"changed": false,
"module_stderr": "ssh connection failed: ssh connect failed: kex error : no match for method server host key algo: server [ssh-rsa], client [rsa-sha2-512,rsa-sha2-256,ssh-ed25519,ecdsa-sha2-nistp521,ecdsa-sha2-nistp384,ecdsa-sha2-nistp256]",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
I think this error indicates that there is a mismatch between the SSH key algorithms that the client (Ansible controller) and the server (the EXOS machine) support.
What is the best way to resolve this issue?
I've already tried specifying an algorithm inside the ansible.cfg file like this:
[defaults]
inventory = inventory.ini
ssh_args = -oKexAlgorithms=ssh-rsa
But with no success.
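Two things stand out in the error: the switch only offers the legacy ssh-rsa host key algorithm, which modern clients disable by default, and ssh-rsa is a host key algorithm rather than a key exchange algorithm, so -oKexAlgorithms is the wrong knob. A hedged sketch of re-enabling it per host in ~/.ssh/config on the controller (this assumes the connection backend honours OpenSSH client configuration; the libssh backend that produced this error message may need the equivalent option set elsewhere):

# ~/.ssh/config on the Ansible controller
Host 10.12.2.10
    # accept the server's legacy host key algorithm
    HostKeyAlgorithms +ssh-rsa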

Ansible: systemd fails. Which sudo permissions are needed?

Ansible 2.9, Linux Ubuntu 18.
I'm getting the following error with Ansible, when trying to change the status of a service with 'systemd'.
failed: [host.domain.com] (item=service_1) => {"ansible_loop_var": "item", "changed": false, "item": "service_1", "module_stderr": "Shared connection to host.domain.com closed.\r\n", "module_stdout": "\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
- name: Stop services
  ansible.builtin.systemd:
    name: "{{ serv }}"
    state: stopped
  with_servs:
    - service_1
    - service_2
    - service_3
  become: yes
The code above works fine with an account that has full sudo access (the same as root privileges).
It fails as shown above with an account that has limited sudo access (sudo access to specific commands, such as /bin/systemctl * service_1*, /bin/systemctl * service_2*, /bin/systemctl * service_3*).
Which sudo permissions are needed to run ansible.builtin.systemd? I'm trying to find out what command Ansible sends to the device, so I can check whether I gave the account the right permissions, but I have had no success finding that yet (any hints?).
First, change your with_servs to loop:; it is much easier to write playbooks with loop. Set become globally for the play. A workaround would be to run systemctl through the command module, but that is not recommended because it will execute the command in every situation, even when the service is already stopped.
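A sketch of that rewrite, keeping the service names from the question. Note also that become wraps the whole Ansible module, a Python process, in sudo, so sudoers rules that whitelist only /bin/systemctl invocations will generally not match, which is the likely reason the limited account fails:

- name: Stop services
  hosts: all
  become: yes
  tasks:
    - name: Stop services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
      loop:
        - service_1
        - service_2
        - service_3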

Run playbook as a different user

I have added the ansible user's SSH keys to the other hosts, so the ansible user is allowed on all hosts. Now I want to run the playbook as root or as other service users like apache. I have already set the user to ansible in my playbook. I get the errors below when I run the playbook while logged in as root, but everything works fine when I run it while logged in as the ansible user.
- hosts: nameservers
  user: ansible
  tasks:
    - name: check hostname
      command: hostname
Error:
[root@dev playbooks]# ansible-playbook pingtest.yml
PLAY [nameservers] *********************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************
fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}.
Note: I have replaced the IPs with x.
Set the user in the playbook:
user: ansible
Set the key path in the Ansible configuration file:
private_key_file = /home/ansible/.ssh/id_rsa
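Put together, a minimal sketch of that configuration (the [defaults] section is an assumption about where these settings live in a typical ansible.cfg): when you run the playbook as root, Ansible would otherwise offer root's own key, which the hosts do not accept, so point it explicitly at the ansible user's key.

# ansible.cfg
[defaults]
remote_user = ansible
private_key_file = /home/ansible/.ssh/id_rsa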

Ansible with a rotating sudo password

I have a system with a rotating, token-based sudo password that changes every 30 seconds or so. I'm using Ansible with privilege escalation. I know I can pass the --ask-su-pass option, but the problem I've run into is that the password is one thing at the time of asking, yet it has changed by the time Ansible actually attempts to sudo.
Is there a way to get prompted at the time of sudo-ing?
Ansible Script
---
- name: Install & Setup Docker
  hosts: ece
  become: true
  tasks:
    - name: Install Required Packages
      yum:
        name:
          - "yum-utils"
          - "device-mapper-persistent-data"
          - "lvm2"
        state: latest
      register: yum_result
    - debug:
        var: yum_result
        verbosity: 2
What happens when I run:
PLAY [Install & Setup Docker] *************************************************
TASK [Gathering Facts] ********************************************************
fatal: [xxxx]: FAILED! => {"changed": false, "module_stderr": "Shared connection to xxxx closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
to retry, use: --limit
So obviously I need a password for sudo. The problem is that the sudo password rotates, so it changes every 30 seconds or so.
I can use the --ask-su-pass flag to enter a sudo password, but by the time it is first used I get:
fatal: [xxxx]: FAILED! => {"msg": "Incorrect sudo password"}
because the password has changed since I entered it. I'm trying to figure out whether I can get a prompt at the point where Ansible is about to enter the sudo password on the remote system.
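As far as I know, Ansible only collects the become password once, up front, rather than at each escalation. One hedged sketch is to prompt for it at playbook start via vars_prompt (this is not from the question, and it only helps if the first escalation happens within the token's lifetime; it does not solve rotation mid-run):

---
- name: Install & Setup Docker
  hosts: ece
  become: true
  vars_prompt:
    # ansible_become_password feeds the become plugin directly
    - name: ansible_become_password
      prompt: Enter the current sudo token
      private: true
  tasks:
    - name: Install Required Packages
      yum:
        name:
          - "yum-utils"
          - "device-mapper-persistent-data"
          - "lvm2"
        state: latest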

ansible win_user, create is fine, but replaying playbook fails

I am able to create a user on a Windows server as part of a playbook, but when the playbook is re-run, the create task fails.
I'm trying to work out if I am missing something.
playbook:
---
# vim: set filetype=ansible ff=unix ts=2 sw=2 ai expandtab :
#
# Playbook to configure the environment
- hosts: createuser
  tasks:
    - name: create user
      run_once: true
      win_user:
        name: gary
        password: 'B0bP4ssw0rd123!^'
        password_never_expires: true
        account_disabled: no
        account_locked: no
        password_expired: no
        state: present
        groups:
          - Administrators
          - Users
If I run the playbook when the user does not exist, the create works fine.
When I re-run it, I get:
PLAY [createuser] *******************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************
ok: [dsy-demo-mssql02]
TASK [create user] ******************************************************************************************************************************************************************************************************************
fatal: [dsy-demo-mssql02]: FAILED! => {"changed": false, "failed": true, "msg": "Exception calling \"ValidateCredentials\" with \"2\" argument(s): \"The network path was not found.\r\n\""}
I have verified that I can log on to the server using the created user's credentials.
Has anyone seen this before, or does anyone understand what might be happening?
It looks to me like the culprit might be the
run_once: true
which only tells the task to run once per play. For the Ansible documentation on that delegation feature, see https://docs.ansible.com/ansible/playbooks_delegation.html#run-once
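If the intent is for the task to run on every host in the createuser group on every run, a sketch of the same play with run_once dropped (everything else as in the question):

- hosts: createuser
  tasks:
    - name: create user
      win_user:
        name: gary
        password: 'B0bP4ssw0rd123!^'
        password_never_expires: true
        state: present
        groups:
          - Administrators
          - Users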
