I have a simple playbook that I run on new managed nodes to prepare them for Ansible.
The playbook has 3 roles: create the ansible admin user on the destination host, copy the SSH key, and set passwordless sudo for the ansible user.
I have RHEL-based nodes and also Debian-based nodes.
For RHEL I use root; on Debian root is not used by default and I want to keep it that way, so I have a different admin user called sysadmin.
I am trying to find a way for the playbook to identify the OS and choose either root or sysadmin as the user to run the play, and also use the proper password from a file in Ansible Vault.
Thanks.
This is the playbook:

---
- name: init managed node
  hosts: init_clients
  remote_user: root
  become: yes
  gather_facts: yes
  ignore_errors: no
  vars:
    user: ansible-admin
    passwd: password-hash
  roles:
    - create_admin_user
    - set_authorized_key
    - set_no_pass
This is 100% real code; it sets the system user as a fact based on the gathered OS facts:
- name: Set the system user name for Ubuntu
  set_fact:
    linux_system_user: ubuntu
  when: ansible_os_family == 'Debian'

- name: Set the system user name for CentOS
  set_fact:
    linux_system_user: centos
  when: ansible_distribution == 'CentOS'

- name: Set the system user name for RedHat
  set_fact:
    linux_system_user: root
  when: ansible_distribution == 'RedHat'
My issue is that Ansible still needs to run this first playbook as some user. That's OK if that user is the same on all systems (root, for example). Since the user is not the same here, I have to run the playbook once for RHEL and once for Debian, and change the "remote_user" statement to a different user in between.
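A common way around this bootstrap problem (a sketch, assuming the inventory can be split into OS-based groups; all file, group, and password values below are illustrative) is to set the connection user per group and keep the Debian sudo password in a vault-encrypted group_vars file:

# inventory/hosts
[rhel_clients]
rhel-node-1

[debian_clients]
deb-node-1

[init_clients:children]
rhel_clients
debian_clients

# group_vars/rhel_clients.yml
ansible_user: root

# group_vars/debian_clients.yml -- encrypt with: ansible-vault encrypt group_vars/debian_clients.yml
ansible_user: sysadmin
ansible_become: yes
ansible_become_password: "sysadmin-sudo-password"

With that in place, drop the hard-coded remote_user: root from the play and run ansible-playbook with --ask-vault-pass (or --vault-password-file); each group then connects as its own user.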
I'm using Ansible to install an agent on Linux servers. There are different install procedures based on whether the system is running systemd or initd. I created a role for each install procedure, but I want to check whether the server is running systemd or initd first and then run the corresponding role. Below is the code I have created. Will this type of conditional work this way, or am I missing the mark?
tasks:
  - name: check if running systemd
    command: pidof systemd
    register: pid_systemd

  - name: check if running initd
    command: pidof /sbin/init
    register: pid_initd

  - include_role:
      name: install-appd-machine-agent-initd
    when: pid_initd.stdout == '1'

  - include_role:
      name: install-appd-machine-agent-systemd
    when: pid_systemd.stdout == '1'
Ansible collects facts about a system via the setup module when gather_facts is enabled. This provides a magic variable called ansible_service_mgr, which can be used to conditionally execute tasks.
For example, to run your roles conditionally:
tasks:
  - include_role:
      name: install-appd-machine-agent-initd
    when: ansible_service_mgr == "sysvinit"

  - include_role:
      name: install-appd-machine-agent-systemd
    when: ansible_service_mgr == "systemd"
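If you want to confirm what a given host reports before relying on it, a small debug task works (a sketch; typical values are systemd, sysvinit, and upstart):

- name: show which service manager Ansible detected
  debug:
    var: ansible_service_mgr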
I have an Ansible playbook to update my Debian-based servers. For simplicity and security reasons, I don't want to use a vault for the passwords, and I also don't want to store them in a publicly accessible config file. So I ask for the password for every client with
become: yes
become_method: sudo
Now, when the playbook runs, it seems the first thing Ansible does is ask for the sudo password, but I don't know for which server (the passwords are different). Is there a way to get Ansible to print the current host name before it asks for the password?
The update playbook is similar to this:
---
- hosts: all
  gather_facts: no
  vars:
    verbose: false
    log_dir: "log/dist-upgrade/{{ inventory_hostname }}"

  pre_tasks:
    - block:
        - setup:
      rescue:
        - name: "Install required python-minimal package"
          raw: "apt-get update && apt-get install -y --force-yes python-apt python-minimal"
        - setup:

  tasks:
    - name: Update packages
      apt:
        update_cache: yes
        upgrade: dist
        autoremove: yes
      register: output

    - name: Check changes
      set_fact:
        updated: true
      when: output.stdout is not search("0 upgraded, 0 newly installed")

    - name: Display changes
      debug:
        msg: "{{ output.stdout_lines }}"
      when: verbose or updated is defined

    - block:
        - name: "Create log directory"
          file:
            path: "{{ log_dir }}"
            state: directory
          changed_when: false

        - name: "Write changes to logfile"
          copy:
            content: "{{ output.stdout }}"
            dest: "{{ log_dir }}/dist-upgrade_{{ ansible_date_time.iso8601 }}.log"
          changed_when: false
      when: updated is defined
      connection: local
(source: http://www.panticz.de/Debian-Ubuntu-mass-dist-upgrade-with-Ansible)
Your become configuration above does not make Ansible ask you for a become password: it just tells it to use become with the sudo method (which works without any password if, for example, passwordless sudo is configured for the connecting user).
If you are asked for a become password, it's because (it's a guess, but I'm rather confident...) you used the --ask-become-pass option when running ansible-playbook.
In that case, you are prompted only once at the beginning of the playbook run, and this default become password is used on all servers you connect to, unless you defined a different one in your inventory for a specific host or group.
If you have different become passwords depending on the machine, you don't really have another option: you need to declare those passwords in your inventory (and it is strongly advised to use ansible-vault encryption) or use some other mechanism to fetch them from an external application (HashiCorp Vault, dynamic inventory, CyberArk...).
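For example, the per-host passwords could live in vault-encrypted host_vars files (host names and passwords here are illustrative):

# host_vars/server1.example.com.yml -- encrypt with: ansible-vault encrypt host_vars/server1.example.com.yml
ansible_become_password: "sudo-password-for-server1"

# host_vars/server2.example.com.yml -- encrypted the same way
ansible_become_password: "sudo-password-for-server2"

Then run ansible-playbook with --ask-vault-pass (or --vault-password-file) and drop --ask-become-pass entirely.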
I have a playbook which needs to behave differently based on the operating system.
Use case: let's assume there is a service that should be running.
On Linux we can check whether it is installed and running with
systemctl status application.service
while on Windows we would use
sc query "ServiceName" | find "RUNNING"
Now we have to install it based on the output of the above commands, which requires us to segregate the playbook based on the OS.
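For illustration, the two checks could sit in a single play guarded by facts, roughly like this (a sketch; the service names are placeholders, and failed_when: false keeps a stopped service from aborting the play):

- name: check the service on Linux
  command: systemctl status application.service
  register: linux_state
  failed_when: false
  when: ansible_os_family != 'Windows'

- name: check the service on Windows
  win_shell: sc query "ServiceName" | find "RUNNING"
  register: win_state
  failed_when: false
  when: ansible_os_family == 'Windows'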
Classic Example: Create a directory based on the OS
- name: Install QCA Agent on Linux targets
  hosts: all
  gather_facts: true
  remote_user: root
  tasks:
    - name: Create Directory for Downloading Qualys Cloud Agent (Linux)
      become: yes
      become_user: root
      file:
        path: /usr/q1/
        state: directory
        owner: root
        group: root
        mode: '0777'
        recurse: no

    - name: Create Directory for Downloading Qualys Cloud Agent (Windows)
      # win_file does not take owner/group/mode/recurse the way file does
      win_file:
        path: c:\q1
        state: directory
As written, the playbook can only ever fully succeed on one of the two tasks, depending on whether the target is Windows or a Unix OS. I can always add a condition such as:
when: ansible_distribution == 'RedHat' or ansible_distribution == 'CentOS'
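Spelled out on the two directory tasks above, those guards would look something like this (a sketch):

- name: Create Directory for Downloading Qualys Cloud Agent (Linux)
  file:
    path: /usr/q1/
    state: directory
    mode: '0777'
  become: yes
  when: ansible_os_family != 'Windows'

- name: Create Directory for Downloading Qualys Cloud Agent (Windows)
  win_file:
    path: c:\q1
    state: directory
  when: ansible_os_family == 'Windows'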
However, what I would like to achieve is that, based on such a condition, the right playbook.yml file is triggered.
- name: Load a variable file based on the OS type, or a default if not found. Using free-form to specify the file.
  include_vars: "{{ item }}"
  with_first_found:
    - "{{ ansible_distribution }}.yaml"
    - "{{ ansible_os_family }}.yaml"
    - default.yaml
https://docs.ansible.com/ansible/2.5/modules/include_vars_module.html?highlight=with_first_found
I would like to know if there is a better example of this that I could implement, or if there are other ways to achieve the same thing. Thank you.
The example you show from the Ansible docs is pretty much the best practice and is common in many playbooks (and roles for that matter) that deal with multiple OSes. If you have code that is different (instead of the variable example here), you'll be using include_tasks instead of include_vars, but the concept is the same.
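A sketch of the include_tasks variant (the task file names are illustrative):

- name: Include OS-specific tasks, or a default if none matches
  include_tasks: "{{ item }}"
  with_first_found:
    - "{{ ansible_distribution }}.yaml"
    - "{{ ansible_os_family }}.yaml"
    - default.yaml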
A recurring theme in my Ansible playbooks is that I often must execute a command with sudo privileges (sudo: yes) because I'd like to do it for a certain user. Ideally I'd much rather use sudo to switch to that user and execute the commands normally, because then I won't have to do my usual post-command cleanup, such as chowning directories. Here's a snippet from one of my playbooks:
- name: checkout repo
  git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
  sudo: yes

- name: change perms
  file: dest={{ dst }} state=directory mode=0755 owner=some_user
  sudo: yes
Ideally I could run commands or sets of commands as a different user even if it requires sudo to su to that user.
With Ansible 1.9 or later
Ansible uses the become, become_user, and become_method directives to achieve privilege escalation. You can apply them to an entire play or playbook, set them in an included playbook, or set them for a particular task.
- name: checkout repo
  git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
  become: yes
  become_user: some_user
You can use become_method to specify how the privilege escalation is achieved, the default being sudo.
The directive is in effect for the scope of the block in which it is used (examples).
See Hosts and Users for some additional examples and Become (Privilege Escalation) for more detailed documentation.
In addition to the task-scoped become and become_user directives, Ansible 1.9 added some new variables and command line options to set these values for the duration of a play in the absence of explicit directives:
Command line options for the equivalent become/become_user directives.
Connection specific variables which can be set per host or group.
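For example, in an inventory (a sketch; host and user names are illustrative):

[webservers]
web1.example.com ansible_become=yes ansible_become_user=deploy

or, on the command line for the whole run: ansible-playbook site.yml --become --become-user=deploy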
As of Ansible 2.0.2.0, the older sudo/sudo_user syntax described below still works, but the deprecation notice states, "This feature will be removed in a future release."
Previous syntax, deprecated as of Ansible 1.9 and scheduled for removal:
- name: checkout repo
  git: repo=https://github.com/some/repo.git version=master dest={{ dst }}
  sudo: yes
  sudo_user: some_user
In Ansible 2.x, you can use block to apply become to a group of tasks:
- block:
    - name: checkout repo
      git:
        repo: https://github.com/some/repo.git
        version: master
        dest: "{{ dst }}"

    - name: change perms
      file:
        dest: "{{ dst }}"
        state: directory
        mode: '0755'
        owner: some_user
  become: yes
  become_user: some_user
In Ansible >1.4 you can actually specify a remote user at the task level, which should allow you to log in as that user and execute that command without resorting to sudo. If you can't log in as that user, then the sudo_user solution will work too.
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: test connection
      ping:
      remote_user: yourname
See http://docs.ansible.com/playbooks_intro.html#hosts-and-users
A solution is to use the include statement with the remote_user var (described here: http://docs.ansible.com/playbooks_roles.html), but it has to be done at the playbook level instead of the task level.
You can specify become_method to override the default method set in ansible.cfg (if any), and which can be set to one of sudo, su, pbrun, pfexec, doas, dzdo, ksu.
- name: I am confused
  command: 'whoami'
  become: true
  become_method: su
  become_user: some_user
  register: myidentity

- name: my secret identity
  debug:
    msg: '{{ myidentity.stdout }}'
Should display
TASK [my-task : my secret identity] ************************************************************
ok: [my_ansible_server] => {
"msg": "some_user"
}
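Note that become_method: su normally needs the target account's password. You can supply it with --ask-become-pass at run time, or via a variable (a sketch; the vaulted variable name is illustrative):

- name: run a command as some_user via su
  command: whoami
  become: true
  become_method: su
  become_user: some_user
  vars:
    ansible_become_password: "{{ vaulted_su_password }}"  # e.g. loaded from an ansible-vault file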
I want to use Ansible to disable SELinux on some remote servers. I don't yet know the full list of servers; new ones will come in from time to time.
It would be great if the ssh-copy-id phase were integrated into the playbook somehow; you would expect that from an automation system. I don't mind being asked for the password once per server.
From various reading, I understand I can run a local_action in my task:
---
- name: Disable SELinux
  hosts: all
  remote_user: root
  gather_facts: False
  tasks:
    - local_action: command ssh-copy-id {{ remote_user }}@{{ hostname }}
    - selinux:
        state: disabled
However:
It fails because {{ remote_user }} and {{ hostname }} are not accessible in this context.
I need to set gather_facts to False, because it is executed before the local_action.
Any idea if this is possible within Ansible playbooks?
You may try this:
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        rem_user: "{{ ansible_user | default(lookup('env','USER')) }}"
        rem_host: "{{ ansible_host }}"

    - local_action: command ssh-copy-id {{ rem_user }}@{{ rem_host }}

    - setup:

    - selinux:
        state: disabled
Define the remote user and remote host first, then run the local action, then force fact gathering with setup so the selinux module has the facts it needs.
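One caveat (an assumption on my part, depending on Ansible version and inventory): ansible_host may not be defined for hosts that don't set it explicitly, so a fallback keeps the set_fact robust:

- set_fact:
    rem_host: "{{ ansible_host | default(inventory_hostname) }}"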