I'm trying to use ansible (version 2.1.2.0) to create named ssh access across our network of servers. Running ansible from a jump box I'm creating a set of users and creating a private/public key pair with the users module
- user:
    name: "{{ item }}"
    shell: /bin/bash
    group: usergroup
    generate_ssh_key: yes
    ssh_key_comment: "ansible-generated for {{ item }}"
  with_items: "{{ list_of_usernames }}"
From there I would like to copy the public key across to the authorized_keys file on each remote server. I'm using the authorized_key module to fetch the user's generated public key and set it as an authorized key on the remote servers:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
What I'm finding is that the lookup fails because it cannot access the .pub file inside the user's .ssh folder. It runs happily, however, if I run Ansible as the root user (which for me is not an option I'm willing to take).
fatal: [<ipAddress>]: FAILED! => {"failed": true, "msg": "{{ lookup('file', '/home/testuser/.ssh/id_rsa.pub') }}: the file_name '/home/testuser/.ssh/id_rsa.pub' does not exist, or is not readable"}
Is there a better way to do this other than running ansible as root?
Edit: Sorry I had forgotten to mention that I'm running with become: yes
Edit #2: Below is a condensed playbook of what I'm trying to run. In this instance it will add the authorized_key to localhost, but it shows the error I'm facing:
---
- hosts: all
  become: yes
  vars:
    active_users: ['user1','user2']
  tasks:
    - group:
        name: "users"

    - user:
        name: "{{ item }}"
        shell: /bin/bash
        group: users
        generate_ssh_key: yes
      with_items: "{{ active_users }}"

    - authorized_key:
        user: "{{ item }}"
        state: present
        key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
      become: yes
      become_user: "{{ item }}"
      with_items:
        - "{{ active_users }}"
OK, the problem is with the lookup plugin.
Lookup plugins are executed on the Ansible control host with the permissions of the user running ansible-playbook, and become: yes does not elevate a plugin's permissions.
To overcome this, capture the result of the user task and use its output in further tasks:
- user:
    name: "{{ item }}"
    shell: /bin/bash
    group: docker
    generate_ssh_key: yes
    ssh_key_comment: "ansible-generated for {{ item }}"
  with_items:
    - ansible5
    - ansible6
  register: new_users
  become: yes

- debug: msg="user {{ item.item }} pubkey {{ item.ssh_public_key }}"
  with_items: "{{ new_users.results }}"
Although you will need to delegate some of these tasks, the idea stays the same.
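For example, the registered results can feed the authorized_key task directly; a minimal sketch, reusing the new_users variable registered above:

```yaml
# item.item is the original loop item (the username);
# item.ssh_public_key is the key the user module returned.
- authorized_key:
    user: "{{ item.item }}"
    state: present
    key: "{{ item.ssh_public_key }}"
  with_items: "{{ new_users.results }}"
```

This avoids the file lookup entirely, so no filesystem permissions on the control host get in the way.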
On most Linux/Unix machines only two accounts have access to /home/testuser/.ssh/id_rsa.pub: root and testuser. So if you would like to read or modify those files, you need to be either root or testuser.
You can use privilege escalation with become. You said you don't want to run Ansible as root, which is perfectly fine. You haven't mentioned whether the user you are running Ansible with has sudo access or not. If it does, and using sudo is fine for you, then you can simply do:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
  become: yes
By default this will run the task as root via sudo, while the rest of the tasks keep running as the non-root user. This is also the preferred way of doing runs with Ansible.
If you don't want this, and you want to keep your user as root-free as possible, then you need to run the task as testuser (and as any other user you want to modify). This means you still need to patch up the sudoers file to allow your Ansible user to transform into any of these users (which can also open up a few security issues, albeit not as many as being able to run anything as root), after which you can do:
- authorized_key:
    user: "{{ item }}"
    state: present
    key: "{{ lookup('file', '/home/{{ item }}/.ssh/id_rsa.pub') }}"
  with_items:
    - "{{ list_of_usernames }}"
  become: yes
  become_user: "{{ item }}"
There are some caveats with this approach, so you might want to read the full Privilege Escalation page in Ansible's documentation, especially the section on unprivileged users, before you try this.
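As an illustration of the sudoers change mentioned above (the control-user name "deploy" and the file path are hypothetical, not from the question):

```text
# /etc/sudoers.d/ansible-runner  -- edit with: visudo -f /etc/sudoers.d/ansible-runner
# Allow the "deploy" control user to become only the managed users, not root.
deploy ALL=(user1,user2) NOPASSWD: ALL
```

Restricting the runas list to the specific managed users keeps the blast radius smaller than a blanket (ALL) rule.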
I am having problems with Ansible when adding new users to servers.
First I check whether the users are present on the system, and if not, I proceed to create them.
I found a somewhat similar problem here: ansible user module always shows changed
and I was able to fix the changed status when adding a new user from the file userlist by adding a simple salt.
However, the last part, the handler, is always performed.
This is what my playbook looks like:
---
- hosts: all
  become: yes
  vars_files:
    - /home/ansible/userlist
  tasks:
    # check if the user exists on the system, looking up
    # the username in the passwd database
    - name: check user exists
      getent:
        database: passwd
        key: "{{ item }}"
      loop: "{{ users }}"
      register: userexists
      ignore_errors: yes

    - name: add user if it does not exist
      user:
        name: "{{ item }}"
        password: "{{ 'password' | password_hash('sha512', 'mysecretsalt') }}"
        update_password: on_create
      loop: "{{ users }}"
      when: userexists is failed
      notify: change password

  handlers:
    - name: change user password upon creation
      shell: chage -d 0 "{{ item }}"
      loop: "{{ users }}"
      listen: change password
And here is the simple file called userlist:
users:
  - testuser1
  - testuser2
  - testuser3
  - testuser22
When I am running the playbook without changes to the userlist file, everything is fine.
However, if I add a new user to the file, then when an existing user tries to log in, the system forces them to change their password, because the handler is always called.
Is there any way to alter the code in a way that the enforcing of changing the password immediately is only performed on newly created users?
There are two main issues in your playbook:
userexists registers each loop result individually, but you are referencing only the overall task status. Use a debug statement to show the variable to see what I mean.
When the handler is notified, you loop over all users, rather than only those that have changed (i.e., have just been created).
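To see the per-item structure that the first point refers to, a quick debug task (purely illustrative) can dump the registered variable:

```yaml
# Each element of userexists.results carries its own item / failed /
# changed fields; a bare `userexists is failed` check ignores them.
- debug:
    var: userexists.results
```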
There are a couple of different approaches to fix this, but I think this might be the most succinct:
tasks:
  - name: add user if it does not exist
    user:
      name: "{{ item }}"
      password: "{{ 'password' | password_hash('sha512', 'mysecretsalt') }}"
      update_password: on_create
    loop: "{{ users }}"
    register: useradd
    notify: change password

handlers:
  - name: change user password upon creation
    shell: chage -d 0 {{ item.name }}
    loop: "{{ useradd.results }}"
    when: item.changed
    listen: change password
I'm working in a multi-user environment. I want some accountability for who ran which plays, so we can't have a common group_vars/all that contains Ansible login credentials. I don't want to force my users to pass Ansible credentials as extra vars on the command line either.
Ideally I'd like each user to have a specific file in their /home/user_name/ directory where they set their Ansible credentials (I'll use ansible_info.yml so we have something specific to talk about). This would give me security, as other users would not be able to access another user's home directory.
Under /home/user_name/, would I have to get each user to create an "all" file (as Ansible by default looks for an "all" file)? Or could I use a different name for this file (ansible_info.yml)? The contents of the all/ansible_info.yml file would be:
ansible_user: end_user_ssh_username
ansible_ssh_pass: end_user_ssh_username_password
ansible_become: yes
ansible_become_user: service_account_name
ansible_become_pass: service_account_name_password
To grab the username of whoever SSH'ed onto the server, I can use any of these three commands:
command: whoami
{{ lookup('env','USER') }}
command: id -un
After one of the above commands has run, I register the variable:
register: logged_in_user
Then I'm a little stuck on what to do next for supplying the rest of the ansible credentials.
Do I use vars_files?
vars_files:
  - /home/{{ logged_in_user }}/ansible_info.yml
Or, in the play, do I need to state all of the Ansible credentials as variables that refer to the /home/{{ logged_in_user }}/ansible_info.yml file?
Or, using "lookup", create an individual file for each Ansible credential, as shown below:
vars:
  ansible_user: "{{ lookup('file', '/home/{{ logged_in_user }}/ansible_user.yml') }}"
  ansible_ssh_pass: "{{ lookup('file', '/home/{{ logged_in_user }}/ansible_ssh_pass') }}"
  ansible_become_user: "{{ lookup('file', '/home/{{ logged_in_user }}/ansible_become_user.yml') }}"
  ansible_become_pass: "{{ lookup('file', '/home/{{ logged_in_user }}/ansible_become_pass') }}"
Here is what I wrote to get this to pull data from my files. Next I'll see if I can authenticate with it.
Basically, what was messing me up were the extra characters around "stdout", so I just made an extra variable to put the result into, and it worked.
- hosts: localhost
  tasks:
    - name: get the username running the deploy
      local_action: command whoami
      register: whoami_results

    - name: show the output of the home_directory results
      debug:
        var: home_directory
      vars:
        whoami_results_stdout: "{{ whoami_results.stdout }}"
        home_directory: /home/{{ whoami_results_stdout }}
        ansible_user: "{{ lookup('file', '{{ home_directory }}/ansible_user.yml') }}"
        ansible_ssh_pass: "{{ lookup('file', '{{ home_directory }}/ansible_ssh_pass.yml') }}"
        ansible_become: "{{ lookup('file', '{{ home_directory }}/ansible_become.yml') }}"
        ansible_become_user: "{{ lookup('file', '{{ home_directory }}/ansible_become_user.yml') }}"
        ansible_become_pass: "{{ lookup('file', '{{ home_directory }}/ansible_become_pass.yml') }}"
I tweaked this again and found a much better solution. I made the contents of group_vars/all this:
ansible_user: "{{ lookup('file', '~/ansible_user') }}"
ansible_ssh_pass: "{{ lookup('file', '~/ansible_ssh_pass') }}"
ansible_become: "{{ lookup('file', '~/ansible_become') }}"
ansible_become_user: "{{ lookup('file', '~/ansible_become_user') }}"
ansible_become_pass: "{{ lookup('file', '~/ansible_become_pass') }}"
Then the end users just have the corresponding files in their home directories. The end users will need to update these files when they change their Active Directory password, though (a minor issue). The files in their home directories also work with ansible-vault.
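For illustration, the matching files in a user's home directory would look like this (values are the placeholders from the vars above):

```text
~/ansible_user          → end_user_ssh_username
~/ansible_ssh_pass      → end_user_ssh_username_password
~/ansible_become        → yes
~/ansible_become_user   → service_account_name
~/ansible_become_pass   → service_account_name_password
```

Each file holds nothing but the raw value, so the file lookup returns it directly.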
This is the code of my Ansible playbook:
---
- hosts: "{{ host }}"
  remote_user: "{{ user }}"
  ansible_become_pass: "{{ pass }}"
  tasks:
    - name: Creates directory to keep files on the server
      file: path=/home/{{ user }}/fabric_shell state=directory

    - name: Move sh file to remote
      copy:
        src: /home/pankaj/my_ansible_scripts/normal_script/installation/install.sh
        dest: /home/{{ user }}/fabric_shell/install.sh

    - name: Execute the script
      command: sh /home/{{ user }}/fabric_shell/install.sh
      become: yes
I am running the playbook with this command:
ansible-playbook send_run_shell.yml --extra-vars "user=sakshi host=192.168.0.238 pass=Welcome01"
But I don't know why I am getting this error:
ERROR! 'ansible_become_pass' is not a valid attribute for a Play
The error appears to have been in '/home/pankaj/go/src/shell_code/send_run_shell.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- hosts: "{{ host }}"
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
    with_items:
      - {{ foo }}

Should be written as:

    with_items:
      - "{{ foo }}"
Please guide me on what I am doing wrong.
Thanks in advance.
ansible_become_pass is a connection parameter, which you can set as a variable:
---
- hosts: "{{ host }}"
  remote_user: "{{ user }}"
  vars:
    ansible_become_pass: "{{ pass }}"
  tasks:
    # ...
That said, you can move remote_user to the variables too (refer to the whole list of connection parameters), save them in a separate host_vars or group_vars file, and encrypt it with Ansible Vault.
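As a sketch, such a file (a hypothetical host_vars/192.168.0.238.yml, reusing the values from the question) could look like this before encrypting it:

```yaml
# host_vars/192.168.0.238.yml -- encrypt with: ansible-vault encrypt <file>
ansible_user: sakshi
ansible_become_pass: Welcome01
```

With this in place, the play no longer needs user or pass passed via --extra-vars.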
Take a look at this thread and the Ansible documentation page. I propose using become_user this way:
- hosts: all
  tasks:
    - include_tasks: task/java_tomcat_install.yml
      when: activity == 'Install'
      become: yes
      become_user: "{{ aplication_user }}"
Also, try not to pass the password as pass=Welcome01:
When speaking with remote machines, Ansible by default assumes you are using SSH keys. SSH keys are encouraged but password authentication can also be used where needed by supplying the option --ask-pass. If using sudo features and when sudo requires a password, also supply --ask-become-pass (previously --ask-sudo-pass which has been deprecated).
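Following that advice, the invocation from the question could be sketched as (dropping the pass var and prompting instead):

```shell
ansible-playbook send_run_shell.yml --ask-pass --ask-become-pass \
  --extra-vars "user=sakshi host=192.168.0.238"
```

The passwords are then prompted for interactively and never land in the shell history.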
I want to execute a task from my Ansible role on another host. Can I do it with delegate_to, or should I use some other way?
- name: Create user on remote host
  user:
    name: "{{ item.1 }}"
    state: present
  delegate_to: "user#{{ item.0.host }}"
  run_once: true
  become: True
  with_subelements:
    - "{{ user_ssh_keys_slave_hosts }}"
    - users
  when: item.1 is defined
Goal: I want to create a user on a remote host using login:password and the SSH argument '-o StrictHostKeyChecking=no'.
In my inventory file I use these vars:
ansible_connection: ssh
ansible_user: user
ansible_ssh_pass: pass
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
SSH is configured via ~/.ssh/config. There you can say which authentication type should be used, and there you should put your SSH options. This entry defines that the connection to a host should be made with a different user name:
Host item-0-host
    User user
    StrictHostKeyChecking no
    RSAAuthentication no
    HostName name-of-the-item-0-host
By this you have two host names: the original host name, which you can use to manage the host itself, and a new host name, which you can use to do the user management.
I am not sure, but I think you cannot specify a user name in the value of the delegate_to option.
Btw: do you know the user module of Ansible?
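With such an entry in place, the alias can then be used directly as the delegation target; a minimal sketch:

```yaml
- name: Create user via the ~/.ssh/config alias
  user:
    name: "{{ item.1 }}"
    state: present
  delegate_to: item-0-host   # alias from ~/.ssh/config; no "user#" prefix needed
```

The user name and SSH options come from the config entry, so they no longer need to be encoded in the playbook.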
Thanks for your answer. I resolved my question with:
ansible_ssh_common_args='-o StrictHostKeyChecking=no' in inventory file.
And
- name: Set authorized key on remote host
  authorized_key:
    user: "{{ item.1 }}"
    state: present
    key: "{{ user_ssh_public_key_fingerprint.stdout }}"
  delegate_to: "{{ item.0.host }}"
  run_once: true
  become: True
  with_subelements:
    - "{{ user_ssh_keys_slave_hosts }}"
    - users
  when: item.1 is defined
user_ssh_keys_slave_group: "{{ groups['slave_hosts']|default([''])}}"
user_ssh_keys_slave_hosts: "{%- set ssh_user = [] %}{%- for host in user_ssh_keys_slave_group %}
{{- ssh_user.append(dict(host=host, ssh_user=hostvars[host]['slave_hosts']|default([]))) }}
{%- endfor %}
{{- ssh_user -}}"
I'm trying to create a user and add a key to the authorized_keys file.
Here is the Ansible code; when I try to run it, I get the following error:
ERROR! this task 'authorized_key' has extra params, which is only allowed in the following modules: command, win_command, shell, win_shell, script, include, include_vars, add_host, group_by, set_fact, raw, meta
- name: Adding user {{ user }}
  user: name={{ user }}
        group={{ group }}
        shell=/bin/bash
        password=${password}
        groups=sudo
        append=yes
        generate_ssh_key=yes
        ssh_key_bits=2048
        ssh_key_file=.ssh/id_rsa

- name: Authorized keys
  authorized_key:
    user={{ user }}
    state=present
    manage_dir= yes
    key="{{ lookup('file', '/home/vivek/.ssh/id_rsa.pub') }}"
I suspect the problem is with the space here: manage_dir= yes.
Use pure YAML syntax to prevent these kinds of issues with the key=value syntax:
- name: Authorized keys
  authorized_key:
    user: "{{ user }}"
    state: present
    manage_dir: yes
    key: "{{ lookup('file', '/home/vivek/.ssh/id_rsa.pub') }}"