run ansible playbook with sudo - ansible

I have a playbook and I want to run it with sudo. This is my Ansible playbook:
site.yml
---
- name: Server
  hosts: node1
  sudo: yes
  roles:
    - dbserver
When I run it I get this:
ansible-playbook -i hosts site.yml
PLAY [Server] *****************************************************************
GATHERING FACTS ***************************************************************
fatal: [node1] => Missing sudo password
TASK: [dbserver | installing server] ******************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/robe/site.retry
node1 : ok=0 changed=0 unreachable=1 failed=0
Then I add the sudo password (sudo_pass) to site.yml:
---
- name: Server
  hosts: node1
  sudo_pass: ubuntu
  roles:
    - dbserver
I get this error:
ERROR: sudo_pass is not a legal parameter at this level in an Ansible Playbook
Then my questions are:
Do I have to add the sudo_pass attribute to each task?
Is there any way to set both sudo and sudo_pass in the playbook?

sudo_pass is not something Ansible knows. If you need to enter a sudo password on node1, then keep sudo: yes in the playbook, and give Ansible the sudo password on the command line when running your playbook:
ansible-playbook -i hosts site.yml -K
Note the -K parameter. It will then ask you for the sudo password before the playbook starts.
(the long version is --ask-sudo-pass by the way. I had to look that up)
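As a side note, newer Ansible versions deprecate the sudo keyword and --ask-sudo-pass in favour of become. A minimal sketch of the same play in the newer syntax (assuming the same node1 host and dbserver role) would be:
---
- name: Server
  hosts: node1
  become: yes
  roles:
    - dbserver
You would then run it with ansible-playbook -i hosts site.yml --ask-become-pass (-K is the short form there as well).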

Related

How do I use ansible_become_password on different target in the same playbook

My playbook has a task to copy a file from the local box to the remote box, and the last task has to use sudo to become root. I am not getting it to work.
In my inventory I am using a plain-text password for now; I just want to get it working first, then I can lock it down with Ansible Vault.
[logserver]
mylogserver ansible_ssh_user=myuser ansible_become_password=mypassword
In my playbook, an earlier task copies the file to the remote server; the last play then targets the remote host to do the work:
# Cat file and append destination file
- name: cat files to destination file
  hosts: mylogserver
  gather_facts: no
  become_exe: "sudo su - root"
  tasks:
    - name: cat contents and append to destination file
      shell:
        cmd: /usr/bin/cat /var/tmp/test_file.txt >> /etc/some/target_file.txt
For example, the inventory
shell> cat hosts
test_11 ansible_ssh_user=User1 ansible_become_password=mypassword
[logserver]
test_11
and the playbook
shell> cat pb.yml
- hosts: logserver
  gather_facts: false
  become_method: su
  become_exe: sudo su
  become_user: root
  become_flags: -l
  tasks:
    - command: whoami
      become: true
      register: result
    - debug:
        var: result.stdout
works as expected:
shell> ansible-playbook -i hosts pb.yml
PLAY [logserver] ******************************************************************************
TASK [command] ********************************************************************************
changed: [test_11]
TASK [debug] **********************************************************************************
ok: [test_11] =>
result.stdout: root
PLAY RECAP ************************************************************************************
test_11: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Notes:
In your playbook the configuration of the become method, e.g. become_method: su, is missing. See DEFAULT_BECOME_METHOD; the default is 'sudo'. For details see Using Become Plugins.
See the details of the plugin from the command line: shell> ansible-doc -t become su
Use become_flags to specify options to pass to su, e.g. become_flags: -l
Use become_user to specify the user you 'become' to execute the task, e.g. become_user: root. In this example it is redundant, since 'root' is the default.
Specify become: true if the task shall use the configured escalation. The default is 'false'. See DEFAULT_BECOME.
Configure sudoers, e.g.
- command: grep User1 /usr/local/etc/sudoers
  become: true
  register: result
- debug:
    var: result.stdout
gives
TASK [debug] ***************************************************************
ok: [test_11] =>
result.stdout: User1 ALL=(ALL) ALL
Encrypt the password
Remove the password from inventory hosts and put it into a file, e.g. in host_vars
shell> cat hosts
test_11 ansible_ssh_user=User1
[logserver]
test_11
shell> cat host_vars/test_11/ansible_become_password.yml
ansible_become_password: mypassword
Encrypt the password
shell> ansible-vault encrypt host_vars/test_11/ansible_become_password.yml
Encryption successful
shell> cat host_vars/test_11/ansible_become_password.yml
$ANSIBLE_VAULT;1.1;AES256
35646364306161623262653632393833323662633738323639353539666231373165646238636236
3462386536666463323136396131326333636365663264350a383839333536313937353637373765
...
Test the playbook
shell> ansible-playbook -i hosts pb.yml
PLAY [logserver] ******************************************************************************
TASK [command] ********************************************************************************
changed: [test_11]
TASK [debug] **********************************************************************************
ok: [test_11] =>
result.stdout: root
...
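Note that once the file is encrypted, ansible-playbook needs the vault password at runtime. For example (the flags are standard; the inventory and playbook are the ones used above):
shell> ansible-playbook -i hosts pb.yml --ask-vault-pass
Alternatively, point to a password file with --vault-password-file.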

This command has to be run under the root user. But I am root only

I have a simple playbook that tries to install packages.
My task is failing (see output).
I can ping the host, and I can manually run the command as the super user (tco).
my ansible.cfg
[defaults]
inventory = /Users/<myuser>/<automation>/ansible/inventory
remote_user = tco
packages
packages:
  - yum-utils
  - sshpass
playbook
---
- hosts: all
  vars_files:
    - vars/packages.yml
  tasks:
    - name: testing connection
      ping:
      remote_user: tco
    - name: Installing packages
      yum:
        name: "{{ packages }}"
        state: present
Running playbook:
ansible-playbook my-playbook.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
Output:
ansible-playbook register_sys_rh.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
BECOME password:
PLAY [all] ******************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [testing connection] ***************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [Installing packages] **************************************************************************************************************************************************
fatal: [xx.xxx.13.105]: FAILED! => {"changed": false, "msg": "This command has to be run under the root user.", "results": []}
PLAY RECAP ******************************************************************************************************************************************************************
xx.xxx.13.105 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
inventory:
ansible-inventory --list | jq '.master'
{
  "hosts": [
    "xx.xxx.13.105"
  ]
}
I have already copied my id_rsa.pub to the host, so I can now log in to the host without a password.
I can log in and do sudo su or run any other command that needs root privilege.
[tco@control-plane-0 ~]$ whoami
tco
[tco@control-plane-0 ~]$ hostname -I
xx.xxx.13.105 192.168.122.1
[tco@control-plane-0 ~]$ sudo su
[sudo] password for tco:
[root@control-plane-0 tco]#
I explicitly override the user and become method on the ansible-playbook command line; I have no idea what I am doing wrong here.
Thanks in advance.
Fixed it.
But, I need to understand the Ansible concept better.
I changed ansible.cfg to this (changed become_user to root):
[defaults]
inventory = <my-inventory-path>
remote_user = tco
[privilege_escalation]
become=True
become_method=sudo
become_ask_pass=False
become_user=root
become_pass=<password>
And, running it like this:
ansible-playbook my-playbook.yml --limit master
this gives me an error:
FAILED! => {"msg": "Missing sudo password"}
So, I run like this:
ansible-playbook my-playbook.yml --limit master --ask-become-pass
and when prompted for a password I provide the tco password (I am not sure what the root user's password is).
And this works.
Not sure why the password in the cfg file is not working, even though I provide the same password when prompted.
As per my understanding, become_user and become_pass are what Ansible uses to run privileged commands. But here I am setting remote_user: tco and become_user: root.
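One way to supply the become password without a prompt, sketched along the lines of the ansible_become_password example earlier on this page (the file name become.yml and the placeholders are illustrative): set ansible_become_password for the host, e.g. in host_vars, and encrypt that file with ansible-vault instead of putting a password in ansible.cfg:
shell> cat host_vars/<your-host>/become.yml
ansible_become_password: <tco-password>
shell> ansible-vault encrypt host_vars/<your-host>/become.yml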
This error is because the playbook is unable to run on the remote nodes as a root user.
We can modify the ansible.cfg file in order to resolve this issue.
Steps:
Open ansible.cfg; by default: sudo vim /etc/ansible/ansible.cfg
Search for the section with /privilege_escalation (in vim, / searches for the keyword that follows)
Press i (insert mode) and uncomment the following lines:
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
Note: If the .cfg lines are not modified, the corresponding values can be passed via the terminal during playbook execution (as specified in the question itself).
Run the playbook again with: ansible-playbook playbookname.yml
Note: if there is no SSH authentication pre-established between the Ansible controller and the nodes, appending -kK is required (this asks for the passwords for the remote nodes), i.e. ansible-playbook playbookname.yml -kK
This will require you to input password for the SSH connection, as well as the BECOME password(the password for the privileged user on the node)

Why doesn't Ansible do the task at the first attempt in GitLab?

I am executing some Ansible playbooks via gitlab-ci, and what I see is:
The Ansible playbook executes successfully through the pipeline, but it doesn't produce the output it is intended to produce.
When I retry the GitLab job, it produces the output I need.
This is one of the many playbooks I am executing through gitlab:
1_ca.yaml
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Create ca-csr.json
      become: true
      copy:
        dest: ca-csr.json
        content: '{"CN":"Kubernetes","key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"Portland","O":"Kubernetes","OU":"CA","ST":"Oregon"}]}'
    - name: Create ca-config.json
      become: true
      copy:
        dest: ca-config.json
        content: '{"signing":{"default":{"expiry":"8760h"},"profiles":{"kubernetes":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"8760h"}}}}'
    - name: Create the ca.pem & ca-key.pem
      # become: true
      shell: |
        cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Basically what this does is create some certs I need.
But on the first attempt, even though the pipeline passes, it doesn't generate these certs. When I restart that particular job in GitLab (running the same job a second time), it generates these certs.
Why is this happening?
This is what my .gitlab-ci.yaml looks like:
Create-Certificates:
  stage: ansible-play-books-create-certs
  retry:
    max: 2
    when:
      - always
  script:
    - echo "Executing ansible playbooks for generating certficates"
    - ansible-playbook ./ansible-playbooks/1_ca/1_ca.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/2_admin.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/3_kubelet.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/4_kube-controller.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/5_kube-proxy.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/6_kube-scheduler.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/7_kube-api-server.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/8_service-account.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/9_distribute-client-server-cert.yaml
  # when: delayed
  # start_in: 1 minutes
  tags:
    - banuka-gcp-k8s-hard-way
PS: These Ansible playbooks execute on the Ansible host itself, not on remote servers, so I can log into the Ansible master server and check whether these files were created or not.
Running your playbook without the last shell task produces the following output:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [127.0.0.1] **************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [127.0.0.1]
TASK [Create ca-csr.json] *****************************************************************************************************************************************************************************************
[WARNING]: File './ca-csr.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
TASK [Create ca-config.json] **************************************************************************************************************************************************************************************
[WARNING]: File './ca-config.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
PLAY RECAP ********************************************************************************************************************************************************************************************************
127.0.0.1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and checking the existence:
$ ls ca* -al
-rw------- 1 root root 155 Aug 17 02:48 ca-config.json
-rw------- 1 root root 129 Aug 17 02:48 ca-csr.json
So although it's quite a dirty way of writing a playbook, it works.
Why is it dirty?
you're not using any inventory
you should use local_action and not connection: local for local tasks (see the sketch below)
you are misusing Ansible, a multi-node configuration management tool, to run what is essentially a bash script
So in conclusion, there's nothing wrong with your Ansible playbook (or maybe the file permissions?), and if it does not run you should look more in the gitlab-ci direction.
You would need to provide more details on the GitLab CI setup, but maybe the stage is not correct?
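For reference, a rough sketch of the first task rewritten with local_action instead of connection: local (the JSON content is shortened here; this only illustrates the shape):
- hosts: 127.0.0.1
  tasks:
    - name: Create ca-csr.json
      # local_action runs this task on the controller regardless of the target host
      local_action:
        module: copy
        dest: ca-csr.json
        content: '{"CN":"Kubernetes"}'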

Gitlab CI with Ansible

I am creating a pipeline which is automatically triggered when I push my code to gitlab.com.
The project is about the provisioning of a machine.
Here is my .gitlab-ci.yml file:
ansible_build:
  image: debian:10
  script:
    - apt-get update -q -y
    - apt-get install -y ansible git openssh-server keychain
    - service ssh stop
    - service ssh start
    - cp files/<ad-hoc-created-key> key.pem && chmod 600 key.pem
    - eval `keychain --eval` > /dev/null 2>&1
    - ssh-add key.pem
    - ansible-galaxy install -r requirements.yml
    - ansible-playbook provision.yml --inventory hosts --limit local
When I push my code, the GitLab environment starts running all the commands, but then it exits with the following error:
$ ansible-playbook provision.yml --inventory hosts --limit local
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [127.0.0.1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Host key verification failed.", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/builds/team/ansible/provision.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=0 changed=0 unreachable=1 failed=0
On my local PC, I solved it using the ssh-copy-id <path-to-the-key> <localhost> command, but I don't know how to solve it for gitlab-ci, given that it's not an environment I can control.
I also tried replacing the 127.0.0.1 IP address with localhost:
ansible-playbook provision.yml --inventory hosts --limit localhost
Then it fails:
ansible-playbook provision.yml --inventory hosts --limit localhost
[WARNING] Ansible is being run in a world writable directory (/builds/teamiguana/minerva-ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
[WARNING]: Found both group and host with same name: localhost
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.\r\nroot@localhost: Permission denied (publickey,password).", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @/builds/teamiguana/minerva-ansible/provision.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0
I don't have experience setting up something similar, but my first thought would be to check:
What system user is Gitlab trying to SSH as?
What system user has the corresponding public keys on the remote hosts?
You can override which user Ansible connects with either in the playbooks, or via --user <user> command-line flag, see https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-u.
Though maybe I'm misunderstanding, because I just noticed that you've set --limit local in your command?
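Another option, not mentioned in the answers here but borrowing the pattern from the local-connection question further down this page: if the play is really meant to run on the CI runner itself, mark the host as local in the inventory so Ansible does not try to SSH at all. A sketch, assuming a local group that matches your --limit local:
[local]
localhost ansible_connection=local
With that, ansible-playbook provision.yml --inventory hosts --limit local runs the play over the local connection instead of SSH.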
You may try adding the environment variable ANSIBLE_TRANSPORT with the value "local" to your ansible-playbook command, like this:
ANSIBLE_TRANSPORT=local ansible-playbook provision.yml --inventory hosts --limit local

Using --become for ansible_connection=local

With a personal user account (userx) I run the ansible playbook on all my specified hosts. In ansible.cfg the remote user (which can become root) to be used is:
remote_user = ansible
For the remote hosts this all works fine. It connects as the ansible user and executes all tasks as desired, including changes (like /etc/ssh/sshd_config) that require root rights.
But now I also want to execute the playbook on the Ansible host itself. I put the following in my inventory file:
localhost ansible_connection=local
which now indeed executes on localhost, but as userx, and this results in "Access denied" for some tasks it needs to do.
This is of course somewhat expected, since remote_user says something about the remote user, not the local one. But still, I expected the playbook to --become locally too, executing the tasks as root (e.g. via sudo su -). It seems not to do that.
Running the playbook with --become -vvv tells me
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: userx
and it seems not to try to execute the tasks with sudo. And without using sudo, the task fails.
How can I tell Ansible to use sudo / become on the local connection too?
Nothing special is required. Proof:
The playbook:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - command: whoami
      register: whoami
    - debug:
        var: whoami.stdout
The execution line:
ansible-playbook playbook.yml --become
The result:
PLAY [localhost] ***************************************************************************************************
TASK [command] *****************************************************************************************************
changed: [localhost]
TASK [debug] *******************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "whoami.stdout": "root"
}
PLAY RECAP *********************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
The ESTABLISH LOCAL CONNECTION FOR USER: message will always show the current user, as it is the account used "to connect".
Later, the command(s) called by the module get executed with elevated permissions.
Of course, you can add become: yes at either the play level or for individual tasks.
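For example, a minimal variant of the playbook above with escalation set at the play level instead of passing --become on the command line (just a sketch of the same whoami check):
---
- hosts: localhost
  gather_facts: no
  connection: local
  become: yes
  tasks:
    - command: whoami
      register: whoami
    - debug:
        var: whoami.stdout
It can then be run with a plain ansible-playbook playbook.yml (add -K if sudo requires a password locally).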
