Executing a task after being logged in as root in Ansible - bash

I am trying to run subsequent tasks after connecting over SSH. I am connecting using this in my playbook:
- name: connect using password # task 1; this task set/connect me as root
  expect:
    command: ssh -o "StrictHostKeyChecking=no" myuser#********
    responses:
      "password:":
        - my password
        - my password
  delegate_to: localhost
That task works and I am able to see that I am connected. The problem is that when I try to run subsequent tasks, for example:
- name: copy folder # task 2 in the same playbook
  copy:
    src: "files/mylocalfile.txt"
    dest: "etc/temp"
    mode: "0777"
I get the following message:
"msg: etc/temp not writable"
How do I continue executing the remaining tasks as the root user that was connected in task 1?

I believe this might not be an Ansible question, but a Linux one.
Is your user in the wheel group?
Ansible has the directive become, which will let you execute a task as root, if the user you are connecting with is allowed to escalate privileges. The task you want to run with privileges would be something like:
- name: copy folder # task 2 in the same playbook
  become: yes
  copy:
    src: "files/mylocalfile.txt"
    dest: "etc/temp"
    mode: "0777"
You can use become_user if you need to specify the user you want to run the task as, and if you have a password for the privileged user, you can ask Ansible to prompt for it when running ansible-playbook by passing --ask-become-pass (or -K).
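For example, a minimal sketch of escalating to a specific account rather than root (appuser and the destination path are placeholders, not taken from the question):
- name: copy folder as another user
  become: yes
  become_user: appuser               # placeholder account name
  copy:
    src: "files/mylocalfile.txt"
    dest: "/tmp/mylocalfile.txt"     # placeholder destination
    mode: "0644"
Running the playbook with ansible-playbook playbook.yml --ask-become-pass (or -K) makes Ansible prompt for the become password.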
The following link offers documentation about privilege escalation in ansible:
https://docs.ansible.com/ansible/latest/user_guide/become.html

Related

How to run a task from a playbook on a specific host?

I'm writing an Ansible playbook to manage backup and I want two different tasks:
- name: Setup local machine for backup
  cron:
    cron_file: /etc/cron.d/backup
    hour: 4
    minute: 0
    job: /root/do_backup.sh
    state: present
    name: backup
- name: Setup backup server for new machine
  shell:
    cmd: "mkdir /backups/{{inventory_hostname}}"
Is it possible to tell Ansible that the second task is intended to be executed on another machine in my inventory?
I don't want a dedicated playbook, because some later tasks should be executed after the task on the backup server.
I'm answering my own question: task delegation is what I'm looking for:
- name: Setup backup server for new machine
  delegate_to: backup-server
  shell:
    cmd: "mkdir /backups/{{inventory_hostname}}"

How to run playbook on my application server using sudo?

I am writing a playbook for my application Tomcat node. It will copy, deploy, and stop/start Tomcat.
I have a hop box serverA, another hop box serverB, and a Tomcat node tomcatC. Manually, using PuTTY, I use the steps below to get onto the Tomcat node:
Log in to serverA using userId1
ssh to serverB using userId2
ssh to tomcatC using userId1
sudo to the tomcat user.
Also, I am able to ssh directly to tomcatC from serverA, and my Ansible master is also serverA, from where I am running the playbooks.
How do I run my playbook for this? Below is the playbook I am using as of now, but it's not working.
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass
and my-v3.yml looks like below:
- hosts: '{{ target_env }}'
  #serial: 1
  remote_user: userId1
  become: yes
  become_user: tomcat
Getting this error now:
GATHERING FACTS ***************************************************************
fatal: [tomcatC] => Missing become password
You can set the user a command is run as like so:
- cron:
    name: "Clear Root Mail"
    minute: "0"
    hour: "22"
    job: "sudo rm /var/spool/mail/root"
    user: myuser
Or use become: true like so:
- name: Start Server
  shell: "nohup /home/myuser/StartServer.sh &"
  become: true
You can also have Ansible call shell scripts from the jump box you have, to run the commands you need. Your problem, though, looks like you don't have the correct SSH key applied.
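If a missing or wrong key really is the problem, one way to point Ansible at the right credentials is per-host inventory variables; the key path below is only a placeholder:
# INI inventory entry; adjust the key path to wherever the key for tomcatC lives
tomcatC ansible_user=userId1 ansible_ssh_private_key_file=/home/userId1/.ssh/id_rsa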

How to switch out of the root account during server setup?

I need to automate the deployment of some remote Debian servers. These servers come with only the root account. I wish to make it such that the only time I ever need to login as root is during the set up process. Subsequent logins will be done using a regular user account, which will be created during the set up process.
However, during the setup process, I need to set PermitRootLogin no and PasswordAuthentication no in /etc/ssh/sshd_config. Then I will be doing a service sshd restart. This will drop the SSH connection, because Ansible logged into the server using the root account.
My question is: how do I make Ansible ssh into the root account, create a regular user account, set PermitRootLogin no and PasswordAuthentication no, then ssh into the server using the regular user account and do the remaining setup tasks?
It is entirely possible that my setup process is flawed. I would appreciate suggestions.
You can actually manage the entire setup process with Ansible, without requiring manual configuration prerequisites.
Interestingly, you can change ansible_user and ansible_password on the fly, using set_fact. Tasks that run after the set_fact will use the new credentials:
- name: "Switch remote user on the fly"
hosts: my_new_hosts
vars:
reg_ansible_user: "regular_user"
reg_ansible_password: "secret_pw"
gather_facts: false
become: false
tasks:
- name: "(before) check login user"
command: whoami
register: result_before
- debug: msg="(before) whoami={{ result_before.stdout }}"
- name: "change ansible_user and ansible_password"
set_fact:
ansible_user: "{{ reg_ansible_user }}"
ansible_password: "{{ reg_ansible_password }}"
- name: "(after) check login user"
command: whoami
register: result_after
- debug: msg="(after) whoami={{ result_after.stdout }}"
Furthermore, you don't have to fully restart sshd to cause configuration changes to take effect, and existing SSH connections will stay open. Per sshd(8) manpage:
sshd rereads its configuration file when it receives a hangup signal, SIGHUP....
So, your setup playbook could be something like this (a minimal sketch follows the list):
login initially with the root account
create the regular user and set his password or configure authorized_keys
configure sudoers to allow the regular user to execute commands as root
use set_fact to switch to that account for the rest of the playbook (remember to use become: true on tasks after this one, since you have switched from root to the regular user; you might even try executing a test sudo command before locking root out)
change the sshd configuration
execute kill -HUP <sshd_pid>
verify by setting ansible_user back to root, and fail if login works
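A minimal sketch of that flow, going the authorized_keys route; the user name, key path, sudoers policy, and pid file location are assumptions to adapt, not something prescribed by the question:
- name: initial setup as root, then lock root out   # sketch only
  hosts: my_new_hosts
  remote_user: root                                  # initial login as root, per the steps above
  gather_facts: false
  vars:
    reg_ansible_user: "regular_user"                 # placeholder name
  tasks:
    - name: create the regular user
      user:
        name: "{{ reg_ansible_user }}"
        shell: /bin/bash

    - name: install the controller's public key for that user (key path is a placeholder)
      authorized_key:
        user: "{{ reg_ansible_user }}"
        key: "{{ lookup('file', '/home/ansible/.ssh/id_rsa.pub') }}"

    - name: allow the regular user to escalate to root (NOPASSWD is just one possible policy)
      copy:
        dest: "/etc/sudoers.d/{{ reg_ansible_user }}"
        content: "{{ reg_ansible_user }} ALL=(ALL) NOPASSWD: ALL\n"
        mode: "0440"
        validate: "visudo -cf %s"

    - name: switch the connection user for the remaining tasks
      set_fact:
        ansible_user: "{{ reg_ansible_user }}"

    - name: disable root login and password authentication
      become: true
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^#?{{ item.key }}"
        line: "{{ item.key }} {{ item.value }}"
      loop:
        - { key: "PermitRootLogin", value: "no" }
        - { key: "PasswordAuthentication", value: "no" }

    - name: read the sshd master pid (default PidFile path assumed)
      become: true
      command: cat /var/run/sshd.pid
      register: sshd_pid

    - name: send SIGHUP so sshd rereads its configuration without dropping sessions
      become: true
      command: "kill -HUP {{ sshd_pid.stdout }}"
The final verification step (switching ansible_user back to root and expecting the login to fail) is left out of the sketch for brevity.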
You probably just want to make a standard user account and add it to sudoers. You could then run Ansible as the standard user, and if you need a command to run as root, you just prefix the command with sudo.
I wrote an article about setting up a deploy user
http://www.altmake.com/2013/03/06/secure-lamp-setup-on-amazon-linux-ami/

How to become a user on a remote machine when running a role in an Ansible playbook

I know this has been discussed in other questions and I have tried various options, but nothing seems to be solving my issue.
I am starting to learn Ansible and trying to run a playbook. I have one role, which has tasks in it. I want to run the whole playbook as user john on the remote machine. The playbook begins with a copying task. Everything was working fine until I started using become and become_user. I tried running the role as user john by specifying this in the major playbook:
---
- hosts: target-machine.com
  roles:
    - role: installSoftware
      become: yes
      become_user: john
Then, when executing the playbook, I run the following:
ansible-playbook -s major.yml -K
which prompts me for
SUDO password:
I enter the password of user john, which exists on the remote target machine. As soon as it starts running the playbook, it hangs at the task that requires user john and complains:
fatal: [target.com]: FAILED! => {"failed": true, "msg": "Failed to get information on remote file (/home/john/software.zip): MODULE FAILURE"}
I also tried the following:
---
- hosts: target-machine.com
  become: yes
  become_user: john
  roles:
    - role: installSoftware
so as to run the whole playbook as user john. It again asks me for the SUDO password and then after 5 minutes complains:
fatal: [openam.ansible-target.com]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "\r\nSorry, try again .\r\n[sudo via ansible, key=kchjtpjorgnvaksspqvgwzzkgtjnxsyv] password: \r\nsudo: 1 incorrect password attempt\r\n", "msg": "MODULE FAILURE"}
although the password I have entered is correct. On SO it was suggested to increase the SSH timeout, which I did in the ansible.cfg file:
[ssh_connection]
ssh_args = -o ServerAliveInterval=60 -o ControlMaster=auto -o ControlPersist=1m
All I want to mirror is what I would run on the target machine:
su john
Enter Password:
john$
Any help will be appreciated. Thank You.
By default, become uses sudo, not su. As a result, the password Ansible is prompting you for is the password of the user you're logged in as, not the user you're changing to. You can change this by setting become_method to su.
If you want to always run all your tasks on that host as a different user, you probably instead want to set ansible_user. See the inventory documentation for more details on that.
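Both options as rough sketches, reusing the host, role, and user names from the question:
# escalate with su instead of sudo, so --ask-become-pass / -K expects john's password
---
- hosts: target-machine.com
  become: yes
  become_user: john
  become_method: su
  roles:
    - role: installSoftware
Or, if every task should simply run as john and john can log in over SSH, set the connection user in the inventory instead:
target-machine.com ansible_user=john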

Identify the SSH user in Ansible vars

In some Ansible script I'm getting a
rsync: mkdir "/var/www/xxx" failed: Permission denied
I need to check which user Ansible is using on the target VM.
How can I print that user with a debug: line?
I'm looking for something like the Unix id command to debug the permission problem.
Ansible will always default to the current user (the one in your shell); if you want to connect to a remote machine as a different user, you can use remote_user in your Ansible playbook.
See: http://docs.ansible.com/ansible/intro_configuration.html#remote-user for more details.
If you want to run a shell command and capture the output:
- name: "Run a shell command"
shell: /usr/bin/id
register: result
- name: Print the value of result
debug: var=result
or
- name: Print the user id using the ansible_user_id fact
  debug: msg="{{ ansible_user_id }}"
