I am writing a playbook for my application Tomcat node. It will copy, deploy, and stop/start Tomcat instances.
I have a hop box serverA, another hop box serverB, and a Tomcat node tomcatC. Manually, using PuTTY, I use the steps below to get onto the Tomcat node:
Log in to serverA as userId1
ssh to serverB as userId2
ssh to tomcatC as userId1
sudo to the tomcat user
I am also able to ssh directly to tomcatC from serverA, and my Ansible control node is serverA, which is where I run the playbooks.
How do I run my playbook for this setup? Below is the playbook I am using as of now, but it is not working:

ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass

And my-V3.yml looks like this:
- hosts: '{{ target_env }}'
  # serial: 1
  remote_user: userId1
  become: yes
  become_user: tomcat
Getting this error now:
GATHERING FACTS ***************************************************************
fatal: [tomcatC] => Missing become password
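The "Missing become password" error itself has a direct fix: become_user: tomcat escalates through sudo, so either configure passwordless sudo for userId1 on tomcatC, or pass the become password at run time. A minimal sketch of the invocation, reusing the command line from the question with one extra flag:

ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass --ask-become-pass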
You can set the user a command is run as like so:
- cron:
    name: "Clear Root Mail"
    minute: "0"
    hour: "22"
    job: "sudo rm /var/spool/mail/root"
    user: myuser
Or use become: true like so:
- name: Start Server
  shell: "nohup /home/myuser/StartServer.sh &"
  become: true
You can also have Ansible call shell scripts from the jump box to run the commands you need. Your actual problem, though, looks like you don't have the correct ssh key applied.
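If the hop through serverB were ever required (it is not here, since serverA can reach tomcatC directly), ssh's ProxyJump can be wired into the inventory via ansible_ssh_common_args. A sketch; the group name tomcat_nodes is hypothetical, while the host and user names come from the question:

[tomcat_nodes]
tomcatC ansible_user=userId1

[tomcat_nodes:vars]
# route the connection through the hop box; ProxyJump needs OpenSSH 7.3+ on the control node
ansible_ssh_common_args='-o ProxyJump=userId2@serverB'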
I am trying to run a subsequent task after I am connected using ssh. I connect using this task in my playbook:
- name: connect using password   # task 1; this task sets/connects me as root
  expect:
    command: ssh -o "StrictHostKeyChecking=no" myuser@********
    responses:
      "password:":
        - my password
        - my password
  delegate_to: localhost
That task works fine and I can see that I am connected. The problem is that when I try to run subsequent tasks, for example:
- name: copy folder   # task 2 in the same playbook
  copy:
    src: "files/mylocalfile.txt"
    dest: "etc/temp"
    mode: "0777"
I have the following message:
"msg: etc/temp not writable"
How do I continue executing the remaining tasks as the root user that got connected in task 1?
I believe this might not be an Ansible question but a Linux one: is your user in the wheel group (or otherwise allowed to sudo)?
Also note that the ssh session opened by the expect task ends as soon as that task finishes; each Ansible task uses its own connection, so you cannot stay logged in from task 1. Instead, Ansible has the become directive, which will let you execute a task as root, provided the user you connect with is allowed to escalate privileges. The task you want to run with privileges would be something like:
- name: copy folder   # task 2 in the same playbook
  become: yes
  copy:
    src: "files/mylocalfile.txt"
    dest: "etc/temp"
    mode: "0777"
You can use become_user if you need to specify the user to run the task as, and if a password is required for escalation, you can have Ansible prompt for it when running ansible-playbook by passing --ask-become-pass (short form -K).
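For example, a sketch of the invocation (the playbook name is a placeholder):

ansible-playbook site.yml --ask-become-pass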
The following link offers documentation about privilege escalation in Ansible:
https://docs.ansible.com/ansible/latest/user_guide/become.html
My Ansible playbook fails: $USER and $HOME are not set correctly on the remote server.
I have a user called lala on remote-server.
Most tasks get executed as user root, directly via ssh:
[all:vars]
ansible_user = root
But this task should get executed as user lala via ssh:
- name: migrate
  command: /home/lala/bin/manage.py migrate
  remote_user: lala
This fails. The uid of the remote process belongs to lala, but environment variables like $HOME and $USER still belong to user root.
I would like Ansible to connect as user lala via ssh directly.
I can see this clearly if I use -vvvv:
<coffee-and-sugar.club> SSH: EXEC ssh -vvv ... -o 'User="root"' ...
How do I make Ansible connect via ssh as lala@remote-server?
I found a solution. I don't use remote_user; instead I set the variable ansible_user for the task, since a connection variable set in the inventory takes precedence over the remote_user keyword:
- name: migrate
  command: /home/lala/bin/manage.py migrate
  vars:
    ansible_user: lala
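Putting both together, a minimal sketch of the play (task names are illustrative; ansible_user = root still comes from the inventory):

- hosts: remote-server
  tasks:
    - name: runs as root, the inventory-wide ansible_user
      command: whoami

    - name: migrate
      command: /home/lala/bin/manage.py migrate
      vars:
        ansible_user: lala   # overrides the inventory value for this task only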
I have tried connecting to a remote "custom" Linux VM: I copied my public ssh key to it, yet I am unable to get Ansible to ping or connect to it; I keep getting "remote host unreachable". I think it is because it runs a custom version of Linux and ssh auth might behave differently.
The following remote ssh command works:

# ssh -t user@server_ip "show vd"
password: mypassword
I'm trying to convert the above command into an Ansible playbook:
---
- hosts: server_ip
  gather_facts: no
  remote_user: root
  tasks:
    - name: check device status
      shell: ssh -i server_ip "show vd"
      register: output
    - expect:
        command: sh "{{ output.stdout }}"
        responses:
          (?i)password: user
I'm unable to get it to work, and I'm not sure if this is the right way of doing it; your input is highly appreciated.
Whenever you have ssh connection problems you should add the -vvvv parameter to your ansible-playbook command line. That way it will give detailed information about the ssh connection and errors.
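For example (the inventory and playbook names are placeholders):

ansible-playbook -i inventory site.yml -vvvv

Since the device authenticates with a password rather than a key, it may also help to add --ask-pass so Ansible prompts for the ssh password instead of assuming key-based auth.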
I have Ansible installed on host1. There is a file on host2 which I need to copy to host3 with Ansible. I am using RHEL.
I have the following yml running against host2, but it gets stuck at the copy-file task.
- name: Ensure sshd allows agent forwarding
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^#?AllowAgentForwarding'
    line: 'AllowAgentForwarding yes'
    follow: yes
    backup: yes
  become: yes
  register: changed_sshd_config

- name: "RHEL: Restart sshd"
  shell: /etc/init.d/sshd restart
  become: yes
  when: changed_sshd_config is changed

- name: Copy file from host2 to host3
  shell: rsync -r -v -e ssh /root/samplefile.gz root@host3:/root
Can anyone explain what is missing here? If you can, please provide detailed steps naming the correct hosts.
The problem most likely is that you cannot log in from host2 to host3, and Ansible hangs at this task because rsync is waiting for you to enter the ssh password. Are you able to log in manually from host2 to host3?
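You can check this quickly from host2: with BatchMode, ssh fails instead of prompting for a password, so the command below succeeds only if key-based (or agent-forwarded) login works. Host names are taken from the question:

# run this on host2
ssh -o BatchMode=yes root@host3 true && echo "key auth OK"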
I answered the same question before in "How communicate two remote machine through ansible" (can't flag as duplicate because no answer was accepted...).
The following is a slightly modified version of the linked answer. The tasks were written for CentOS, so they should work on RHEL.
For this to work, you will either need to have your private ssh key deployed to host2 or, preferably, enable ssh agent forwarding, for example in your .ssh/config:
Host host2
    ForwardAgent yes
Additionally, sshd on host2 needs to accept agent forwarding. Here are some tasks I use to do this:
- name: Ensure sshd allows agent forwarding
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^#?AllowAgentForwarding'
    line: 'AllowAgentForwarding yes'
    follow: yes
    backup: yes
  become: yes
  register: changed_sshd_config

- name: Restart sshd
  shell: systemctl restart sshd.service
  become: yes
  when: changed_sshd_config is changed
You might need to configure this in a separate playbook, because Ansible might keep an open ssh connection to the host, and after activating agent forwarding you will probably need to re-connect.
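Note also that agent forwarding only helps if an agent holding the right key is running where you start Ansible. A sketch of the setup on host1 before the run (the key path and playbook name are assumptions):

eval "$(ssh-agent)"      # start an agent for this shell
ssh-add ~/.ssh/id_rsa    # load the key that host3 accepts
ansible-playbook -i inventory copy-file.yml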
I am running an Ansible playbook from system 1 which runs tasks on system 2 to take a backup; afterwards, I want to copy the backup file from system 2 to system 3.
I am automating the command below, where /bck1/test is on system 2 and /opt/backup is on system 3:

rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup
You can run the raw rsync command with the shell module.
tasks:
  - shell: rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup
For this to work, you will either need to have your private ssh key deployed to system 2 or, preferably, enable ssh agent forwarding, for example in your .ssh/config:
Host host2
    ForwardAgent yes
Additionally, sshd on system 2 needs to accept agent forwarding. Here are some tasks which I use to do this:
- name: Ensure sshd allows agent forwarding
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^#?AllowAgentForwarding'
    line: 'AllowAgentForwarding yes'
    follow: yes
    backup: yes
  become: yes
  register: changed_sshd_config

- name: "Debian: Restart sshd"
  shell: invoke-rc.d ssh restart
  become: yes
  when:
    - ansible_distribution in ["Debian", "Ubuntu"]
    - changed_sshd_config is changed

- name: "CentOS 7: Restart sshd"
  shell: systemctl restart sshd.service
  become: yes
  when:
    - ansible_distribution == "CentOS"
    - ansible_distribution_major_version == "7"
    - changed_sshd_config is changed
There are two separate tasks for restarting sshd, one for Debian/Ubuntu and one for CentOS 7. Pick what you need, or adapt them to your system.
You might need to configure this in a separate playbook, because Ansible will keep an open ssh connection to the host, and after activating agent forwarding you will most probably need to re-connect.
PS: It's not the best idea to allow ssh login for user root, but that is another topic. :)
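As an alternative to the raw rsync shell command, the same host-to-host copy can be expressed with Ansible's synchronize module plus delegate_to: the module then runs rsync on system 2 and pushes to the play's target host. A sketch; host names and paths are adapted from the question, and working ssh keys or agent forwarding are still required:

- hosts: system3
  tasks:
    - name: Copy backup from system 2 to system 3
      synchronize:
        src: /bck1/            # directory on system 2 holding the test.* files
        dest: /opt/backup
      delegate_to: system2     # run the rsync on system 2, not on the control node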