I'm writing an Ansible playbook to manage backups, and I want two different tasks:
- name: Setup local machine for backup
  cron:
    cron_file: /etc/cron.d/backup
    hour: 4
    minute: 0
    job: /root/do_backup.sh
    state: present
    name: backup

- name: Setup backup server for new machine
  shell:
    cmd: "mkdir /backups/{{ inventory_hostname }}"
Is it possible to tell Ansible that the second task is intended to be executed on another machine in my inventory?
I don't want a dedicated playbook, because some later tasks need to run after the task on the backup server.
I'm answering my own question: task delegation is what I'm looking for:
- name: Setup backup server for new machine
  delegate_to: backup-server
  shell:
    cmd: "mkdir /backups/{{ inventory_hostname }}"
I am trying to run a task right after connecting over ssh. I connect using this in my playbook:
- name: connect using password # task 1; this task sets/connects me as root
  expect:
    command: ssh -o "StrictHostKeyChecking=no" myuser@********
    responses:
      "password:":
        - my password
        - my password
  delegate_to: localhost
That task is fine and I am able to see that I am connected. The problem now is that when I try to run subsequent tasks, for example:
- name: copy folder # task 2 in the same playbook
  copy:
    src: "files/mylocalfile.txt"
    dest: "etc/temp"
    mode: "0777"
I have the following message:
"msg: etc/temp not writable"
How do I continue executing the remaining tasks as the root user that was connected in task 1?
I believe this might not be an Ansible question, but a Linux one.
Is your user in the wheel group?
Ansible has the directive become, which will let you execute a task as root, provided the user you are connecting with is allowed to escalate privileges. The task you want to run with privileges would be something like:
- name: copy folder # task 2 in the same playbook
  become: yes
  copy:
    src: "files/mylocalfile.txt"
    dest: "etc/temp"
    mode: "0777"
You can use become_user if you need to specify the user you want to run the task as, and if the privileged user has a password, you can ask Ansible to prompt for it when running ansible-playbook, using --ask-become-pass.
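For instance, a minimal sketch of both options together (playbook.yml and the admin account are placeholders, not names from the question):

- name: copy folder as a privileged user
  become: yes
  become_user: admin   # hypothetical privileged account on the target
  copy:
    src: "files/mylocalfile.txt"
    dest: "etc/temp"
    mode: "0777"

ansible-playbook playbook.yml --ask-become-pass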
The following link offers documentation about privilege escalation in ansible:
https://docs.ansible.com/ansible/latest/user_guide/become.html
I am writing a playbook for my application's Tomcat node. It will copy, deploy, and stop/start Tomcat.
I have a hop box serverA, another hop box serverB, and a Tomcat node tomcatC. Manually, using PuTTY, I follow the steps below to get onto the Tomcat node:
1. Log in to serverA using userId1.
2. ssh to serverB using userId2.
3. ssh to tomcatC using userId1.
4. sudo to the tomcat user.
I am also able to ssh directly to tomcatC from serverA, and my Ansible master is serverA as well, from where I am running the playbooks.
How do I run my playbook for this? Below is the playbook I am using as of now, but it's not working:
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass
And my-V3.yml looks like below:
- hosts: '{{ target_env }}'
  #serial: 1
  remote_user: userId1
  become: yes
  become_user: tomcat
I am now getting this error:
GATHERING FACTS ***************************************************************
fatal: [tomcatC] => Missing become password
You can set the user a command is run as like so:
- cron:
    name: "Clear Root Mail"
    minute: "0"
    hour: "22"
    job: "sudo rm /var/spool/mail/root"
    user: myuser
Or use become: true like so:
- name: Start Server
  shell: "nohup /home/myuser/StartServer.sh &"
  become: true
You can also have shell scripts run the commands you need, with Ansible calling them from the jump box you have. Your problem looks like you don't have the correct ssh key applied, though.
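Also, since the error above is "Missing become password", one likely fix, assuming the sudo to the tomcat user requires a password, is to let Ansible prompt for it at run time:

ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass --ask-become-pass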
Having hit a brick wall troubleshooting why one shell script hangs when I run it via Ansible on the remote host, I've discovered that it executes successfully if I run it in an ssh session from the Ansible host.
I now want to build that into a playbook as follows:
- name: Run script
  local_action: shell ssh $TARGET "/home/ansibler/script.sh"
I just need to know how to access the $TARGET that this playbook is running on from the selected/limited inventory so I can concatenate it into that local_action.
Is there an easy way to access that?
Try with ansible_host:
- name: Run script
  local_action: 'shell ssh {{ ansible_host }} "/home/ansibler/script.sh"'
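If your inventory uses host aliases, ansible_host may differ from inventory_hostname; a quick sketch to check which value you will get:

- name: Show the resolved target address
  debug:
    msg: "inventory_hostname={{ inventory_hostname }}, ansible_host={{ ansible_host }}"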
I am running an Ansible playbook from system 1, which runs tasks on system 2 to take a backup; after that, I want to copy the backup file from system 2 to system 3.
I am automating the command below, where /bck1/test is on system 2 and /opt/backup is on system 3:
rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup
You can run the raw rsync command with the shell module.
tasks:
  - shell: rsync -r -v -e ssh /bck1/test.* root@host3:/opt/backup
For this to work, you will either need to have your private ssh key deployed to system 2, or, preferably, enable ssh agent forwarding, for example in your .ssh/config:
Host host2
ForwardAgent yes
Additionally, sshd on system 2 would need to accept agent forwarding. Here are some tasks which I use to do this:
- name: Ensure sshd allows agent forwarding
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^#?AllowAgentForwarding'
    line: AllowAgentForwarding yes
    follow: yes
    backup: yes
  become: yes
  register: changed_sshd_config

- name: "Debian: Restart sshd"
  shell: invoke-rc.d ssh restart
  become: yes
  when:
    - ansible_distribution in [ "Debian", "Ubuntu" ]
    - changed_sshd_config is changed

- name: "CentOS 7: Restart sshd"
  shell: systemctl restart sshd.service
  become: yes
  when:
    - ansible_distribution == "CentOS"
    - ansible_distribution_major_version == "7"
    - changed_sshd_config is changed
There are two separate tasks for restarting sshd on Debian and CentOS 7. Pick what you need, or adapt them to your system.
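As a side note, a handler would collapse the per-distribution restarts into one place; a minimal sketch, assuming the service module can manage ssh/sshd on your targets:

- name: Ensure sshd allows agent forwarding
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^#?AllowAgentForwarding'
    line: AllowAgentForwarding yes
  become: yes
  notify: Restart sshd

handlers:
  - name: Restart sshd
    service:
      # Debian/Ubuntu name the service "ssh", CentOS names it "sshd"
      name: "{{ 'ssh' if ansible_distribution in ['Debian', 'Ubuntu'] else 'sshd' }}"
      state: restarted
    become: yes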
You might need to configure this in a separate playbook, because Ansible keeps an open ssh connection to the host, and after activating agent forwarding you will most probably need to re-connect.
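On more recent Ansible versions you can also force that re-connect inside the same playbook; a minimal sketch using meta: reset_connection:

- name: Drop the open ssh connection so agent forwarding takes effect
  meta: reset_connection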
PS: It's not the best idea to allow ssh login for user root, but that is another topic. :)
I'm new to Ansible. I'm trying to start a process on a remote host using a very simple Ansible playbook.
Here is what my playbook looks like:
- hosts: somehost
  gather_facts: no
  user: ubuntu
  tasks:
    - name: change directory and run jetty server
      shell: cd /home/ubuntu/code; nohup ./run.sh
      async: 45
run.sh calls a java server process with a few parameters.
My understanding was that with async, my process on the remote machine would continue to run even after the playbook has completed (which should happen after around 45 seconds).
However, as soon as my playbook exits, the process started by run.sh on the remote host terminates as well.
Can anyone explain what's going on and what I am missing here?
Thanks.
I have an Ansible playbook to deploy my Play application. I use the shell's command substitution to achieve this, and it does the trick for me. I think this is because command substitution spawns a new sub-shell instance to execute the command.
- hosts: somehost
  gather_facts: no
  user: ubuntu
  tasks:
    - name: change directory and run jetty server
      shell: dummy=$(nohup ./run.sh &) chdir=/home/ubuntu/code
Give async a longer time, say six months, a year, or even more, and add poll: 0; this should be fine.
Or convert this process to an init script and use the service module.
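A minimal sketch of that async fire-and-forget combination, with an arbitrarily large window:

- name: change directory and run jetty server
  shell: nohup ./run.sh &
  args:
    chdir: /home/ubuntu/code
  async: 2592000  # keep the job handle alive for up to 30 days
  poll: 0         # don't wait for completion; fire and forget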
I'd concur. Since it's long-running, I'd call it a service and run it like so: just create an init.d script, push that out with copy, then run the service.
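A minimal sketch of that approach (myserver is a placeholder for your init script and service name):

- name: Push the init script to the remote host
  copy:
    src: files/myserver   # hypothetical local init script
    dest: /etc/init.d/myserver
    mode: "0755"
  become: yes

- name: Start the service
  service:
    name: myserver
    state: started
  become: yes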