Ansible Command, Shell modules not executing script - bash

I have a tomcat.sh script whose purpose is to restart (stop and start) Tomcat after it has been deployed onto the remote host.
I noticed that the Shell and Command modules are not executing the .sh file. However, I am able to execute the .sh file manually on the remote host as the remote user.
The Playbook tasks are listed below:
Shell
- name: ensures tomcat is restarted
  shell: "nohup {{tomcat_dir}}/apache-tomcat-{{tomcat_version}}/tomcat.sh &"
Command
- name: ensures tomcat is restarted test-test
  command: ./tomcat.sh
  args:
    chdir: "{{tomcat_dir}}/apache-tomcat-{{tomcat_version}}"

I was having the same problem. I have a simple script that is supposed to start a Java program. The shell and command modules worked very erratically: sometimes the Java program started successfully, and sometimes nothing happened, even though the Ansible rc status showed 0 (the successful exit code). I even put an echo "Hello" >> output.log as the first line of my script to check whether the script was actually being run, but even that did not get printed. No errors whatsoever were printed, and the Ansible module exit status (rc) was 0. Also, be sure to look at stderr: sometimes rc is 0, but there is still useful information in stderr.
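For example, a minimal sketch (task names are illustrative; the script path reuses the variables from the question) of registering the result and printing rc, stdout, and stderr so a silent failure like this becomes visible:
- name: run tomcat.sh and capture the result
  shell: "{{tomcat_dir}}/apache-tomcat-{{tomcat_version}}/tomcat.sh"
  register: tomcat_result

- name: show what the script actually returned
  debug:
    msg: "rc={{ tomcat_result.rc }} stdout={{ tomcat_result.stdout }} stderr={{ tomcat_result.stderr }}"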
After lots of hair tearing, I managed to fix my issue. I was running the Java program with "sudo". I removed the sudo from my script and used the "become" directive in the playbook instead - http://docs.ansible.com/ansible/become.html . This directive is available from Ansible 2.0 onwards only, so I had to upgrade my Ansible from 1.5 to 2.0. My playbook finally looked like this:
- name: Execute run.sh
  become: true
  script: /home/vagrant/deploy/target/scripts/run.sh
The script looks like this:
nohup java -Djava.net.preferIPv4Stack=true -Xms1048m -Xmx1048m -cp /x/y/z/1.jar com.github.binitabharati.x.Main >> output.log 2>&1 &
Also, notice that I have not used the command or shell module, instead I used script module.

Does the value of {{tomcat_dir}} start with a "/"? If not, the command will be executed with the path resolved relative to the home directory of whichever user Ansible is using to SSH into the remote host.
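For example, a sketch of defining those variables as absolute paths (the values here are assumptions; adjust them to your layout):
vars:
  tomcat_dir: /opt/tomcat        # absolute path, so nothing resolves relative to the SSH user's home
  tomcat_version: "8.5.0"        # hypothetical version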
Incidentally, if you installed Tomcat via the package manager, this might be easier:
- name: restart tomcat
  become: yes
  become_method: sudo
  service: name=tomcat state=restarted
More detail on the "service" module here
Of course, the service name may be different, like tomcat6 or tomcat7 or whatever, but you get the gist, I hope.
Lastly, if you installed tomcat via ansible using a role from galaxy or github, your role may have handlers that you could copy to get this done.
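As a rough sketch of what such a handler setup might look like (the service name, war file, and deploy path are assumptions):
handlers:
  - name: restart tomcat
    become: yes
    service:
      name: tomcat
      state: restarted

tasks:
  - name: deploy the application    # hypothetical deploy step
    copy:
      src: myapp.war
      dest: /var/lib/tomcat/webapps/myapp.war
    notify: restart tomcat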

Related

Shell command works manually, not using Ansible

I'm playing with Ansible (still learning), but I've run into a problem I can't find a solution to.
I'm trying to install and launch Tomcat on a remote server using Ansible.
The installation is working, but the last step, the activation of the Tomcat server, is failing.
If I manually launch the startup.sh script (as su -) using the following command: bash /opt/tomcat/startup.sh, I can see the Tomcat homepage.
Using the Ansible playbook I wrote, I can't see the Tomcat homepage, even though Ansible doesn't report any errors.
Here is the task I'm running:
- name: Launch Tomcat
  command: bash /opt/tomcat/startup.sh
  become: true
I tried to add become_user: root and become_method: sudo with no success.
I think it may be related to how become: true is handled by ansible but I'm not sure.
Have you also tried using the shell-module instead of the command-module?
With the command module, the command is executed without being processed by a shell. As a consequence, some variables like $HOME are not available, and stream operations like <, >, | and & will not work.
The shell module runs the command through a shell, by default /bin/sh (this can be changed with the executable option). Piping and redirection are therefore available here.
(Source: https://blog.confirm.ch/ansible-modules-shell-vs-command/)
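A small illustration of that difference (paths reused from the question; the log file is hypothetical): with command the ">" is passed to the program as a literal argument, while with shell the redirection actually happens:
- name: the ">" is NOT a redirection here, it is passed to startup.sh as an argument
  command: bash /opt/tomcat/startup.sh > /tmp/startup.log

- name: the ">" is interpreted by /bin/sh, so /tmp/startup.log is written
  shell: bash /opt/tomcat/startup.sh > /tmp/startup.log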
There might be a problem with the environment. "sudo su" is different from "su -" where
-, -l, --login Provide an environment similar to what the user would expect had the user logged in directly.
Try shell (because it allows pipes, redirection, logical operations, ...) without become: true
shell: su - && bash /opt/tomcat/startup.sh
Make sure remote_user is the same whom the su - command works fine for.
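If the login environment is indeed what's missing, another option worth trying (a sketch, not verified against this setup) is to keep become and ask Ansible to use su with a login flag, instead of embedding su in the command:
- name: Launch Tomcat with a login environment
  shell: bash /opt/tomcat/startup.sh
  become: true
  become_method: su
  become_flags: '-l'    # request a login shell, similar to "su -"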
I had the same problem while working with startup.sh in an Ansible script. I found that the Tomcat server process started but then shut down again immediately.
The solution was to start the Tomcat server with nohup through Ansible.
Here is the sample script.
cat start.yml
---
- name: Playbook to stop server
  #hosts: localhost
  hosts: webserver
  tasks:
    - name: Start the server tomcat from UI
      shell:
        nohup /home/tomcat/bin/catalina.sh start >> /home/tomcat/somelog
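If the task still hangs, or Tomcat dies when the SSH connection closes, a fully detached variant is worth trying (a sketch; the log path is taken from the task above):
- name: Start the server tomcat from UI (fully detached)
  shell: nohup /home/tomcat/bin/catalina.sh start > /home/tomcat/somelog 2>&1 < /dev/null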

Run a shell script on startup (not login) on Ubuntu 14.04

I have a build server. I'm using the Azure Build Agent script, a shell script that runs continuously while the server is up. The problem is that I cannot seem to get it to run on startup. I've tried /etc/init.d and /etc/rc.local, and the agent is not being run. There is nothing concerning the build agent in the boot logs.
For /etc/init.d I created the script agent.sh which contains:
#!/bin/bash
sh ~/agent/run.sh
Gave it the proper permissions with chmod 755 agent.sh and moved it to /etc/init.d.
and for /etc/rc.local, I just appended the following
sh ~/agent/run.sh &
before exit 0.
What am I doing wrong?
EDIT: added examples.
EDIT 2: Just noticed that the init.d README says that shell scripts need to start with #!/bin/sh and not #!/bin/bash. Also used absolute path, but no change.
FINAL EDIT: As @ewrammer suggested, I used cron and it worked. crontab -e and then @reboot /home/user/agent/run.sh.
It is hard to see what is wrong if you don't post what you have done, but why not add it as a cron job with @reboot as the pattern? Then cron will run the script every time the computer starts.
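For reference, the crontab line would look roughly like this (the path is taken from the question; the log file is hypothetical):
@reboot /home/user/agent/run.sh >> /home/user/agent/boot.log 2>&1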
Just in case, using a process supervisor could be a good idea. In Ubuntu 14 you don't have systemd, but you can choose from other options: https://en.wikipedia.org/wiki/Process_supervision.
If using immortal, after installing it, you just need to create a run.yml file in /etc/immortal with something like:
cmd: /path/to/command
log:
  file: /var/log/command.log
This will start your script/command on every boot, and also ensures your script/app is kept up and running.

Stuck on debugging an ansible task running remote command that freezes

I'm setting up an Ansible role to install Ahsay Offsite Backup Server.
After downloading and extracting the compressed file containing the software, I need to run the install script. I've determined that the step failing to run is an early step in the script that checks that the current user has appropriate permissions.
When I run the playbook, the final task never finishes.
The role
- name: Check if OBS install files have already been downloaded
  stat:
    path: /tmp/obs/version.txt
  register: stat_result

- name: Ensures /tmp/obs exists
  file: path=/tmp/obs state=directory

- name: Download and extract OBS install files
  unarchive:
    src: https://ahsay-dn.ahsay.com/v6/obsr/62900/obsr-nix.tar.gz
    dest: /tmp/obs
    remote_src: true
    validate_certs: no
  when: stat_result.stat.exists == false

- name: Install OBS
  command: bash -lc "/tmp/obs/bin/install.sh > /tmp/install_output.log"
The playbook configuration is for all tasks to become sudo.
If I run the command in a shell on the remote host, it executes successfully.
I've hit similar issues before where commands fail because (in the case of rvm) it requires the bash_profile to load and pull in a bunch of environment variables first. The fix for that was as I've done above, to wrap the command in bash -lc "...", but that hasn't helped this time.
I'd love any suggestions of how I could continue troubleshooting this one.
You are checking for the file's presence before ensuring the folder exists.
Some applications require a TTY, and when they don't have one they stop and ask some question (one way around that is sketched after the task below).
To really debug while the command is "stuck", connect to the offending machine and try to analyze what the script is doing: look in its /proc/${PID} folder (if you're on Linux), attach to it with strace -p ${PID}, and maybe dup its stderr to see whether it prints something that makes sense to you.
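A rough sketch of that kind of inspection on the remote host (the pgrep pattern is an assumption about the process name):
pid=$(pgrep -f install.sh)      # find the PID of the stuck script
ls -l /proc/$pid/fd             # which files/pipes does it hold open?
cat /proc/$pid/wchan; echo      # which kernel function is it sleeping in?
strace -p "$pid"                # watch its system calls, e.g. a read() waiting for input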
Also, you don't really have to use the command module; you can use the shell module and specify its args to make sure the command runs from a specific folder, like so:
- name: Install OBS
  shell: |
    ./bin/install.sh \
      1> /tmp/install.output.log \
      2> /tmp/install.error.log
  args:
    executable: /bin/bash
    chdir: /tmp/obs
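If the script really does stop to ask a question when it has no TTY (as mentioned above), one hedged workaround is to pre-answer the prompts by feeding it empty lines; whether accepting the defaults is acceptable depends on what install.sh actually asks:
- name: Install OBS non-interactively
  shell: |
    yes "" | ./bin/install.sh \
      1> /tmp/install.output.log \
      2> /tmp/install.error.log
  args:
    executable: /bin/bash
    chdir: /tmp/obs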

Why can't I execute systemctl commands as superuser?

I wrote a script to download and install Kubernetes on an Ubuntu machine.
The last part of the script would be to start the kubelet service.
echo "Initializing the master node"
kubeadm reset
systemctl start kubelet.service
kubeadm init
I am forcing the user to run the script as the root user. However, when the script reaches the systemctl command, it is not able to execute it. Moreover, I tried to execute the command manually as the root user and was not able to do so, yet I am able to execute it as a regular user.
Does anyone know why? Is there a workaround?
A possible workaround is to start the service as a regular user, even though the script runs as root. First, you need to find out who the "original" user is:
originalUser="$(logname 2>/dev/null)"
and then call the service as this user:
su - "$originalUser" -c "systemctl start kubelet.service"
Maybe that specific service depends on being run by a user who is not root (some programs test for that).

How to determine whether a script has previously run using Ansible?

I'm using Ansible to deploy (Git clone, run the install script) a framework to a server. The install step means running the install.sh script like this:
- name: Install Foo Framework
  shell: ./install.sh
  args:
    chdir: ~/foo
How can I determine whether I have executed this step in a previous run of Ansible? I want to add a when condition to this step that only executes if the install.sh script hasn't been run previously.
The install.sh script does a couple of things (replacing some files in the user's home directory), but it's not obvious from just looking at the files whether the script has been run before. The ~/foo.sh file might have existed before; it's not clear whether it was replaced by the install script or was already there.
Is there a way in Ansible to store a value on the server that lets me determine whether this particular task has been executed before? Or should I just create a marker file in the user's home directory (e.g. ~/foo-installed) that I check in later invocations of the playbook?
I suggest using the script module instead. This module has a creates parameter:
a filename, when it already exists, this step will not be run. (added in Ansible 1.5)
So your script could simply touch a file, which would prevent execution of the script in subsequent runs.
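A sketch of that approach, assuming the playbook keeps a local copy of the script and the script touches the marker file as its last step (the script path and marker file name are just examples):
- name: Install Foo Framework
  script: files/install.sh
  args:
    creates: ~/foo_installed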
Here's how I solved it in the end. The pointer to using the creates option helped:
- name: Install Foo Framework
  shell: ./install.sh && touch ~/foo_installed
  args:
    chdir: ~/foo
    creates: ~/foo_installed
Using this approach, the ~/foo_installed file is only created when the install script finishes without an error.
