Playbook 'hangs' when running a Python script that launches another Windows process - ansible

I'm having an issue when running a Python script using the win_command module. The playbook execution hangs indefinitely when the Python script starts a Tomcat process that is supposed to keep running even after the script completes. If I manually kill the Tomcat process, the Ansible playbook completes.
---
- name: Restore product
  win_command: 'python restore-product.py'
  args:
    chdir: C:\temp
I have tried the following within the Python script hoping that Ansible would not be able to track the launched process, but have had no luck:
subprocess.Popen('start cmd /c service.bat startup', shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sys.exit(0)

Do not use Python's popen function; rather, go for spawn() as described here:
https://stackoverflow.com/a/1196122/4222206
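
If changing the Python script is not an option, a workaround on the Ansible side is to run the task asynchronously, so Ansible fires it off without waiting on the process tree. A minimal sketch, assuming the same restore-product.py and a recent Ansible that supports async on Windows modules:

- name: Restore product without waiting on child processes
  win_command: 'python restore-product.py'
  args:
    chdir: C:\temp
  async: 300    # give the script itself up to 5 minutes
  poll: 0       # fire and forget; do not wait for completion

Note this sidesteps the wait rather than fixing the inherited handles; the script (and Tomcat) simply continue in the background.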

Related

Force stop ansible playbook

Is it possible to stop an ansible playbook?
I am not talking about a failed_when condition or some other Ansible modules, but about actually killing the process!
From within the playbook, you could use:
- assert or fail to fail a task on a specific condition
- any_errors_fatal to force the entire playbook to stop when a task fails
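For example, a minimal sketch (the stop_now variable is hypothetical):

- hosts: all
  any_errors_fatal: true        # a failure on any host aborts the whole play
  tasks:
    - name: Stop the playbook on a specific condition
      fail:
        msg: "Aborting the run: stop condition met"
      when: stop_now | default(false)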
From the outside world (Linux shell), you could use CTRL-C while the playbook is running on your terminal.
Or, from another terminal, you could get the PID of the ansible-playbook command (ps aux | grep ansible-playbook, for example), and kill that PID.

How to run a program on Ubuntu bash (Windows 10) from Windows Task Scheduler

I need to execute a task every 5 minutes in Ubuntu bash, and I would like to use Windows Task Scheduler.
I don't know how to write a .bat file to start an application in Ubuntu bash.
I tested this and it did not work:
c:\Windows\System32\bash.exe -l [program_name args]
You can run a command on Ubuntu bash by adding the -c flag:
c:\Windows\System32\bash.exe -c <command>
Put that line in a .bat file (for example, c:\Windows\System32\bash.exe -c "/home/user/task.sh", with a hypothetical script path) and then add the .bat file to Windows Task Scheduler.

Daemonizing an executable in ansible

I am trying to create a task in Ansible which executes a shell command to run an executable in daemon mode using &, something like the following:
- name: Start daemon
  shell: myexeprogram arg1 arg2 &
What I am seeing is that if I keep the &, the task returns immediately and the process is not started. If I remove the &, the Ansible task waits for quite some time without returning.
I would appreciate suggestions on the proper way to start the program in daemon mode through Ansible. Please note that I don't want to run this as a service, but as an ad hoc background process, based on certain conditions.
Running a program with '&' does not make the program a daemon; it just runs it in the background. To make a "true daemon", your program should follow the steps described here.
If your program is written in C, you can call the daemon() function, which will do it for you. Then you can start your program even without '&' at the end and it will run as a daemon.
The other option is to launch your program using the daemon utility, which should do the job as well.
- name: Start daemon
  shell: daemon -- myexeprogram arg1 arg2
When you (or Ansible) log out, the hangup signal will still be sent to the running process, even though it is running in the background.
You can use nohup to circumvent that.
- name: Start daemon
  shell: nohup myexeprogram arg1 arg2 &
http://en.wikipedia.org/wiki/Nohup
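One caveat not in the original answer: even with nohup, Ansible can appear to hang if the backgrounded command keeps stdout/stderr attached to the SSH session, so redirecting the streams is a common refinement (a sketch of the same task):

- name: Start daemon
  shell: nohup myexeprogram arg1 arg2 >/dev/null 2>&1 &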
From the brief description of what you want to achieve, it sounds like it would be best to set up your executable as a service (using Upstart or similar) and then start/stop it as needed, based on the other conditions that require it to be running (or not).
Trying to run this as a plain process otherwise entails capturing the PID or similar so you can shut down the daemon when you need to, with pretty much the same amount of complexity as installing an init config file, and without the niceties that systems such as Upstart give you with controls like start/stop.
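A rough sketch of that approach, assuming a systemd host and a hypothetical myexeprogram.service unit file shipped with the role (on an Upstart system the same idea applies with a job file under /etc/init):

- name: Install the unit file for myexeprogram
  copy:
    src: myexeprogram.service           # hypothetical unit file
    dest: /etc/systemd/system/myexeprogram.service

- name: Enable and start the service
  service:
    name: myexeprogram
    state: started
    enabled: yes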
I found the best way, particularly because I wanted output to be logged, was to use the "daemonize" package. If you are on CentOS/Red Hat, it looks like the below; there is probably also an apt package for it.
- name: yum install daemonize
  yum:
    name: daemonize
    state: latest

- name: run in background, log errors and stdout to file
  shell: daemonize -e /var/log/myprocess.log -o /var/log/myprocess.log /opt/myscripts/myprocess.sh
Adding to the daemonize suggestions above: if you want to start your program in a specific directory, you can do:
- name: install daemonize package
  package:
    name: daemonize
    state: latest

- name: start program
  command: daemonize -c /folder/to/run/in /path/to/myexeprogram arg1 arg2
Notably, you also probably want the -e and -o flags to log output.

Chef - Run long running script in background

I want to run a simple script in the background. It needs to stay alive for the entire life of the machine.
script "my_script" do
interpreter "ruby"
cwd "/home/my_home"
user "root"
code << -EOH
pid = fork
if pid
Process.detach(pid)
system("ruby the_actual_script.rb > logfile")
end
EOH
But this does not seem to run; it appears to have run and exited immediately. There is a 0-size logfile. I have the cwd folder set to 777 permissions.
I can't figure out what the issue is. I am guessing Chef executes this in a different shell and gets rid of all processes once it exits that shell?
Is there a better way to simply run the script in the background?
What you describe is called a "service". You can place your script in its own file, for example using Chef's cookbook_file resource. Then write an init script for it, for example using Upstart on Ubuntu systems. Once you have an init script, you can use Chef's service resource to make sure the service is enabled to always run, and that it is started during the first Chef run that creates it. Voila!

Running a remote Linux script from Windows and getting the execution result code

I have the following scenario to deal with:
I have to schedule the backup of my company's Linux-based server (under SUSE Linux) with ARCServe R15 (installed on Windows 2003 R2 SP2).
I know I have the ability in my backup software (ARCServe) to add pre/post-execution scripts to my backup jobs.
If the script fails, ARCServe is configured NOT to run the backup job; if it succeeds, the job runs. I have no problem with this.
The problem is, I want to make a Windows script (to be launched by ARCServe) that executes a Linux script on the cluster:
- If the Linux script fails, I want my Windows script to fail, so my backup job in ARCServe wouldn't run
- If the Linux script succeeds, I want my Windows script to end normally with exit code 0, so my ARCServe job would run normally.
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I manually launch this .bat by double-clicking on it, or launching it in a command prompt under Windows, it executes normally and then ends.
If I have it launched by ARCServe, the script seems to never end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, and the batch file doesn't close.
In my mind, what's happening is that plink just opens the connection to the Linux host, sends the script execution command, and then closes the connection, so the exit code can't be returned to the batch file. Am I right?
Is what I want to do possible, or am I attempting something impossible?
So, do I have to proceed differently?
Do I have to use PuTTY or Cygwin instead of plink?
Please help; it's giving me headaches...
If you install Cygwin, you could do it exactly as you would from Linux to Linux, i.e. remotely run a command with:
ssh someuser@remoteserver.com somecommand
This command will return on the calling client with the same exit code that the command exited with on the remote end. If you use SSH shared keys for authentication instead of passwords, it can also be scripted without user interaction.
