Long-running scripts in the Windows command line

I am running a script on the Windows command line that takes multiple hours to finish executing. During this time I have to keep my computer on, or the script stops. I was wondering if there are any tools I can use to keep the script running even if I put my computer to sleep (or shut it down). Thanks!

If a computer is put to sleep or shut down, programs cannot run on it, by definition of those states. Possible workarounds include:
Running the script on a permanently running remote machine (i.e. a server)
Preventing the computer from going to sleep (see the example below)
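On Windows, one way to do the latter is to disable the sleep timeout with powercfg before starting the script. A minimal sketch (0 means "never"; restore your usual values afterwards):
powercfg /change standby-timeout-ac 0
powercfg /change hibernate-timeout-ac 0
The -ac settings apply while on mains power; matching -dc settings exist for battery.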

Related

Is there a way to get my laptop to beep from within a bash script running on a remote server via SSH?

I have a bash script that I have to regularly run on a remote server. Part of the script includes running a backup which takes a while, and after it has run, I have to hit "Y" to confirm that the backup worked before the script will continue.
I would like to know if there is a way to get my laptop to make a beep (or some sort of sound) when that happens. I know that echo -e '\a' makes a beep, but if I run it from within a script on the remote server, the beep happens on the remote server.
I have control of the script that is being run, so I could easily change it to do something special.
You could send the command through SSH back to your own computer, like:
ssh user@host "echo -e '\a'"
Just make sure you have SSH key authentication from your server to your computer so the command can run without prompting for a password.
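For instance, the beep could sit right before the confirmation prompt in the remote script. A minimal sketch, assuming key authentication is in place (me@my-laptop and the backup command are placeholders):
run_backup   # placeholder for the long backup step
ssh me@my-laptop "echo -e '\a'"
read -p "Did the backup work? (Y/n) " answer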
In my case the suggested echo solutions didn't work. I'm using a MacBook and connecting to an Ubuntu system; I keep the terminal open and would like to be notified when a long-running bash script finishes.
What I did notice is that if I shut down the remote system, the MacBook beeps and shows an alarm icon on the relevant tab. So I have implemented a somewhat dirty workaround:
sudo shutdown 1440 && sudo shutdown -c
This schedules a shutdown 1440 minutes out and immediately cancels the request, and I do get the alarm beep + icon. You will need to set up sudo to allow the user to run shutdown. As it was my own remote server this was no problem, but it could limit the usability for others.

Bash: Script output to terminal session stops but script finishes normally

I'm opening an SSH session to a remote server and executing a larger (around 1000 lines) bash script on the remote machine. It involves several very CPU-intensive calls which run for up to three minutes each. To track the script's progress, it echoes messages placed at several points in the script.
In general the script runs smoothly. From time to time the script runs through (the resulting file on the remote machine is correct) but the output to the terminal stops. Ctrl-C doesn't help: no prompt, just a frozen session. top in a separate session shows normal execution of the script.
My question: how do I keep the session alive?
local machine:
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.9
BuildVersion: 13A603
remote machine:
$ lsb_release -d
Description: Ubuntu 12.04.3 LTS
Personally, I would recommend using screen or tmux on the remote terminal for exactly this reason.
Those apps will allow the remote process to continue even if your local SSH session times out.
http://www.bangmoney.org/presentations/screen.html
http://tmux.sourceforge.net/
Start a screen on the remote machine and run your command from it:
screen -S largeScript
And then
./yourLargeScript.sh
Whenever your SSH session freezes, you can kill it with the escape sequence ~. (press Enter, then type a tilde followed by a period).
If you SSH in again, you can reattach to your screen with:
screen -dr largeScript
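If you forget the session name later, screen -ls lists the sessions still running on the machine:
screen -ls
The -dr combination detaches the session from wherever it is currently attached and reattaches it to your terminal.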
Make the script log to a file instead (perhaps via syslog), and tail that file from wherever is convenient for you. This also helps detach the script so you can run it headless, from a cron job, etc. And if the log file is readable by others, they can monitor it too.
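A minimal sketch of that approach (the log path is arbitrary):
nohup ./yourLargeScript.sh >> ~/largeScript.log 2>&1 &
tail -f ~/largeScript.log
nohup keeps the script running after the session drops, and tail -f can be stopped and restarted at any time without touching the script itself.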

Running a Linux script remotely from Windows and getting the exit code

I have the following scenario to deal with:
I have to schedule the backup of my company's Linux-based server (under SUSE Linux) with ARCServe R15 (installed on Windows 2003 R2 SP2).
I know my backup software (ARCServe) can attach pre/post execution scripts to backup jobs.
If the script fails, ARCServe is configured NOT to run the backup job; if it succeeds, the job runs. I have no problem with this part.
The problem is that I need a Windows script (to be launched by ARCServe) that executes a Linux script on the cluster:
- If the Linux script fails, I want my Windows script to fail, so my backup job in ARCServe won't run
- If the Linux script succeeds, I want my Windows script to end normally with exit code 0, so my ARCServe job runs normally.
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I manually launch this .bat by double-clicking it, or by running it in a command prompt under Windows, it executes normally and then ends.
If it is launched by ARCServe, the script never seems to end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, so the batch never closes.
My guess is that plink just opens the connection to the Linux machine, sends the script-execution command, and then closes the connection, so the exit code can't be returned to the batch. Am I right?
Is what I want to do possible, or am I attempting something impossible?
So, do I have to proceed differently?
Do I have to use PuTTY or Cygwin instead of plink?
Please help, this is giving me headaches...
If you install Cygwin, you can do this exactly as you would from Linux to Linux, i.e. remotely run a command with ssh someuser@remoteserver.com somecommand
This command returns on the calling client with the same exit code the command exited with on the remote end. If you use SSH shared keys for authentication instead of passwords, it can also be scripted without user interaction.
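For example, from a Cygwin shell (a sketch reusing the asker's paths; the key file and host name are placeholders):
ssh -i ~/.ssh/pri_key root@cluster_name /appli/admin/backup_admin
echo $?
The echo prints the exit code of the remote script, which is what a wrapper batch file would then hand to ARCServe via %ERRORLEVEL%.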

Run a Perl script while the computer is locked

I have written a Perl script which runs in the background right after a user logs in on a Windows XP machine. The script runs fine while the user is working on the computer. But if he locks the computer, or the screen saver starts (which soon locks the PC as well), the script stops working. How can I keep the script running even while the PC is locked?
Locking a computer will not stop a regular Perl script; you would have to go out of your way to write a script that notices the computer is locked and stops running.
Try it: run this
perl -le " for( 1 .. 20 ) { print scalar gmtime; sleep 1; } "
then lock the computer, wait a few seconds, and unlock it; you'll see it kept running.
Just a thought... you might have to run the Perl script as a Windows service to keep it running (roughly emulating Unix's nohup).
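One way to do that is with a third-party service wrapper such as NSSM; a sketch, with hypothetical paths and service name:
nssm install PerlWorker "C:\Perl\bin\perl.exe" "C:\scripts\worker.pl"
nssm start PerlWorker
A service runs in its own session, so locking the screen or logging out does not affect it.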

Can a standalone Ruby script (Windows and Mac) reload and restart itself?

I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart.
That's where I hit a roadblock. If I were using any sane platform, I could just do:
exec('ruby', __FILE__)
...and be done. However, I did the following test:
p Process.pid
sleep 1
exec('ruby', __FILE__)
...and on Windows I get one Ruby instance for each call to exec. None of them die until I hit ^C in the window in question. On every platform I tried, it executes the new version of the file each time, which I verified by making simple edits to the test script while the test marched along.
The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows I get a different pid with each execution, which I would expect, considering that I see a new process in Task Manager for each run. The Mac behaves correctly: the pid stays the same across runs, and I have verified with dtrace that each run triggers a call to the execve syscall.
So, in short: is there a way to get a Windows Ruby script to restart its execution so that it runs any code - including itself - that has changed during its execution? Please note that this is not a Rails application, though it does use ActiveRecord.
After trying a number of solutions (including the one submitted by Byron Whitlock, which ultimately put me onto the path to a satisfactory end) I settled upon:
IO.popen("start cmd /C ruby.exe #{$0} #{ARGV.join(' ')}")
sleep 5
I found that if I didn't sleep at all after the popen and just exited, the spawn would frequently (>50% of the time) fail. Obviously this is not cross-platform, so to get the same behavior on the Mac:
IO.popen("xterm -e \"ruby blah blah blah\"&")
The classic way to restart a program is to write another one that does it for you. So you spawn a process such as restart.exe <args>, then die or exit; restart.exe waits until the calling script is no longer running, then starts the script again.
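A sketch of that pattern as a shell script (everything here is hypothetical; on Windows the same logic would live in a batch file or a small restart.exe):
#!/usr/bin/env bash
# restart.sh <caller_pid> <command...>
# Wait for the calling process to exit, then relaunch the command.
caller_pid=$1
shift
while kill -0 "$caller_pid" 2>/dev/null; do
  sleep 1
done
exec "$@"
The caller spawns restart.sh with its own pid and command line and then exits; once the pid disappears, the watcher execs a fresh copy of the script.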
