Running a Linux script remotely from Windows and getting the execution result code

Here is the scenario I have to deal with:
I have to schedule the backup of my company's Linux-based server (running SUSE Linux) with ARCServe R15 (installed on Windows 2003 R2 SP2).
I know my backup software (ARCServe) lets me add pre/post-execution scripts to my backup jobs.
ARCServe will be configured NOT to run the backup job if the script fails, and to run it if the script succeeds. I have no problem with this part.
The problem is that I want to write a Windows script (to be launched by ARCServe) that executes a Linux script on the cluster:
- If the Linux script fails, I want my Windows script to fail, so my backup job in ARCServe won't run
- If the Linux script succeeds, I want my Windows script to end normally with exit code 0, so my ARCServe job runs normally.
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I launch this .bat manually, by double-clicking it or running it in a command prompt under Windows, it executes normally and then ends.
If it is launched by ARCServe, the script never seems to end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, so the batch never closes.
My guess is that plink just opens the connection to the Linux machine, sends the command, and then closes the connection, so the exit code can't be returned to the batch file. Am I right?
Is what I want to do possible, or am I attempting something impossible?
So, do I have to proceed differently?
Do I have to use PuTTY or Cygwin instead of plink?
Please help, it's giving me headaches...

If you install Cygwin, you can do it exactly the way you would from Linux to Linux, i.e. remotely run a command with ssh someuser@remoteserver.com somecommand
This command returns on the calling client with the same exit code that the command exited with on the remote end. If you use SSH key authentication instead of passwords, it can also be scripted without user interaction.
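For illustration, a minimal sketch of what the ARCServe pre-execution batch file could look like with Cygwin's ssh; the Cygwin install path and the assumption that key-based authentication is already set up for the account running the job are mine, not from the question:

@echo off
REM Run the remote script over SSH; ssh exits with the remote command's exit code.
"C:\cygwin\bin\ssh.exe" root@[cluster_name] /appli/admin/backup_admin
REM Hand that exit code back to ARCServe: 0 = run the backup job, non-zero = block it.
exit /b %ERRORLEVEL%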

Related

Is there a way to get my laptop to beep from within a bash script running on a remote server via SSH?

I have a bash script that I have to regularly run on a remote server. Part of the script includes running a backup which takes a while, and after it has run, I have to hit "Y" to confirm that the backup worked before the script will continue.
I would like to know if there is a way to get my laptop to make a beep (or some sort of sound) when that happens. I know that echo -e '\a' makes a beep, but if I run it from within a script on the remote server, the beep happens on the remote server.
I have control of the script that is being run, so I could easily change it to do something special.
You could send a command through ssh back to your own computer, like:
ssh user@host "echo -e '\a'"
Just make sure you have SSH key authentication set up from your server to your computer so the command runs without prompting.
In my case the offered solutions with echo didn't work. I'm using a MacBook and connect to an Ubuntu system. I keep the terminal open and I'd like to be informed when a long-running bash script is done.
What I did notice is that if I shut down the remote system, the MacBook beeps and shows an alarm icon on the relevant tab. So I have now implemented a somewhat dirty workaround:
sudo shutdown 1440 && shutdown -c
This initiates a system shutdown and immediately cancels the request, and I do get the alarm beep plus the icon. You will need to configure sudo to allow the user to run shutdown. As it was my own remote server this was no problem, but it could limit the usability for others.

Running Windows commands remotely

I'm trying to run a command remotely.
Here is what I've tried:
wmic /node:"my_server" /user:my_username /password:my_pass process call create "cmd.exe \c dir C:>C:\temp\x.txt"
I can see the process ID returned, and I can see a terminal running on the remote machine with that process ID, but the process is just stuck and the output file x.txt is never generated.
Any idea how to make it work?
Any idea why the process is running but not doing anything?
My goal is to get the output back, so writing to a file is not strictly necessary.
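For reference, the switch inside the quoted command line is presumably meant to be /c rather than \c (with \c, cmd.exe apparently starts an interactive session instead of running dir and exiting, which would match the stuck process). A sketch of that variant, with everything else left as in the question and untested here:

REM Hypothetical corrected call: /c makes cmd.exe run the command and then exit
wmic /node:"my_server" /user:my_username /password:my_pass process call create "cmd.exe /c dir C: > C:\temp\x.txt"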

PsExec failing when running multiple commands in sequence

Using Windows Task Scheduler I am running multiple commands; I'll call them task1.bat, task2.bat, and task3.bat. Each of these scripts runs a different PsExec command (PsExec version 2.11).
When run individually, task1.bat, task2.bat, and task3.bat all run successfully; however, when run in succession, task1.bat will run successfully, then task2.bat and task3.bat will usually fail with the error "Couldn't access servername. Access is denied. The syntax of the command is incorrect".
It seems like a problem with PsExec, since the commands work fine when run individually. Is there a way to force PsExec to exit before moving on to the next script (besides just putting in a timeout)? It seems like PsExec is hung, which causes the next one to fail.
The .bat scripts will run sequentially if you create and run a batch file like this:
CALL task1.bat
CALL task2.bat
CALL task3.bat
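If you also want to stop as soon as one task fails, a hedged extension of the same idea (assuming each taskN.bat ends with the exit code of its psexec call):

@echo off
REM CALL waits for each batch file to finish before moving on to the next.
CALL task1.bat
IF ERRORLEVEL 1 EXIT /B %ERRORLEVEL%
CALL task2.bat
IF ERRORLEVEL 1 EXIT /B %ERRORLEVEL%
CALL task3.bat
EXIT /B %ERRORLEVEL%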

Long running scripts in Windows command line

I am running a script on the Windows command line that takes multiple hours to finish. During this time I have to keep my computer on, or the script stops. I was wondering if there are any tools I can use to keep the script running even if I put my computer to sleep (or shut it down). Thanks!
If the computer is put to sleep or shut down, programs cannot run on it, by definition of those states. Possible workarounds might include:
Running the script on a permanently running remote machine (i.e. a server)
Preventing the computer from going to sleep (see the sketch below)
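As an illustration of the second option, the sleep timeout can be disabled from the command line before the long job starts and restored afterwards; the concrete timeout values are assumptions, adjust them to the machine's normal settings:

REM Never sleep while on AC power (0 = disabled)
powercfg /change standby-timeout-ac 0
REM ... run the long script here ...
REM Restore a 30-minute sleep timeout afterwards
powercfg /change standby-timeout-ac 30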

Starting a Linux PSOCK cluster from a Windows machine hangs R

I'm trying to setup a cluster on a Linux box using the parallel package. A wart is that the machine I'm using as the master is running Windows as opposed to CentOS.
After some hacking around with puttygen and plink (PuTTY's version of ssh), I got a command string that manages to execute Rscript on a slave without needing a password:
plink -i d:/hong/documents/gpadmin.ppk -l gpadmin 192.168.224.128 Rscript
where gpadmin.ppk is a private key file generated using puttygen, and copied to the slave.
I translated this into a makeCluster call, as follows:
cl <- makeCluster("192.168.224.128",
                  user="gpadmin",
                  rshcmd="plink -i d:/hong/documents/gpadmin.ppk",
                  master="192.168.224.1",
                  rscript="Rscript")
but when I try to run this, R (on Windows) hangs. Well, it doesn't hang as in crash; it just doesn't do anything until I press Escape.
However, I can laboriously get the cluster running by adding manual=TRUE to the end of the call:
cl <- makeCluster("192.168.224.128",
                  user="gpadmin",
                  rshcmd="plink -i d:/hong/documents/gpadmin.ppk",
                  master="192.168.224.1",
                  rscript="Rscript",
                  manual=TRUE)
I then log into the slave using the above plink command and, at the resulting bash prompt, run the string that R displayed. This suggests that the string itself is fine, but that makeCluster is getting confused trying to run it by itself.
Can anyone help diagnose what's going on, and how to fix it? I'd rather not have to start the cluster by manually logging into 16+ nodes every time.
I'm running R 3.0.2 on Windows 7 on the master, and R 3.0.0 on CentOS on the slave.
Your method of creating the cluster seems correct. Using your instructions, I was able to start a PSOCK cluster on a Linux machine from a Windows machine.
My first thought was that it was a quoting problem, but that doesn't seem to be the case since the Rscript command worked for you in manual mode. My second thought was that your environment is not correctly initialized when running non-interactively. For instance, you'd have a problem if Rscript was only in your PATH when running interactively, but again, that doesn't seem to be the case, since you were able to execute Rscript via plink. Have you checked if you have anything in ~/.Rprofile that only works interactively? You might want to temporarily remove any ~/.Rprofile on the Linux machine to see if that helps.
You should use outfile="" in case the worker issues any error or warning messages. You should run "ps" on the Linux machine while makeCluster is hanging to see if the worker has exited or is hanging. If it is running, then that suggests a networking problem that only happens when running non-interactively, strange as that seems.
Some additional comments:
Use Rterm.exe on the master so you see any worker output when using outfile="".
I recommend using "Pageant" so that you don't need to use an unencrypted private key; that's safer and avoids the need for plink's "-i" option (see the sketch after these comments).
It's a good idea to use the same version of R on the master and workers.
If you're desperate, you could write a wrapper script for Rscript on the Linux machine that executes Rscript via strace. That would tell you what system calls were executed when the worker either exited or hung.
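As an illustration of the Pageant suggestion, the key can be loaded into Pageant once, after which plink authenticates through it automatically and the "-i" option is no longer needed; the Pageant path is an assumption, the key path is the one from the question:

REM Load the key into Pageant (it prompts for the passphrase if the key is encrypted)
start "" "C:\Program Files\PuTTY\pageant.exe" d:\hong\documents\gpadmin.ppk
REM plink now picks up the key from Pageant, so no -i is required
plink -l gpadmin 192.168.224.128 Rscript --version

With that in place, rshcmd="plink" (without the -i part) should be enough in the makeCluster call.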
