How to start WSL cron jobs on boot?

I have some scripts that must be run under WSL, and must be run all the time. Currently, my Windows 11 randomly decides when it thinks it's convenient to reboot and install some updates. Is there a way to start WSL cron jobs automatically with Windows?
If this sounds like an XY Problem, I'd be more than happy to elaborate further.
I set up the cron job via crontab -e (and also with sudo). I expected it to behave as it would on a regular Linux distro, but it does nothing until I run "sudo service cron start" and keep at least one WSL window open.
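One approach, assuming a recent WSL release on Windows 11 that honors the [boot] section of wsl.conf, is to have the distro start the cron service itself every time it comes up:

```ini
# /etc/wsl.conf inside the distro -- a sketch, assuming a WSL version
# that supports the [boot] section (Windows 11 / recent WSL releases)
[boot]
command = "service cron start"
```

Note that WSL itself only starts when something launches it, so to cover the Windows-boot side you would still pair this with, for example, a Task Scheduler entry that runs wsl.exe at logon.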

Related

Running python script in parallel with Ansible

I am managing 6 machines, or more, on AWS with Ansible. Those machines must run a Python script that runs forever (the script has a while True loop).
I call the Python script via command: python3 script.py
But only 5 machines run the script; the others don't. I can't figure out what I am doing wrong.
(Before the script call everything works fine for all machines like echo, ping, etc)
I already found the answer.
Ansible's forks setting restricts execution to 5 machines by default. You must add a larger forks value to the configuration file, but the machine running Ansible must have enough resources to manage that.
I'll leave the question up because the answer was pretty hard for me to find.
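For reference, the fix described above is a one-line change in ansible.cfg (the value 20 here is arbitrary; pick something at least as large as your host count):

```ini
# ansible.cfg -- "forks" defaults to 5, which is why only 5 machines ran the script
[defaults]
forks = 20
```

The same setting can also be passed per run with ansible-playbook -f 20.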

Programmatically schedule script execution with launchd or crontab

I know how to create a configuration to schedule, e.g., a daily execution of a script with launchd or crontab on macOS. However, I ran into a scenario where I need to schedule the one-time execution of a script as part of a(nother) Ruby script.
The hacky solution would be to manually write a plist file and then run launchctl load; however, that requires sudo privileges.
Is there a better way of programmatically scheduling the one-time execution of a script on macOS?
I would use the at command. I haven't used it on macOS, but I would bet you can do brew install at, and then you can run the at command to schedule a job at a specific time.
echo script.sh | at noon tomorrow
https://linux.die.net/man/1/at
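One caveat worth adding, as an assumption about a stock macOS install: the atrun daemon that actually executes at jobs ships disabled, so jobs queue but never run until it is turned on. This is a one-time system-configuration step:

```shell
# macOS sketch: at(1) is present, but its runner daemon is disabled by default.
# Enabling it requires sudo once; queuing jobs afterwards does not.
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.atrun.plist
echo script.sh | at noon tomorrow   # queue the one-time job
```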

Long running scripts in Windows command line

I am running a script on the Windows command line that takes multiple hours to finish executing. During this time I have to keep my computer awake, or the script stops. I was wondering whether there are any tools I can use to keep the script running even if I put my computer to sleep (or shut it down). Thanks!
If a computer is put to sleep or shut down, programs cannot run on it; that is the definition of those states. Possible workarounds include:
Running the script on a permanently running remote machine (i.e. a server)
Preventing the computer from going to sleep
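The first workaround can be sketched with nohup on the remote server: the job is detached from the terminal, so disconnecting (or letting the local PC sleep) no longer kills it. Here long_job.sh is only a stand-in for the real multi-hour script:

```shell
#!/bin/sh
# Create a placeholder for the real long-running script (illustration only).
printf '#!/bin/sh\necho working\n' > long_job.sh
chmod +x long_job.sh

# Detach the job from the terminal; output goes to job.log instead of the console.
nohup ./long_job.sh > job.log 2>&1 &
wait $!            # in real use you would simply log out here
cat job.log
```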

Starting a Linux PSOCK cluster from a Windows machine hangs R

I'm trying to setup a cluster on a Linux box using the parallel package. A wart is that the machine I'm using as the master is running Windows as opposed to CentOS.
After some hacking around with puttygen and plink (PuTTY's version of ssh), I got a command string that manages to execute Rscript on a slave without needing a password:
plink -i d:/hong/documents/gpadmin.ppk -l gpadmin 192.168.224.128 Rscript
where gpadmin.ppk is a private key file generated using puttygen, and copied to the slave.
I translated this into a makeCluster call, as follows:
cl <- makeCluster("192.168.224.128",
user="gpadmin",
rshcmd="plink -i d:/hong/documents/gpadmin.ppk",
master="192.168.224.1",
rscript="Rscript")
but when I try to run this, R (on Windows) hangs. Well, it doesn't hang as in crashing, but it doesn't do anything until I press Escape.
However, I can laboriously get the cluster running by adding manual=TRUE to the end of the call:
cl <- makeCluster("192.168.224.128",
user="gpadmin",
rshcmd="plink -i d:/hong/documents/gpadmin.ppk",
master="192.168.224.1",
rscript="Rscript",
manual=TRUE)
I then log into the slave using the above plink command and, at the resulting bash prompt, run the string that R displayed. This suggests that the string is fine, but that makeCluster is getting confused when trying to run it by itself.
Can anyone help diagnose what's going on, and how to fix it? I'd rather not have to start the cluster by manually logging into 16+ nodes every time.
I'm running R 3.0.2 on Windows 7 on the master, and R 3.0.0 on CentOS on the slave.
Your method of creating the cluster seems correct. Using your instructions, I was able to start a PSOCK cluster on a Linux machine from a Windows machine.
My first thought was that it was a quoting problem, but that doesn't seem to be the case since the Rscript command worked for you in manual mode. My second thought was that your environment is not correctly initialized when running non-interactively. For instance, you'd have a problem if Rscript was only in your PATH when running interactively, but again, that doesn't seem to be the case, since you were able to execute Rscript via plink. Have you checked if you have anything in ~/.Rprofile that only works interactively? You might want to temporarily remove any ~/.Rprofile on the Linux machine to see if that helps.
You should use outfile="" in case the worker issues any error or warning messages. You should run "ps" on the Linux machine while makeCluster is hanging to see if the worker has exited or is hanging. If it is running, then that suggests a networking problem that only happens when running non-interactively, strange as that seems.
Some additional comments:
Use Rterm.exe on the master so you see any worker output when using outfile="".
I recommend using "Pageant" so that you don't need to use an unencrypted private key. That's safer and avoids the need for the plink "-i" option.
It's a good idea to use the same version of R on the master and workers.
If you're desperate, you could write a wrapper script for Rscript on the Linux machine that executes Rscript via strace. That would tell you what system calls were executed when the worker either exited or hung.
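That last suggestion might look like the following (the paths are assumptions; the wrapper would go ahead of the real Rscript in the worker's non-interactive PATH):

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/Rscript wrapper: trace every system call the
# worker makes, then hand off to the real binary (path assumed to be
# /usr/bin/Rscript). Inspect /tmp/Rscript-trace.log after the hang.
exec strace -f -o /tmp/Rscript-trace.log /usr/bin/Rscript "$@"
```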

How can I run two commands at exactly the same time on two different Unix servers?

My requirement is that I have to reboot two servers at the same time (exactly the same timestamp). My plan is to create two shell scripts that will ssh to the servers and trigger the reboot. My doubts are:
How can I run the same shell script on two servers at the same time (same timestamp)?
Even if I run Script1 & Script2, this will not ensure that the reboot is issued at the same time; a minor time difference will remain.
If you are doing it remotely, you could use a terminal emulator with broadcast input, so that what you type is sent to all open sessions. On Linux, tmux is one such tool.
The other easy way is to write a shell script that waits for the same timestamp on both machines and then reboots each of them.
First, make sure both machines' clocks are aligned (use a good implementation of http://en.wikipedia.org/wiki/Network_Time_Protocol and your system's related utilities).
Then,
If you need this just one time: on each server, do a
echo /path/to/your/script | at ....
(.... being when you want it. See man at.)
If you need to do it several times: use crontab instead of at
(see man cron and man crontab).
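The at/cron approach schedules with one-minute granularity; if "exactly the same timestamp" matters down to the second, a small sketch like this (clocks already NTP-synced, as above) can sleep both servers up to a shared epoch second before acting:

```shell
#!/bin/sh
# wait_until: sleep until the given Unix epoch second, then return.
# Run the same script with the same target value on both servers.
wait_until() {
    target=$1
    now=$(date +%s)
    if [ "$now" -lt "$target" ]; then
        sleep $((target - now))
    fi
}

# Example: act two seconds from now ("/sbin/reboot" would replace the echo).
wait_until $(( $(date +%s) + 2 ))
echo "time reached"
```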
