Bash - create multiple virtual guests in one loop

I'm working on a bash script (I just started learning bash) that involves creating virtual guests on a remote server. I do this by SSH'ing from server A to server B and executing two different commands:
# create the images
ssh -n john@serverB.net "fallocate -l ${imgsize} /home/john/images/${imgname}"
and
# create the virtual machine
ssh -n john@serverB.net "virt-install --bunch of options"
These two commands may have to be executed twice in a loop (if two virtual guests need to be created). When the second command runs for the second time, I sometimes get this error:
Domain installation still in progress.
This means I have to wait until the previous virtual guest has finished installing. How can I do these operations in one loop? Can I run them asynchronously? Can I use threads? Or is there another way?
I have heard about the 'wait' command, but is that safe to use?

Check the man page for virt-install. You can use --wait=0 or --noautoconsole.
--wait=WAIT Amount of time to wait (in minutes) for a VM to complete its install. Without this option, virt-install will wait for the console to close (not necessarily indicating the guest has shut down), or in the case of --noautoconsole, simply kick off the install and exit. Any negative value will make virt-install wait indefinitely; a value of 0 triggers the same results as --noautoconsole. If the time limit is exceeded, virt-install simply exits, leaving the virtual machine in its current state.
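With --noautoconsole (or --wait=0), each virt-install call returns as soon as the install is kicked off, so the loop no longer blocks on the previous guest. A minimal sketch of such a loop, assuming hypothetical image names and sizes (substitute your real options for the --bunch of options placeholder from the question):

for i in 1 2; do
    imgname="guest${i}.img"    # hypothetical naming scheme
    imgsize="10G"              # hypothetical size
    # create the image on serverB
    ssh -n john@serverB.net "fallocate -l ${imgsize} /home/john/images/${imgname}"
    # kick off the install and return immediately instead of waiting for the console
    ssh -n john@serverB.net "virt-install --noautoconsole --bunch of options"
done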

Related

Process gets terminated when I call system() with wait=FALSE

I'm trying to process videos on OpenCPU, and because they are very big I want to call the FFmpeg process using system() and allow it to keep working until it's done.
But I need to get the temporary file directory created by OpenCPU so I can poll that directory until the video conversion is done.
In order to do that I call the system function with the parameter wait=FALSE, as shown below.
This works fine if I use library(opencpu) on my machine, but when I move it into the production environment (Ubuntu 14.x) the system call gets interrupted just after starting.
Is this something that can be fixed in the OpenCPU configuration? Or is it a bug?
ffmpeg_exe <- "/usr/bin/ffmpeg"  # Linux path
# run the conversion in a subshell, logging output, then drop a marker file when done
exec_convert <- paste0("( ", ffmpeg_exe, " -i ", input_file, " ", convert_command, " ", output_file,
                       " 2> PROCESS_OUTPUT.txt ; ls > PROCESS_DONE.txt )")
system(exec_convert, wait = FALSE)
I just found out that on Linux, OpenCPU does not allow this behavior: it kills all child processes when the request returns. This is on purpose; OpenCPU doesn't want orphan processes potentially running forever on the server, and it is not designed for that use case.

ntpd -qg: Use with timeout

Working on a Raspberry Pi 3.
Situation: only one server is given in /etc/ntp.conf, and that address is invalid (no NTP server running at that address).
Problem: running ntpd -qg never ends, since there is no timeout option like ntpdate -t 60.
Question: Can one specify a timeout for ntpd? If not, how can you ensure the process ends after time x?
For now, on startup the Pi executes a bash script that tries to get the current time from the NTP server given in /etc/ntp.conf, and then hangs, since no NTP server is available at that address. So the process runs from startup, and I can't call another ntpd until the initial ntpd process is killed.
Any workaround?
PS: I would prefer not to use ntpdate, since it is tagged as a retiring package.
EDIT:
The RPi3 is located on an isolated network. Online NTP servers are not an option in my case.
There is a timeout command, usually shipped with coreutils, that lets you set a timeout on any command (even if it does not support one on its own). E.g.
timeout 60 ntpd -qg
This runs ntpd -qg and times it out after 60 seconds. If the command finishes in time, you get its return value; if the timeout intervenes, you get 124.
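In a startup script you can branch on that exit status; a minimal sketch:

timeout 60 ntpd -qg
status=$?
if [ "$status" -eq 124 ]; then
    echo "ntpd timed out after 60s; no NTP server reachable" >&2
else
    echo "ntpd finished with status $status"
fi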

Long running scripts in Windows command line

I am running a script on the Windows command line that takes multiple hours to finish executing. During this time I have to keep my computer on, or the script stops. I was wondering whether there are any tools I can use to keep the script running even if I put my computer to sleep (or shut it down). Thanks!
If a computer is put to sleep or shut down, programs cannot run on it, by definition of those states. Possible workarounds include:
Running the script on a permanently running remote machine, i.e. a server (see the sketch below)
Preventing the computer from going to sleep
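For the first workaround, one common pattern is to detach the job from your login session so it keeps running after you disconnect; a minimal sketch, assuming a hypothetical Linux host server.example.com and a hypothetical script job.sh:

ssh user@server.example.com 'nohup ./job.sh > job.log 2>&1 &'   # survives logout and your laptop sleeping
ssh user@server.example.com 'tail job.log'                      # check on its progress later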

How can I run two commands at exactly the same time on two different Unix servers?

My requirement is to reboot two servers at the same time (exactly the same timestamp). My plan is to create two shell scripts that will ssh to the servers and trigger the reboot. My doubts:
How can I run the same shell script on two servers at the same time (same timestamp)?
Even if I run Script1 & Script2, that will not ensure the reboots are issued at the same time; there will be a minor time difference.
If you are doing it remotely, you could use a terminal emulator with broadcast input, so that what you type is sent to all open sessions. On Linux, tmux is one such emulator.
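With tmux, for example, you can enable its synchronize-panes option so that one typed command goes to both servers at once; a minimal sketch, with serverA and serverB as placeholder hostnames:

tmux new-session -d -s reboot              # start a detached session
tmux split-window -t reboot                # add a second pane
tmux send-keys -t reboot.0 'ssh root@serverA' C-m
tmux send-keys -t reboot.1 'ssh root@serverB' C-m
tmux set-window-option -t reboot synchronize-panes on
tmux attach -t reboot                      # anything typed now, e.g. reboot, goes to both panes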
The other easy way is to write a shell script that waits for the same timestamp on both machines and then reboots both.
First, make sure both machines' clocks are aligned (use the best implementation of http://en.wikipedia.org/wiki/Network_Time_Protocol and your system's related utilities).
Then,
If you need this just one time: on each server do a
echo /path/to/your/script | at ....
(.... being when you want it to run; see man at).
If you need to do it several times: use crontab instead of at (see man cron and man crontab).
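Putting the two together, a minimal sketch that schedules the same reboot time on both hosts over SSH (the hostnames and the 03:00 timestamp are placeholders):

for host in serverA serverB; do
    ssh root@"$host" 'echo /sbin/reboot | at 03:00'
done

Since both clocks are NTP-synced, the two at jobs fire within the same minute on both machines (at has one-minute granularity).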

Can a standalone Ruby script (Windows and Mac) reload and restart itself?

I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote-console into each machine to kill the worker, do a source-control sync, and restart. I would like the master to be able to put a message out on the network that tells each machine to sync and restart.
That's where I hit a roadblock. If I were using any sane platform, I could just do:
exec('ruby', __FILE__)
...and be done. However, I did the following test:
p Process.pid
sleep 1
exec('ruby', __FILE__)
...and on Windows, I get one Ruby instance for each call to exec. None of them die until I hit ^C in the window in question. On every platform I tried, it executes the new version of the file each time, which I verified by making simple edits to the test script while the test marched along.
The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I get a different pid with each execution, which I would expect, given that I see a new process in the task manager for each run. The Mac behaves correctly: the pid is the same for every system call, and I have verified with dtrace that each run triggers a call to the execve syscall.
So, in short: is there a way to get a Windows Ruby script to restart its own execution so that it runs any code, including itself, that has changed while it was running? Please note that this is not a Rails application, though it does use ActiveRecord.
After trying a number of solutions (including the one submitted by Byron Whitlock, which ultimately put me on the path to a satisfactory end), I settled upon:
IO.popen("start cmd /C ruby.exe #{$0} #{ARGV.join(' ')}")
sleep 5
I found that if I didn't sleep at all after the popen and just exited, the spawn would frequently (>50% of the time) fail. This is obviously not cross-platform, so to get the same behavior on the Mac:
IO.popen("xterm -e \"ruby blah blah blah\"&")
The classic way to restart a program is to write another one that does it for you: spawn a process running restart.exe <args>, then die or exit; restart.exe waits until the calling script is no longer running, then starts the script again.
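A minimal sketch of that watcher pattern as a shell script for the Mac side (the Windows restart.exe would do the equivalent with the process APIs; the script and command names are placeholders):

#!/bin/sh
# restart.sh <pid> <command...>: wait for <pid> to exit, then run the command
pid=$1; shift
while kill -0 "$pid" 2>/dev/null; do    # kill -0 only tests whether the pid is still alive
    sleep 1
done
exec "$@"

The dying script would launch it in the background (./restart.sh $$ ruby myscript.rb &) and then exit.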
