How can I run two commands at exactly the same time on two different Unix servers? - shell

My requirement is that I have to reboot two servers at the same time (exactly the same timestamp). So my plan is to create two shell scripts that will ssh to the servers and trigger the reboot. My doubt is:
How can I run the same shell script on two servers at the same time (same timestamp)?
Even if I run Script1 & Script2, this will not ensure that the reboots are issued at the same time; there will be a minor time difference.

If you are doing it remotely, you could use a terminal emulator with broadcast input, so that what you type is sent to all open sessions at once. On Linux, tmux is one such emulator.
The other easy way is to write a shell script on each machine that waits for the same timestamp and then reboots, as sketched below.
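For example (a minimal sketch; the target epoch is a placeholder, and it assumes both machines' clocks are already synchronized):
#!/bin/sh
# Wait until an agreed-upon epoch second, then reboot.
# TARGET is a placeholder; use the same value on both machines.
TARGET=1700000000
while [ "$(date +%s)" -lt "$TARGET" ]; do
    sleep 1   # coarse granularity; GNU sleep also accepts fractions such as 0.1
done
/sbin/reboot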

First, make sure both machines' clocks are aligned (use the best implementation of http://en.wikipedia.org/wiki/Network_Time_Protocol and your system's related utilities).
Then,
If you need this just one time: on each server, do a
echo /path/to/your/script | at ....
(.... being when you want it. See man at).
If you need to do it several times: use crontab instead of at
(see man cron and man crontab)
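For the one-shot case, the scheduling can even be driven from a single machine (a sketch; the host names and time are placeholders, and passwordless ssh is assumed):
# queue the same reboot time on both servers via at
for host in server1 server2; do
    ssh "$host" 'echo /sbin/reboot | at 02:30'
done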

Related

How to send input to a console/CLI program running on remote host using bash?

I have a script that I normally launch using the following syntax:
ssh -Yq user@host "xterm -e '. /home/user/bin/prog1 $arg1;prog2'"
(note: I've removed some of the complexities of the command, so please excuse any syntax errors in the ssh command; they should not be relevant to the question)
This launches an xterm window that runs prog1, and after completion runs prog2. prog2 is a console-style program that performs some setup, then several seconds later waits for user input.
Is there a way via bash script (preferably without downloading external packages) that I can send data to prog2 that's running on $host?
I've looked into << (here-documents) and expect, but they're way over my head. My intuition is that there's probably a straightforward way of doing this, but I can't figure out what terms to search for. I also understand that I can remotely send keystrokes to a host using xdotool or something similar, but I'm hesitant to request a new package installation unless I know that's the only reasonable solution.
Thanks!
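(Not an answer from the original thread, but a minimal sketch of the here-document idea, ignoring the xterm wrapper: ssh forwards its own stdin to the remote command, so this works only if prog2 reads from stdin rather than from the terminal device.)
ssh -q user@host ". /home/user/bin/prog1 $arg1; prog2" <<'EOF'
first input line for prog2
second input line for prog2
EOF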

Bash - create multiple virtual guests in one loop

I'm working on a bash script (I just started learning bash) that involves creating virtual guests on a remote server. I do this by SSH'ing from server A to B and executing two different commands:
# create the images
$(ssh -n john@serverB.net "fallocate -l ${imgsize} /home/john/images/${imgname}")
and
# create the virtual machine
$(ssh -n john@serverB.net virt-install --bunch of options)
It is possible that these sets of commands have to be executed twice (if there need to be 2 virtual guests created) in a loop. When the second command is being run for the second time I sometimes get this error:
Domain installation still in progress.
This means I have to wait until the previous virtual guest is completed. How would I be able to do these operations in one loop? Can I run them asynchronously? Can I use threads? Or is there another way?
I have heard about the 'wait' command, but is that safe to use?
Check the man page for virt-install. You can use --wait=0 or --noautoconsole.
--wait=WAIT
Amount of time to wait (in minutes) for a VM to complete its install. Without this option, virt-install will wait for the console to close (not necessarily indicating the guest has shut down), or, in the case of --noautoconsole, simply kick off the install and exit. Any negative value will make virt-install wait indefinitely; a value of 0 triggers the same result as --noautoconsole. If the time limit is exceeded, virt-install simply exits, leaving the virtual machine in its current state.
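With that flag the loop no longer blocks on each install (a sketch; the guest names are placeholders, and the trimmed virt-install options are left elided as in the question):
# create the image and kick off each install without waiting on the console
for i in 1 2; do
    imgname="guest${i}.img"
    ssh -n john@serverB.net "fallocate -l ${imgsize} /home/john/images/${imgname}"
    ssh -n john@serverB.net "virt-install --noautoconsole --bunch of options"
done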

Chain dependent bash commands

I'm trying to chain together two commands:
The first in which I start up postgres
The second in which I run a command meant for postgres(a benchmark, in this case)
As far as I know, the '||', ';', and '&/&&' operators all require that the first command terminate or exit somehow. This isn't the case with a server that's been started, so I'm not sure how to proceed. I can't run the two completely in parallel, as the server has to be started.
Thanks for the help!
I would recommend something along the lines of the following in a single bash script:
Start the Postgres server via a command like /etc/init.d/postgresql start or similar
Sleep for a period of time to give the server time to startup; perhaps a minute or two
Then run a psql command that connects to the server to test its up-ness
Tie that command together with your benchmark via &&, so the benchmark runs only if the psql command completes successfully (depending on the exact return codes from psql, you may need to inspect the output from the command instead of the return code). The command run via psql would best be a simple query that connects to the server and returns a simple value that can be cross-checked. A sketch of the whole chain follows.
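For example (a minimal sketch; the init script path, user, and benchmark command are placeholders):
#!/bin/bash
# start the server, give it time to come up, then benchmark only if it answers
/etc/init.d/postgresql start
sleep 60
psql -U postgres -c 'SELECT 1;' && ./run_benchmark.sh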
Edit in response to comment from OP:
It depends on what you want to benchmark. If you just want to benchmark a command after the server has started, and don't want to restart the server every time, then I would tweak the code to run the psql up-ness test in a separate block, starting the server if not up, and then afterward, run the benchmark test command unconditionally.
If you do want to start the server up fresh each time (to test cold-start performance, or similar), then I would add another command after the benchmarked command to shutdown the server, and then sleep, re-running the test command to check for up-ness (where this time no up-ness is expected).
In either case you should be able to run the script multiple times.
A slight aside: if your test is destructive (that is, it writes to the DB), you may want to consider dumping a "clean" copy of the DB, that is, the DB in its pre-test state, and then with each run of the script dropping the test DB and re-creating it from that dump, under a different name from the original.

How to ssh into a shell and run a script and leave myself at the prompt

I am using Elastic MapReduce from Amazon. I am sshing into the Hadoop master node and executing a script like:
$EMR_BIN/elastic-mapreduce --jobflow $JOBFLOW --ssh < hivescript.sh
It sshes me into the master node and runs the hive script. The hivescript contains the following lines:
hive
add jar joda-time-1.6.jar;
add jar EmrHiveUtils-1.2.jar;
and some commands to create hive tables. The script runs fine and creates the hive tables and everything else, but then comes back to the prompt from which I ran the script. How do I leave it sshed into the Hadoop master node, at the hive prompt?
Consider using Expect; then you could do something along these lines and interact at the end:
/usr/bin/expect <<EOF
spawn ssh ... YourHost
expect "password"
send "password\n"
send javastuff
interact
EOF
These are the most common answers I've seen (with the drawbacks I ran into with them):
1. Use expect.
This is probably the most well-rounded solution for most people, but I cannot control whether expect is installed in my target environments. Just to try this out anyway, I put together a simple expect script to ssh to a remote machine, send a simple command, and turn control over to the user. There was a long delay before the prompt showed up, and after fiddling with it with little success I decided to move on for the time being. Eventually I came back to this as the final solution, after realizing I had violated one of the 3 virtues of a good programmer: false impatience.
2. Use screen/tmux to start the shell, then inject commands from an external process.
This works OK, but if the terminal window dies it leaves a screen/tmux instance hanging around. I could certainly try to come up with a way to re-attach to prior instances or kill them; screen (and probably tmux) can be made to die instead of auto-detaching, but I didn't fiddle with it.
3. If using gnome-terminal, use its -x or --command flag (I'm guessing xterm and others have similar options).
I'll go into more detail on the problems I had with this under #4.
4. Make a bash script with #!/bin/bash --init-file as the shebang; this will cause your script to execute, then leave an interactive shell running afterward (see the sketch after this list).
This and #3 had issues with some programs that require user interaction before the shell is presented. Some programs (like ssh) worked fine; others (telnet, vxsim) presented a prompt, but no text was passed along to the program, only ctrl characters like ^C.
5. Do something like this: xterm -e 'commands; here; exec bash'. This will cause it to create an interactive shell after your commands execute.
This is fine as long as the user doesn't attempt to interrupt with ^C before the last command executes.
6. Currently, the only thing I've found that gives me the behavior I need is to use cmdtool from the OpenWin project:
/usr/openwin/bin/cmdtool -I 'commands; here'
# or
/usr/openwin/bin/cmdtool -I 'commands; here' /bin/bash --norc
The resulting terminal injects the list of commands passed with -I into the program executed (no arguments means the default shell), so those commands show up in that shell's history.
What I don't like is that the terminal cmdtool provides feels so clunky... but alas.
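A minimal sketch of approach #4 (the filename and setup commands are placeholders):
#!/bin/bash --init-file
# run-then-stay.sh: everything below runs first, then bash stays
# interactive with the resulting environment
echo "setup done"
cd /tmp
Run it with a forced pseudo-terminal so the trailing shell is usable:
ssh -t user@host ./run-then-stay.sh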

Bash: Script output to terminal session stops but script finishes normally

I'm opening an ssh session to a remote server and executing a larger (around 1000 lines) bash script on the remote machine. It involves several very CPU-intensive calls which run for up to three minutes each. To track the script's progress it echoes messages placed at several points in the script.
In general the script runs smoothly. From time to time the script runs through (the resulting file on the remote machine is correct) but the output to the terminal stops. Ctrl-C doesn't help: no prompt, just a frozen session. top in a separate session shows normal execution of the script.
My question: how do I keep the session alive?
local machine:
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.9
BuildVersion: 13A603
remote machine:
$ lsb_release -d
Description: Ubuntu 12.04.3 LTS
Personally, I would recommend using screen or tmux on the remote terminal for exactly this reason.
Those apps will allow the remote process to continue even if your local SSH session times out.
http://www.bangmoney.org/presentations/screen.html
http://tmux.sourceforge.net/
Start a screen on the remote machine and run your command from it:
screen -S largeScript
And then
./yourLargeScript.sh
Whenever your ssh session gets frozen, you can kill it with the escape sequence ~. (tilde followed by a dot).
If you ssh again, you can grab back your screen by:
screen -dr largeScript
Make the script log to a file instead (perhaps via syslog), and tail that file from wherever is convenient for you. This also detaches the script from your session, so you can run it headless, from a cron job, etc. And if the log file is readable by others, they too can monitor it.
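For example (a sketch; the host, script name, and log path are placeholders):
# run the script headless on the remote machine, logging to a file
ssh user@host 'nohup ./bigscript.sh > /tmp/bigscript.log 2>&1 &'
# follow its progress from any session, reconnecting as often as you like
ssh user@host tail -f /tmp/bigscript.log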
