How to ignore the reboot prompt in a shell script - shell

I am trying to create a shell script with the following commands.
#!/bin/bash
ipa-client-install --uninstall
/usr/local/sbin/new-clone.sh -i aws -s aws-dev
My problem is that the ipa-client-install --uninstall command prompts for a reboot at the end with the default value being no.
Here is the output.
Client uninstall complete. The original nsswitch.conf configuration
has been restored. You may need to restart services or reboot the
machine. Do you want to reboot the machine? [no]:
How can I suppress the reboot dialog and just accept the default "no"?
How can I check to see if ipa-client-install is installed before attempting to remove it?
I am new to shell scripting, so I am struggling a bit :-)
Please be safe.

You can pipe the answer into the command to take care of the prompt, and rpm -q will tell you whether the package is installed.
Your final script would look something like this:
#!/bin/bash
# "ipa-client" is the package that provides ipa-client-install on RHEL/CentOS;
# adjust the name if your distribution packages it differently.
if rpm -q ipa-client
then
    echo no | ipa-client-install --uninstall
else
    echo "Package not found"
fi
/usr/local/sbin/new-clone.sh -i aws -s aws-dev
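Alternatively, recent versions of ipa-client-install have a -U/--unattended option that skips all interactive prompts; check ipa-client-install --help on your system first, since availability depends on the version:
ipa-client-install --uninstall --unattended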

Related

Send command after exec [duplicate]

How to connect via scp without password using a shell? [duplicate]

I know it is not recommended, but is it at all possible to pass the user's password to scp?
I'd like to copy a file via scp as part of a batch job and the receiving server does, of course, need a password and, no, I cannot easily change that to key-based authentication.
Use sshpass:
sshpass -p "password" scp -r user@example.com:/some/remote/path /some/local/path
or, so the password does not show up in the bash history:
sshpass -f "/path/to/passwordfile" scp -r user@example.com:/some/remote/path /some/local/path
The above copies the contents of the remote path to your local machine.
Install:
ubuntu/debian
apt install sshpass
centos/fedora
yum install sshpass
mac w/ macports
port install sshpass
mac w/ brew
brew install https://raw.githubusercontent.com/kadwanev/bigboybrew/master/Library/Formula/sshpass.rb
Or just generate an SSH key:
ssh-keygen -t rsa -C "your_email@youremail.com"
copy the content of ~/.ssh/id_rsa.pub
and lastly add it to the remote machine's ~/.ssh/authorized_keys
Make sure the remote machine has permissions 0700 on the ~/.ssh folder and 0600 on ~/.ssh/authorized_keys.
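A minimal sketch of that key-based flow (user@remote-host is a placeholder; ssh-copy-id takes care of appending the key and fixing permissions for you):
ssh-keygen -t rsa -C "your_email@youremail.com"             # generate the key pair, accepting the default path
ssh-copy-id user@remote-host                                # appends ~/.ssh/id_rsa.pub to the remote ~/.ssh/authorized_keys
scp /some/local/path user@remote-host:/some/remote/path     # no password prompt once the key is in place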
If you are connecting to the server from Windows, the PuTTY version of scp ("pscp") lets you pass the password with the -pw parameter.
This is mentioned in the documentation here.
curl can be used as an alternative to scp to copy a file, and it supports a password on the command line.
curl --insecure --user username:password -T /path/to/sourcefile sftp://desthost/path/
You can script it with a tool like expect (there are handy bindings too, like Pexpect for Python).
You can use an 'expect' script in a Unix terminal.
For example, create 'test.exp':
#!/usr/bin/expect
spawn scp /usr/bin/file.txt root@<ServerLocation>:/home
set pass "Your_Password"
expect {
password: {send "$pass\r"; exp_continue}
}
run the script
expect test.exp
I hope that helps.
You can use ssh-copy-id to add an SSH key:
$ which ssh-copy-id   # check whether it exists
If it exists:
ssh-copy-id "user@remote-system"
Here is an example of how to do it with the Expect tool (here from Perl, using the Expect module):
sub copyover {
$scp = Expect->spawn("/usr/bin/scp ${srcpath}/$file $who:${destpath}/$file");
$scp->expect(30,"ssword: ") || die "Never got password prompt from $dest:$!\n";
print $scp 'password' . "\n";
$scp->expect(30,"-re",'$\s') || die "Never got prompt from parent system:$!\n";
$scp->soft_close();
return;
}
Nobody mentioned it, but PuTTY scp (pscp) has a -pw option for the password.
Documentation can be found here: https://the.earth.li/~sgtatham/putty/0.67/htmldoc/Chapter5.html#pscp
Once you set up ssh-keygen as explained above, you can do
scp -i ~/.ssh/id_rsa /local/path/to/file remote@ip.com:/path/in/remote/server/
If you want to lessen typing each time, you can modify your .bash_profile file and put
alias remote_scp='scp -i ~/.ssh/id_rsa /local/path/to/file remote@ip.com:/path/in/remote/server/'
Then from your terminal do source ~/.bash_profile. Afterwards if you type remote_scp in your terminal it should run the scp command without password.
Here's a poor man's Linux/Python/Expect-like example based on this blog post: Upgrading simple shells to fully interactive
TTYs. I needed this for old machines where I can't install Expect or add modules to Python.
Code:
(
echo 'scp jmudd@mysite.com:./install.sh .'
sleep 5
echo 'scp-passwd'
sleep 5
echo 'exit'
) |
python -c 'import pty; pty.spawn("/usr/bin/bash")'
Output:
scp jmudd@mysite.com:install.sh .
bash-4.2$ scp jmudd@mysite.com:install.sh .
Password:
install.sh 100% 15KB 236.2KB/s 00:00
bash-4.2$ exit
exit
Make sure password authentication is enabled on the target server. If it runs Ubuntu, open /etc/ssh/sshd_config on the server, find any PasswordAuthentication no lines and comment them out (put # at the start of the line), save the file, and run sudo systemctl restart ssh to apply the configuration. If there is no such line, then you're done.
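A sketch of doing that edit non-interactively (assuming the line is spelled exactly "PasswordAuthentication no"; adjust the pattern otherwise):
sudo sed -i 's/^PasswordAuthentication no/#&/' /etc/ssh/sshd_config   # comment the line out
sudo systemctl restart ssh                                            # apply the new configuration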
Add -o PreferredAuthentications="password" to your scp command, e.g.:
scp -o PreferredAuthentications="password" /path/to/file user@server:/destination/directory
Make sure you have the "expect" tool installed; if not, install it:
# apt-get install expect
Create a script file with the following content (# vi /root/scriptfile):
spawn scp /path_from/file_name user_name_here@to_host_name:/path_to
expect "password:"
send "put_password_here\n";
interact
Execute the script file with the "expect" tool:
# expect /root/scriptfile
To copy files from one server to another server (in scripts):
Install PuTTY on Ubuntu or other Linux machines; PuTTY comes with pscp, and we can copy files with pscp.
apt-get update
apt-get install putty
echo n | pscp -pw "Password#1234" -r user_name@source_server_IP:/copy_file_path/files /path_to_copy/files
For more options see pscp help.
Using SCP non interactively from Windows:
Install the Community Edition of NetCmdlets
Import the module
Use Send-PowerShellServerFile -AuthMode password -User MyUser -Password not-secure -Server YourServer -LocalFile C:\downloads\test.txt -RemoteFile C:\temp\test.txt for sending File with non-interactive password
If you observe a strict host key checking error, then use the -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null options.
The complete example is as follows:
sshpass -p "password" scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@domain-name.com:/tmp/from/psoutput /tmp/to/psoutput
You can use the steps below. This works for me!
Step 1:
Create a plain file, say "fileWithScpPassword", which contains the SSH password for the destination server.
Step 2: Use sshpass -f followed by the password file name and then the normal scp command.
sshpass -f "fileWithScpPassword" scp /filePathToUpload user@ip:/destinationPath/
One easy way I do this:
Use the same scp command as you would with SSH keys, i.e.
scp -C -i <path_to_openssh_key> <local_file_path> user@<ip_address_VM>:<remote_file_path>
for transferring a file from local to remote,
but instead of providing the correct <path_to_openssh_key>, use some garbage path. Because of the wrong key path you will be asked for the password instead, and you can simply type the password to get the work done.
An alternative would be to add the public half of the user's key to the authorized_keys file on the target system. On the system you are initiating the transfer from, you can run an ssh-agent daemon and add the private half of the key to the agent. The batch job can then be configured to use the agent to get the private key, rather than prompting for the key's password.
This should be doable on either a UNIX/Linux system or on a Windows platform using pageant and pscp.
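A minimal sketch of that agent setup (assuming the private key is at the default ~/.ssh/id_rsa; user@target is a placeholder):
eval "$(ssh-agent -s)"                        # start the agent for this shell session
ssh-add ~/.ssh/id_rsa                         # asks for the key passphrase once, then caches it
scp /local/file user@target:/remote/path      # the batch job now uses the cached key, no prompt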
All the solutions mentioned above work only if you have the relevant app installed, or have admin rights to install expect or sshpass.
I found this very useful link for simply starting the scp in the background:
$ nohup scp file_to_copy user@server:/path/to/copy/the/file > nohup.out 2>&1
https://charmyin.github.io/scp/2014/10/07/run-scp-in-background/
I found this really helpful answer here.
rsync -r -v --progress -e ssh user@remote-system:/address/to/remote/file /home/user/
Not only can you enter the password there, but it will also show a progress bar while copying. Really awesome.

Shell Script to run on Local and Remote machine

I am new to shell scripting,
I am trying to write a script that'll run on my local machine.
Few of it's commands are to run on my local and then a few on the remote server.
Below is a sample script -
The 1st two will run on my local system,
rest of them are to run on the remote server.
eg -
scp -i permissions.pem someJar.jar ubuntu@ip:/var/tmp
ssh -i permissions.pem ubuntu@ip
sudo su
service someService stop
rm -rf /home/ubuntu/someJar.jar
rm -rf /home/ubuntu/loggingFile.log
mv /var/tmp/someJar.jar .
service someService start
As the script will run on my local machine,
How do make sure the 3rd and further commands take effect on the remote server and not on my machine?
Here's my sample.sh file -
scp -i permissions.pem someJar.jar ubuntu@ip:/var/tmp
SCRIPT="sudo su; ps aux | grep java; service someService stop; ps aux | grep java; service someService start; ps aux | grep java;"
ssh -i permissions.pem ubuntu@ip $SCRIPT
The scp works, but nothing is displayed after that.
You need to pass the rest of the script as a parameter to SSH. Try this format:
SCRIPT="sudo su; command1; command2; command3;"
ssh -i permissions.pem ubuntu@ip "$SCRIPT"
See: http://linuxcommand.org/man_pages/ssh1.html
Hope this helps.
Update:
The reason why you don't see anything after running the command is because sudo expects the password. To avoid this there are three solutions:
Give the ubuntu user the permissions needed to perform all the tasks in the script.
Pass the password to sudo on stdin in SCRIPT: echo 'password' | sudo -S su; ...
Modify sudo-er file and allow ubuntu user to sudo without prompting for password. Run sudo visudo and add the following line: ubuntu ALL = NOPASSWD : ALL
Each system admin will have a different approach to this problem. I think everyone will agree that option 2 is the least secure. The rest is up for debate. In my opinion option 3 is slightly more secure. Yet, the entire server is compromised if your key is compromised. The most secure is option 1. While it is painful to assign individual permissions, by doing so you are limiting your exposure to the assigned permissions in case your key is compromised.
One more note: it might be beneficial to replace ; with && in the SCRIPT. By doing so you ensure that the second command runs only if the first one finished successfully, the third only if the second finished successfully, and so on.
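Putting it together, a minimal sketch of sample.sh assuming option 3 (passwordless sudo for the ubuntu user) and using && between the remote commands; the host, key and service names are the placeholders from the question:
#!/bin/bash
scp -i permissions.pem someJar.jar ubuntu@ip:/var/tmp
ssh -i permissions.pem ubuntu@ip '
  sudo service someService stop &&
  sudo rm -f /home/ubuntu/someJar.jar /home/ubuntu/loggingFile.log &&
  sudo mv /var/tmp/someJar.jar /home/ubuntu/ &&
  sudo service someService start
'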

Why does my rpm installation hang when run remotely?

I have an AIX 6.1 server where I want to uninstall an rpm.
This uninstallation can be done directly on the server:
[user@server]$ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
This uninstallation works.
I have a script launching this uninstallation:
Uninstall.sh
#!/usr/bin/bash
set -x
sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
I can run this script on the server without any problem:
[user@server]$ cd /where/is/the/script;./Uninstall.sh
+ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
_MyRPM-1.0.0 has been uninstalled successfully
But when I run this script remotely, the rpm command hangs:
[user@client]$ ssh user@server "cd /where/is/the/script;./Uninstall.sh"
+ sudo /usr/bin/rpm -e --allmatches _MyRPM-1.0.0
This command hangs, and I need to kill it in order to end the ssh session.
PS: I see exactly the same behaviour for installation and uninstallation.
EDIT :
The problem seems to come from sudo. The hang also appears when I do anything with sudo.
For example, with a new script:
test.sh
#!/usr/bin/bash
set -x
sudo env
Sudo normally requires that a user authenticate as themselves, and if I recall correctly it can act differently under remote execution because of the way the terminal is handled.
I don't have a system to test this on at the moment, but you could try ssh's -t or -T switches:
-T Disable pseudo-tty allocation.
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
I suspect you could get this to work by adding the script you're remotely executing into /etc/sudoers:
{user} ALL=NOPASSWD:/where/is/the/script/Uninstall.sh
Then try:
"ssh -t user#server /where/is/the/script/Uninstall.sh"
EDIT:
Found some details to help explain why sudo is behaving differently when executed remotely:
http://www.sudo.ws/sudoers.man.html
The sudoers security policy requires that
most users authenticate themselves before they can use sudo. A
password is not required if the invoking user is root, if the target
user is the same as the invoking user, or if the policy has disabled
authentication for the user or command.
Perhaps it's hanging because it's trying to authenticate, whereas locally it wouldn't need to do so.
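As a quick test of that theory (user@server is a placeholder), forcing a pseudo-tty gives sudo a terminal to prompt on instead of hanging silently:
ssh -t user@server "cd /where/is/the/script && ./Uninstall.sh"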

How can I automate running commands remotely over SSH to multiple servers in parallel?

I've searched around a bit for similar questions, but other than running one command or perhaps a few commands with items such as:
ssh user@host -t sudo su -
However, what if I essentially need to run a script on (let's say) 15 servers at once? Is this doable in bash? In a perfect world I need to avoid installing applications if at all possible to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point to me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know those as well (working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a Here document:
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given but I find gsh to be more convenient. You set up a /etc/ghosts file containing your servers in groups such as web, db, RHEL4, x86_64, or whatever (man ghosts) then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
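For example, a minimal sketch of such a wrapper, assuming key-based authentication is already in place and that hosts.txt (one hostname per line) and deploy.sh are your own files:
#!/bin/bash
# Run the local script on every host in parallel, capturing each host's output.
while read -r host; do
    ssh -o BatchMode=yes "$host" 'bash -s' < ./deploy.sh > "out.$host" 2>&1 &
done < hosts.txt
wait   # block until every background ssh has finished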
You can try paramiko. It's a pure-python ssh client. You can program your ssh sessions. Nothing to install on remote machines.
See this great article on how to use it.
To give you the structure, without actual code.
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
(For my own use, I wrote a simpler small script very similar to GavinCattell's one; it is documented here - in French).
Have you looked at things like Puppet or Cfengine? They can do what you want and probably much more.
For those that stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: Running arbitrary commands on multiple hosts over ssh.
Once fabric is installed, you'd create a fabfile.py, and implement tasks that can be run on your remote hosts. For example, a task to Reload Apache might look like this:
from fabric.api import env, run
env.hosts = ['host1@example.com', 'host2@example.com']
def reload():
""" Reload Apache """
run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command would get run on all the hosts specified in env.hosts.
You can do it the same way you did before, just script it instead of doing it manually. The following code remotes to machine named 'loca' and runs two commands there. What you need to do is simply insert commands you want to run there.
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
ssh $box 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or with zsh and the libwww plug-in.
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's commandline, this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use perl like ssh tomcatuser@server perl -e 'do-everything-on-one-line;' or you could do this:
either scp the_package.tbz tomcatuser@server:the_place/.
ssh tomcatuser@server /bin/sh <<\EOF
define stuff like TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps
tar xj the_package.tbz or rsync rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war [you don't normally have to restart tomcat]
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart [might need to login as apache user to move that file and restart]
EOF
You want DSH or distributed shell, which is used in clusters a lot. Here is the link: dsh
You basically have node groups (a file with lists of nodes in them) and you specify which node group you wish to run commands on then you would use dsh, like you would ssh to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on the system, so a shared NFS directory can be useful for these sorts of things.
This creates an ssh shortcut command, named after the host, for every machine you have accessed (read from known_hosts).
by Quierati
http://pastebin.com/pddEQWq2
#Use in .bashrc
#Use "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config
# If known_hosts is encrypted and delete known_hosts
[ ! -d ~/bin ] && mkdir ~/bin
for host in `cut -d, -f1 ~/.ssh/known_hosts|cut -f1 -d " "`;
do
[ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example execution:
$ for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with a click of a button. It is a desktop GUI app that runs on Mac and Windows but there is also a command line java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
Which will run the script (which resides locally) on the remote server.
I just read a new blog post about using setsid, without any further installation/configuration beyond the mainstream kernel. Tested/verified under Ubuntu 14.04.
While the author has a very clear explanation and sample code as well, here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found, without installing or configuring much software, is to use plain old tmux. Say you have 9 Linux servers. Pick a box as your main. Start a tmux session:
tmux
Then create 9 split tmux panes by doing this 8 times:
ctrl-b + %
Now SSH into each box in each pane. You'll need to know some tmux shortcuts. To navigate, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes in each pane, turn on pane synchronization, which lets you type the same thing into each box:
ctrl+b :setw synchronize-panes on
Now when you press any keys, they will show up in every pane. To turn it off, just change on to off. To cycle through pane layouts, press ctrl+b <space-bar>.
This works a lot better for me since I need to see each terminal's output, as sometimes servers crash or hang for whatever reason when downloading or upgrading software. Any issues, you can just isolate and resolve individually.
