Unable to run script remotely on VIO servers? - shell

I need to run a KSH script on a VIO server remotely, but since the VIO server uses a restricted shell, I tried the following:
ssh -q -T padmin@vioserver "oem_setup_env" < script.ksh
This worked fine last time, but when I tried again today it threw an error:
rksh: oem_setup_env: not found
Can someone suggest how to run scripts remotely on VIO servers?

I assume you are using keys so you can log in without using a password. If the previous sentence makes no sense to you, we can address that as well. Just ask.
VIOS is just AIX so it has a root user. You can find the path of root’s home with echo ~root. As I recall, it is usually /. So, become root by running oem_setup_env. Create ~root/.ssh. Copy your public key into ~root/.ssh/authorized_keys. Check all the permissions: they should be owned by root and be either 0700 or 0600 (not readable or writable by others). Then use ssh root@host ...
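A minimal sketch of that setup, assuming you have already copied your public key onto the VIOS (the /tmp/mykey.pub location and the script path are illustrative):
# run oem_setup_env first to get a root shell, then:
mkdir -p ~root/.ssh && chmod 700 ~root/.ssh
cat /tmp/mykey.pub >> ~root/.ssh/authorized_keys   # your public key; path is an assumption
chmod 600 ~root/.ssh/authorized_keys
chown -R root ~root/.ssh

# afterwards, from your client machine:
ssh root@vioserver /path/to/script.ksh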

I stumbled upon this question when looking for a way to run commands as root through ssh. I found this to work:
ssh padmin@vioshost "echo lparstat -i | oem_setup_env"
lparstat -i can be substituted with any command that needs to run as root.
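Building on that pattern, a script that already exists on the VIOS could presumably be run as root the same way (the script path here is illustrative):
ssh padmin@vioshost 'echo "/home/padmin/myscript.ksh" | oem_setup_env'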

Related

How to return to the script once sudo is done

I am on a server and running a script file that has the following code:
ssh username@servername
sudo su - root
cd /abc/xyz
mkdir asdfg
I am able to ssh in, but the next commands are not run; the script is not sudo-ing. Any idea?
Edit: I was able to create a mech id and do the work that way, though I'm still looking for the answer to the question above. :|
First of all, your command will get "stuck" on the first line because it goes into interactive mode. The ssh command will ask the user for a password (unless an SSH key is being used), and once it is logged into the remote server it will wait for commands on standard input.
Secondly, the lines following the ssh command will only be executed after the first process has exited. This is why your script is not "sudoing": it is waiting for the ssh to end.
So if your goal is to run a command on a remote server, put the command on the same line as the ssh invocation. In your case:
ssh user@server sudo su - root
But this will probably not satisfy you. I suggest you create a script of what you want to execute on the remote server and then execute that script:
ssh user@server scriptName
The sudo part is tricky here, because your script might again get stuck in interactive mode waiting for a password to be entered, so I suggest you rethink the script with that in mind.
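For the commands in the original question, a minimal sketch along those lines (assuming passwordless sudo is configured on the remote host; otherwise it will still prompt) would be:
# run everything in one ssh invocation; mkdir -p creates the whole path in one step
ssh username@servername 'sudo mkdir -p /abc/xyz/asdfg'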
mb47!
You want to run the script on the remote computer, correct?
On the remote machine, create a file containing the commands you would like to execute.
Then, on the other machine, run ssh user@machine /path/to/script/you/created/earlier
I hope this helps!
ALinuxLover

Injecting bash prompt to remote host via ssh

I have a fancy prompt working well on my local machine. However, I log in to multiple machines, on different accounts, via ssh. I would love to have my prompt synchronized everywhere by the ssh command itself.
Any idea how to do that? In many cases I'm accessing machines using the root account and I can't permanently change any settings there, but I still want the prompt synchronized.
In principle this is just a matter of setting the PS1 variable.
Try this:
ssh -l root host -t "bash --rcfile /path/to/special/bashrc"
where /path/to/special/bashrc could be, for example, /tmp/myrc.
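If the remote machines don't already have such a file, a small sketch is to copy your rc file somewhere temporary first and then start bash with it (the file names are illustrative):
# push the prompt definition, then open an interactive shell that uses it
scp ~/.my_prompt_rc root@host:/tmp/myrc
ssh -t root@host 'bash --rcfile /tmp/myrc'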

cp command fails when run in a script called by Hudson

This one is a puzzler. If I run a command from the command line to copy a file remotely, it works perfectly. If I run that same command inside a script on the server that hosts Hudson, it also runs perfectly, and the same goes for running the job as the hudson user from the command line. However, if I run that exact command as a function inside a bash script from a Hudson job, it fails with:
cp: cannot stat '/opt/flash_board.tar.gz': No such file or directory
The variable is defined as:
original_tarball=flash_board.tar.gz
and is in scope (variable expansion works correctly in the script).
The original command is:
ssh -n -o stricthostkeychecking=no root@$IP_ADDRESS ssh -n -o stricthostkeychecking=no 169.254.0.2 cp /opt/$original_tarball /opt/$original_tarball.bak
I've also tried it as:
ssh -n -p 1601 -o stricthostkeychecking=no root@$IP_ADDRESS cp /opt/$original_tarball /opt/$original_tarball.bak
which points to the correct port, but fails in exactly the same way.
For reference, all the variables have been checked and are valid. I originally thought this was a substitution error, but that doesn't seem to be the case, so I then tried running it with Hudson credentials as:
sudo -u hudson ssh -n -o stricthostkeychecking=no root@$IP_ADDRESS ssh -n -o stricthostkeychecking=no 169.254.0.2 cp /opt/$original_tarball /opt/$original_tarball.bak
I get the exact same results (it works). So it's only when this command is run from a Hudson job that it fails.
Here's the sequence of events:
Hudson job sets parameters & calls a shell script.
A function inside the script tries to copy the files remotely from an embedded Montevista (Linux) board across an SPI bus to a second embedded Arago (Linux) board.
Both boards are physically on the same motherboard, but there's no way to directly access the Arago board except through a serial console session (which isn't feasible; this is an automation job that runs across the network).
I've tried this using ssh with -p 1601 (the correct port to the Arago side).
Can I use scp to copy a remote file to the same location as the remote file with a different file extension?
Something like:
scp -o stricthostkeychecking=no root@$IP_ADDRESS /opt/$original_tarball /opt/$original_tarball.bak
I had a couple of the devs take a look at this and they were stumped as well. Anyone got any ideas on (A) why this fails and (B) how to work around it? I'm pretty sure I can write a script to run locally on the remote machine, but that doesn't seem like it should be necessary.
Oh, and if I run the exact same command on the Montevista board (which means I don't have to go across the SPI bus to 169.254.0.2), it works perfectly from the Hudson job.
So, this turned out to be something completely unrelated to the question. I broke the problem down into little pieces with a test Hudson script, adding more and more complexity from the original script till it failed as before.
It turned out to be pilot error. I'd written an if statement to differentiate between the two boards (Arago & Montevista), and then abstracted out the variables passed to the if statement to the point where it was ambiguous which board was being passed in. The if logic therefore always grabbed the first match (as it should), and the flash script I was trying to copy on the Arago board didn't exist on the Montevista board (well, it has a different name), so the error returned was absolutely correct.
Sorry for the spin up and thanks for all the effort to help.
cp: cannot stat '/opt/flash_board.tar.gz': No such file or directory
This is saying that Hudson cannot see the file. I would add an ls -la /opt to that shell script of yours. This will show you the permissions on the /opt directory and whether your script can list that file.
While you're at it, run df on the Hudson machine too and see if that /opt directory is a remote mount or something else that could be problematic.
You've already said that you logged in as the user that runs the Hudson task and execute it from the workspace directory.
Right now, I suspect that the directory permission is an issue.
The obvious way this goes wrong is that the command is somehow being run on the wrong machine, possibly due to either a line-length limit or to weird quoting issues.
I'd try changing the command to … uname -a or … hostname -f to see if you get the right machine. Or, alternatively, … cp /proc/cpuinfo /tmp/this-machine and then see which machine gets the file.
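Concretely, mirroring the double-hop command from the question, that sanity check might look like this (run from the same Hudson job that fails):
# print which machine actually runs the command, and whether it can see the tarball
ssh -n -o stricthostkeychecking=no root@$IP_ADDRESS \
    "ssh -n -o stricthostkeychecking=no 169.254.0.2 'hostname; ls -la /opt/'"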
edit: I see now that the OP has answered his own question. I'll leave this here in case it helps any future visitors with similar issues. I should add "or not running the command you think you're running" to the list of reasons why this can happen.

ssh-agent and crontab -- is there a good way to get these to meet?

I wrote a simple script which mails out svn activity logs nightly to our developers. Until now, I've run it on the same machine as the svn repository, so I didn't have to worry about authentication, I could just use svn's file:/// address style.
Now I'm running the script on a home computer, accessing a remote repository, so I had to change to svn+ssh:// paths. With an ssh key nicely set up, I never have to enter passwords for accessing the svn repository under normal circumstances.
However, crontab does not have access to my ssh keys / ssh-agent. I've read about this problem in a few places on the web, and it's also alluded to here, without resolution:
Why ssh fails from crontab but succeeds when executed from a command line?
My solution was to add this to the top of the script:
### TOTAL HACK TO MAKE SSH-KEYS WORK ###
eval `ssh-agent -s`
This seems to work under MacOSX 10.6.
My question is, how terrible is this, and is there a better way?
keychain is what you need! Just install it and add the following code to your .bash_profile:
keychain ~/.ssh/id_dsa
Then use the code below in your script to load the ssh-agent environment variables:
. ~/.keychain/$HOSTNAME-sh
Note: keychain also generates code for csh and fish shells.
In addition, if your key has a passphrase, keychain will only ask for it once (valid until you reboot the machine or kill the ssh-agent).
Copied answer from https://serverfault.com/questions/92683/execute-rsync-command-over-ssh-with-an-ssh-agent-via-crontab
When you run ssh-agent -s, it launches a background process that you'll need to kill later. So, the minimum is to change your hack to something like:
eval `ssh-agent -s`
svn stuff
kill $SSH_AGENT_PID
However, I don't understand how this hack is working: simply running an agent, without also running ssh-add, will not load any keys. Perhaps MacOS's ssh-agent behaves differently than its manual page says it does.
I had a similar problem. My script (that relied upon ssh keys) worked when I ran it manually but failed when run with crontab.
Manually defining the appropriate key with
ssh -i /path/to/key
didn't work.
But eventually I found out that SSH_AUTH_SOCK was empty when crontab was running SSH. I wasn't exactly sure why, but I simply ran
env | grep SSH
copied the returned value, and added this definition to the head of my crontab:
SSH_AUTH_SOCK="/tmp/value-you-get-from-above-command"
I'm out of depth as to what's happening here, but it fixed my problem. The crontab runs smoothly now.
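In crontab form this ends up looking roughly like the following (the socket path and schedule are illustrative; note the path changes whenever the agent restarts, which is why the next answers recompute it dynamically):
SSH_AUTH_SOCK="/tmp/ssh-AbCdE12345/agent.12345"
# nightly svn activity mail at 02:00 (script name is hypothetical)
0 2 * * * /home/user/svn-activity-mail.sh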
One way to recover the pid and socket of a running ssh-agent would be:
# find the pid(s) of any ssh-agent running as this user
SSH_AGENT_PID=`pgrep -U $USER ssh-agent`
for PID in $SSH_AGENT_PID; do
    # the agent socket is named agent.<ppid>, which is usually the agent's pid minus 1
    let "FPID = $PID - 1"
    FILE=`find /tmp -path "*ssh*" -type s -iname "agent.$FPID"`
    export SSH_AGENT_PID="$PID"
    export SSH_AUTH_SOCK="$FILE"
done
This of course presumes that you have pgrep installed on the system and that there is only one ssh-agent running; if there are several, it will take the one that pgrep finds last.
My solution - based on pra's - slightly improved to kill process even on script failure:
eval `ssh-agent`
function cleanup {
    /bin/kill $SSH_AGENT_PID
}
trap cleanup EXIT
ssh-add
svn-stuff
Note that I must call ssh-add on my machine (Scientific Linux 6).
To set up automated processes without automated password/passphrase hacks,
I use a separate IdentityFile that has no passphrase, and restrict the target machines' authorized_keys entries with a prefix like from="automated.machine.com" ... etc.
I created a public-private keyset for the sending machine without a passphrase:
ssh-keygen -f .ssh/id_localAuto
(Hit return when prompted for a passphrase)
I set up a remoteAuto Host entry in .ssh/config:
Host remoteAuto
    HostName remote.machine.edu
    IdentityFile ~/.ssh/id_localAuto
and the remote.machine.edu:.ssh/authorized_keys with:
...
from="192.168.1.777" ssh-rsa ABCDEFGabcdefg....
...
Then ssh doesn't need the external authorization provided by ssh-agent or keychain, so you can use commands like:
scp -p remoteAuto:watchdog ./watchdog_remote
rsync -Ca remoteAuto/stuff/* remote_mirror
svn svn+ssh://remoteAuto/path
svn update
...
Assuming that you have already configured your SSH settings and that the script works fine from a terminal, using keychain is definitely the easiest way to ensure that the script works fine in crontab as well.
Since keychain is not included in most Unix/Linux distributions, here is the step-by-step procedure.
1. Download the appropriate rpm package depending on your OS version from http://pkgs.repoforge.org/keychain/. Example for CentOS 6:
wget http://pkgs.repoforge.org/keychain/keychain-2.7.0-1.el6.rf.noarch.rpm
2. Install the package:
sudo rpm -Uvh keychain-2.7.0-1.el6.rf.noarch.rpm
3. Generate keychain files for your SSH key; they will be located in the ~/.keychain directory. Example for id_rsa:
keychain ~/.ssh/id_rsa
4. Add the following line to your script, anywhere before the first command that uses SSH authentication:
source ~/.keychain/$HOSTNAME-sh
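Putting steps 3 and 4 together, a cron-driven script would then look something like this sketch (the rsync destination is purely illustrative):
#!/bin/bash
# load the ssh-agent environment that keychain recorded for this host
source ~/.keychain/$HOSTNAME-sh
# anything below can now use key-based SSH non-interactively
rsync -a /data/ backup@backuphost:/backups/data/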
I personally tried to avoid using additional programs for this, but everything else I tried didn't work. This worked just fine.
Inspired by some of the other answers here (particularly vpk's) I came up with the following crontab entry, which doesn't require an external script:
PATH=/usr/bin:/bin:/usr/sbin:/sbin
* * * * * SSH_AUTH_SOCK=$(lsof -a -p $(pgrep ssh-agent) -U -F n | sed -n 's/^n//p') ssh hostname remote-command-here
Here is a solution that will work if you can't use keychain and if you can't start an ssh-agent from your script (for example, because your key is passphrase-protected).
Run this once:
nohup ssh-agent > .ssh-agent-file &
. .ssh-agent-file
ssh-add    # you'd enter your passphrase here
In the script you are running from cron:
# start of script
. ${HOME}/.ssh-agent-file
# now your key is available
Of course this allows anyone who can read '~/.ssh-agent-file' and the corresponding socket to use your ssh credentials, so use with caution in any multi-user environment.
Your solution works, but it will spawn a new agent process every time, as some other answers have already pointed out.
I faced similar issues and found this blog post useful, as well as the shell script by Wayne Walker mentioned in the blog on GitHub.
Good luck!
Not enough reputation to comment on @markshep's answer, so I just wanted to add a simpler solution: lsof was not listing the socket for me without sudo, but find is enough:
* * * * * SSH_AUTH_SOCK="$(find /tmp/ -type s -path '/tmp/ssh-*/agent.*' -user $(whoami) 2>/dev/null)" ssh-command
The find command searches the /tmp directory for sockets whose full path name matches that of ssh-agent socket files and which are owned by the current user. It redirects stderr to /dev/null to ignore the many permission-denied errors that find usually produces on directories it doesn't have access to.
The solution assumes only one socket will be found for that user.
The target and path match might need modification for other distributions, ssh versions, or configurations, but it should be straightforward.

How can I automate running commands remotely over SSH to multiple servers in parallel?

I've searched around a bit for similar questions, but they only cover running one command, or perhaps a few commands, with something like:
ssh user@host -t sudo su -
However, what if I essentially need to run a script on (let's say) 15 servers at once? Is this doable in bash? In a perfect world I'd like to avoid installing applications to pull this off. For argument's sake, let's just say that I need to do the following across 10 hosts:
Deploy a new Tomcat container
Deploy an application in the container, and configure it
Configure an Apache vhost
Reload Apache
I have a script that does all of that, but it relies on me logging into all the servers, pulling a script down from a repo, and then running it. If this isn't doable in bash, what alternatives do you suggest? Do I need a bigger hammer, such as Perl (Python might be preferred, since I can guarantee Python is on all boxes in a RHEL environment thanks to yum/up2date)? If anyone can point me to any useful information it'd be greatly appreciated, especially if it's doable in bash. I'll settle for Perl or Python, but I just don't know those as well (I'm working on that). Thanks!
You can run a local script as shown by che and Yang, and/or you can use a Here document:
ssh root@server /bin/sh <<\EOF
wget http://server/warfile # Could use NFS here
cp app.war /location
command 1
command 2
/etc/init.d/httpd restart
EOF
Often, I'll just use the original Tcl version of Expect. You only need to have that on the local machine. If I'm inside a program using Perl, I do this with Net::SSH::Expect. Other languages have similar "expect" tools.
The issue of how to run commands on many servers at once came up on a Perl mailing list the other day, and I'll give the same recommendation I gave there, which is to use gsh:
http://outflux.net/unix/software/gsh
gsh is similar to the "for box in box1_name box2_name box3_name" solution already given, but I find gsh more convenient. You set up an /etc/ghosts file containing your servers in groups such as web, db, RHEL4, x86_64, or whatever (man ghosts), then you use that group when you call gsh.
[pdurbin@beamish ~]$ gsh web "cat /etc/redhat-release; uname -r"
www-2.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-2.foo.com: 2.6.9-78.0.1.ELsmp
www-3.foo.com: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
www-3.foo.com: 2.6.9-78.0.1.ELsmp
www-4.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-4.foo.com: 2.6.18-92.1.13.el5
www-5.foo.com: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
www-5.foo.com: 2.6.18-92.1.13.el5
[pdurbin@beamish ~]$
You can also combine or split ghost groups, using web+db or web-RHEL4, for example.
I'll also mention that while I have never used shmux, its website contains a list of software (including gsh) that lets you run commands on many servers at once. Capistrano has already been mentioned and (from what I understand) could be on that list as well.
Take a look at Expect (man expect)
I've accomplished similar tasks in the past using Expect.
You can pipe the local script to the remote server and execute it with one command:
ssh -t user@host 'sh' < path_to_script
This can be further automated by using public key authentication and wrapping with scripts to perform parallel execution.
You can try paramiko. It's a pure-python ssh client. You can program your ssh sessions. Nothing to install on remote machines.
See this great article on how to use it.
To give you the structure, without actual code.
Use scp to copy your install/setup script to the target box.
Use ssh to invoke your script on the remote box.
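A minimal sketch of that two-step structure (the script name and paths are illustrative):
# copy the setup script over, then run it remotely
scp setup.sh user@host:/tmp/setup.sh
ssh user@host 'bash /tmp/setup.sh && rm /tmp/setup.sh'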
pssh may be interesting since, unlike most solutions mentioned here, the commands are run in parallel.
(For my own use, I wrote a simpler small script very similar to GavinCattell's; it is documented here, in French.)
Have you looked at things like Puppet or Cfengine? They can do what you want and probably much more.
For those who stumble across this question, I'll include an answer that uses Fabric, which solves exactly the problem described above: running arbitrary commands on multiple hosts over ssh.
Once Fabric is installed, you'd create a fabfile.py and implement tasks that can be run on your remote hosts. For example, a task to reload Apache might look like this:
from fabric.api import env, run

env.hosts = ['host1@example.com', 'host2@example.com']

def reload():
    """ Reload Apache """
    run("sudo /etc/init.d/apache2 reload")
Then, on your local machine, run fab reload and the sudo /etc/init.d/apache2 reload command will be run on all the hosts specified in env.hosts.
You can do it the same way you did before, just script it instead of doing it manually. The following code connects to a machine named 'loca' and runs two commands there. What you need to do is simply insert the commands you want to run:
che@ovecka ~ $ ssh loca 'uname -a; echo something_else'
Linux loca 2.6.25.9 #1 (blahblahblah)
something_else
Then, to iterate through all the machines, do something like:
for box in box1_name box2_name box3_name
do
    ssh $box 'commands_to_run_everywhere'
done
In order to make this ssh thing work without entering passwords all the time, you'll need to set up key authentication. You can read about it at IBM developerworks.
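If the commands should run on all the boxes at the same time rather than one after another, a small variation on the loop above is to background each ssh and wait for them all to finish (assuming key authentication is already in place):
for box in box1_name box2_name box3_name
do
    ssh -n $box 'commands_to_run_everywhere' > "$box.log" 2>&1 &
done
wait    # block until every background ssh has exited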
You can run the same command on several servers at once with a tool like cluster ssh. The link is to a discussion of cluster ssh on the Debian package of the day blog.
Well, for steps 1 and 2, isn't there a Tomcat manager web interface? You could script that with curl, or with zsh and the libwww plugin.
For SSH you're looking to:
1) not get prompted for a password (use keys)
2) pass the command(s) on SSH's command line; this is similar to rsh in a trusted network.
Other posts have shown you what to do, and I'd probably use sh too, but I'd be tempted to use Perl, like ssh tomcatuser@server perl -e 'do-everything-on-one-line;', or you could do this:
either scp the_package.tbz tomcatuser@server:the_place/.
ssh tomcatuser@server /bin/sh <<\EOF
# define stuff, e.g. TOMCAT_WEBAPPS=/usr/local/share/tomcat/webapps
tar xjf the_package.tbz   # or: rsync rsync://repository/the_package_place
mv $TOMCAT_WEBAPPS/old_war $TOMCAT_WEBAPPS/old_war.old
mv $THE_PLACE/new_war $TOMCAT_WEBAPPS/new_war
touch $TOMCAT_WEBAPPS/new_war   # you don't normally have to restart tomcat
mv $THE_PLACE/vhost_file $APACHE_VHOST_DIR/vhost_file
$APACHECTL restart   # might need to log in as the apache user to move that file and restart
EOF
You want dsh, or distributed shell, which is used a lot in clusters. Here is the link: dsh
You basically have node groups (a file with lists of nodes in them), and you specify which node group you wish to run commands on; then you use dsh, much like you would use ssh, to run commands on them.
dsh -a /path/to/some/command/or/script
It will run the command on all the machines at the same time and return the output prefixed with the hostname. The command or script has to be present on the system, so a shared NFS directory can be useful for these sorts of things.
This creates an ssh wrapper command in ~/bin for every machine you have accessed.
by Quierati
http://pastebin.com/pddEQWq2
#Use in .bashrc
#Use "HashKnownHosts no" in ~/.ssh/config or /etc/ssh/ssh_config
# If known_hosts is already hashed, delete it so new entries are stored unhashed
[ ! -d ~/bin ] && mkdir ~/bin
for host in `cut -d, -f1 ~/.ssh/known_hosts | cut -f1 -d " "`;
do
    [ ! -s ~/bin/$host ] && echo ssh $host '$*' > ~/bin/$host
done
[ -d ~/bin ] && chmod -R 700 ~/bin
export PATH=$PATH:~/bin
Example usage:
$ for i in hostname{1..10}; do $i who; done
There is a tool called FLATT (FLexible Automation and Troubleshooting Tool) that allows you to execute scripts on multiple Unix/Linux hosts with a click of a button. It is a desktop GUI app that runs on Mac and Windows but there is also a command line java client.
You can create batch jobs and reuse on multiple hosts.
Requires Java 1.6 or higher.
Although it's a complex topic, I can highly recommend Capistrano.
I'm not sure if this method will work for everything that you want, but you can try something like this:
$ cat your_script.sh | ssh your_host bash
Which will run the script (which resides locally) on the remote server.
I just read a blog post about using setsid, which needs no further installation or configuration beyond a mainstream kernel. Tested and verified under Ubuntu 14.04.
The author has a very clear explanation and sample code as well; here's the magic part for a quick glance:
#----------------------------------------------------------------------
# Create a temp script to echo the SSH password, used by SSH_ASKPASS
#----------------------------------------------------------------------
SSH_ASKPASS_SCRIPT=/tmp/ssh-askpass-script
cat > ${SSH_ASKPASS_SCRIPT} <<EOL
#!/bin/bash
echo "${PASS}"
EOL
chmod u+x ${SSH_ASKPASS_SCRIPT}
# Tell SSH to read in the output of the provided script as the password.
# We still have to use setsid to eliminate access to a terminal and thus avoid
# it ignoring this and asking for a password.
export SSH_ASKPASS=${SSH_ASKPASS_SCRIPT}
......
......
# Log in to the remote server and run the above command.
# The use of setsid is a part of the machinations to stop ssh
# prompting for a password.
setsid ssh ${SSH_OPTIONS} ${USER}@${SERVER} "ls -rlt"
The easiest way I found without installing or configuring much software is to use plain old tmux. Say you have 9 Linux servers. Pick one box as your main one and start a tmux session:
tmux
Then create 9 split tmux panes by doing this 8 times:
ctrl-b + %
Now SSH into each box in each pane. You'll need to know some tmux shortcuts. To navigate, press:
ctrl+b <arrow-keys>
Once you're logged in to all your boxes, one per pane, turn on pane synchronization, which lets you type the same thing into every box:
ctrl+b :setw synchronize-panes on
Now when you press any keys, they will show up in every pane. To turn it off, just change on to off. To cycle through pane layouts, press ctrl+b <space-bar>.
This works a lot better for me, since I need to see each terminal's output: sometimes servers crash or hang for whatever reason while downloading or upgrading software, and if there are any issues you can just isolate and resolve them individually.
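The manual steps above can also be scripted from outside tmux; a rough sketch (the host names are illustrative) is:
# open one pane per host, tile them, and mirror keystrokes to all of them
tmux new-session -d -s multi 'ssh web1'
for host in web2 web3 web4; do
    tmux split-window -t multi "ssh $host"
    tmux select-layout -t multi tiled
done
tmux set-window-option -t multi synchronize-panes on
tmux attach -t multi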
