running multiple commands through ssh and storing the outputs in different files - bash

I've set up my public and private keys and have automated ssh login. I want to execute two commands, say command1 and command2, in one login session and store their outputs in the files command1.txt and command2.txt on the local machine.
I'm using this code:
ssh -i my_key user@ip 'command1; command2'
The two commands get executed in one login, but I have no clue how to store their outputs in two different files.
I want to do this because I don't want to repeatedly ssh into my remote host.

Unless you can parse the actual outputs of the two commands and distinguish which is which, you can't. You will need two separate ssh sessions:
ssh -i my_key user@ip command1 > command1.txt
ssh -i my_key user@ip command2 > command2.txt
You could also redirect the outputs to files on the remote machine and then copy them to your local machine:
ssh -i my_key user@ip 'command1 > command1.txt; command2 > command2.txt'
scp -i my_key user@ip:'command*.txt' .

No, you will have to do it separately with separate commands (multiple logins), as already mentioned by @lanzz. To save the output locally, do something like:
ssh -i my_key user@ip "command1" > ./file_on_local_host.txt
If you want to run multiple commands in a single login, put all your commands in a script and run that script through SSH instead of running the commands one by one, as sketched below.
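A minimal sketch of that script approach, assuming a local file commands.sh that holds the commands (the file name is illustrative):
# commands.sh lives on the local machine; bash on the remote host reads and executes it
ssh -i my_key user@ip 'bash -s' < commands.sh > all_output.txt
Note that the output of every command still arrives on a single stream, so this alone does not split the results into separate local files.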

It's possible, but probably more trouble than it's worth. If you can generate a unique string that is guaranteed not to be in the output of command1, you can do:
$ ssh remote 'cmd1; echo unique string; cmd2' |
awk '/^unique string$/ { output="cmd2"; next } { print > output }' output=cmd1
This simply starts printing to the file cmd1, and then changes output to the file cmd2 when it sees the unique string. You'll probably want to handle stderr as well. That's left as an exercise for the reader.
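One possible way to fold stderr in without opening a second connection, offered here only as a hedged sketch, is to merge each command's stderr into its stdout on the remote side and accept that the two streams end up interleaved in the same file:
ssh remote 'cmd1 2>&1; echo unique string; cmd2 2>&1' |
awk '/^unique string$/ { output="cmd2"; next } { print > output }' output=cmd1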

Option 1: tell your boss he's being silly. Unless, of course, he isn't and there is a critical reason for needing it all in one session; for some reason such a case escapes my imagination.
Option 2: why not tar?
ssh -i my_key user@ip 'command1 > out1; command2 > out2; tar cf - out*' | tar xf -

You can do this. Assuming you can set up authentication from the remote machine back to the local machine, you can use ssh to pipe the output of the commands back. The trick is getting the backslashes right.
ssh remotehost command1 \| ssh localhost cat \\\> command1.txt \; command2 \| ssh localhost cat \\\> command2.txt
Or if you aren't so into backslashes...
ssh remotehost 'command1 | ssh localhost cat \> command1.txt ; command2 | ssh localhost cat \> command2.txt'

Join them using && so you can have it like this:
ssh -i my_key user@ip "command1 > command1.txt && command2 > command2.txt && command3 > command3.txt"
Hope this helps.

I was able to; here's exactly what I did:
ssh root@your_host "netstat -an;hostname;uname -a"
This performs the commands in order and prints their output to my screen perfectly.
Make sure you start and finish with the quotation marks, or else it will run the first command remotely and then run the remainder of the commands against your local machine.
I have an RSA key pair for my server, so if you want to avoid the credential check you will obviously have to make such a pair too, as sketched below.
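A minimal sketch of setting up such a key pair (host name as in the example above):
# generate a key pair locally (accept the defaults or set a passphrase)
ssh-keygen -t rsa
# install the public key on the remote host so future logins skip the password prompt
ssh-copy-id root@your_host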

I think this is what you need:
First you need to install sshpass on your machine.
Then you can write your own script:
while read pass port user ip; do
sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1 > file1
.
.
.
COMMAND n > file2
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE

How to run multiple commands on a remote server using a single ssh connection:
[root@nismaster ~]# ssh 192.168.122.169 "uname -a;hostname"
root@192.168.122.169's password:
Linux nisclient2 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686 i686 i386 GNU/Linux
nisclient2
OR
[root#nismaster ~]# ssh 192.168.122.169 "uname -a && hostname"
root#192.168.122.169's password:
Linux nisclient2 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686 i686 i386 GNU/Linux
nisclient2

Related

Remote login (ssh differences)

I would like to know what the difference is between the commands below:
ssh vagrant@someipaddress
cd /home/vagrant/
grep -i "something" data.txt
and
ssh vagrant@someipaddress 'cd /home/vagrant; cat data.txt' | grep -i "something"
This website mentions that you can send multiple commands to the remote server. Is the second option actually logging into the server? What is the benefit of the second approach?
Strictly speaking, from the example provided:
The first command:
Logs onto the remote server
Executes a couple of commands, and
Stays logged on to the server
The second command runs half on the remote machine, logs out of the remote machine, and then pipes the output to grep on your local machine, all in one command line.
Breaking down what's happening:
ssh vagrant@someipaddress 'cd /home/vagrant; cat data.txt' | grep -i "something"
The grep -i "something" section runs on your local PC, operating on the output from the ssh session.
The 'single quotes' contain the entire command block that is sent to the remote host.
The "double quotes" contain the individual arguments within the command block.
You may have meant to do this:
ssh vagrant@someipaddress 'cd /home/vagrant; cat data.txt' | grep -i "something"
where the grep section runs locally.
Or you may have intentionally done this:
ssh vagrant@someipaddress 'cd /home/vagrant/; grep -i "something" data.txt'
where the entire command runs on the server.
Either way, the end result is that you automatically log out of the remote machine, and the whole command sequence is executed in one hit.
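A quick way to see for yourself where each part runs is to print the hostname on both sides (same host name as in the question; this is just a sketch):
# the first line printed is the remote machine's hostname, the second is your local machine's
ssh vagrant@someipaddress 'echo "remote: $(hostname)"'; echo "local: $(hostname)"
The single quotes stop the local shell from expanding $(hostname), so the first substitution happens on the server.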

How to copy echo 'x' to file during an ssh connection

I have a script which starts an ssh connection.
The variable $SSH starts the ssh connection, so $SSH hostname gives the hostname of the host I ssh to.
Now I try to echo something and copy the output of the echo to a file.
SSH="ssh -tt -i key.pem user@ec2-instance"
When I perform a manual ssh to the host and run:
sudo sh -c "echo 'DEVS=/dev/xvdb' >> /etc/sysconfig/docker-storage-setup"
it works.
But when I perform
${SSH} sudo sh -c "echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup"
it does not seem to work.
EDIT:
Also, using tee works fine after performing an ssh manually, but it does not seem to work after the ssh in script.sh.
The echo command after the ssh in the script is happening on my real host (the one I run the script from, not the host I ssh to), so the file on my real host is being changed rather than the file on the host I have ssh'd into.
The command passed to ssh will be executed by the remote shell, so you need to add one level of quoting:
${SSH} "sudo sh -c \"echo 'DEVS=/dev/xvdb' > /etc/sysconfig/docker-storage-setup\""
The only thing you really need on the server is the writing though, so if you don't have password prompts and such you can get rid of some of this nesting:
echo 'DEVS=/dev/xvdb' | $SSH 'sudo tee /etc/sysconfig/docker-storage-setup'
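A minimal sketch of the two quoting levels, using the same $SSH variable and an illustrative file /tmp/out:
# the double quotes are consumed by the local shell, so the remote sh -c receives only "echo";
# the redirection still happens remotely, but /tmp/out ends up containing just a blank line
${SSH} sudo sh -c "echo test > /tmp/out"
# the extra level of quoting keeps the whole string together, so the remote sh -c
# receives the full 'echo test > /tmp/out' and writes the file as intended
${SSH} 'sudo sh -c "echo test > /tmp/out"'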

Wait for a process to finish

Could you please give me a hint about the issue below?
I have to send a command to a host (the command needs a lot of time to execute and creates a file):
ssh uname1@host1 ssh uname2@host2 'command1'
After this command gets executed I need to zip the created file:
ssh uname1@host1 ssh uname2@host2 'gzip file1'
Then do the same thing for another host:
ssh uname3@host3 ssh uname4@host4 'command1'
ssh uname3@host3 ssh uname4@host4 'gzip file2'
Is it possible to run these two command sequences in parallel in order to save time on script execution?
Thank you in advance.
Try something like:
ssh uname2@host2 'command1 && gzip file1' &
ssh uname2@host3 'command1 && gzip file1' &
ssh uname2@host4 'command1 && gzip file1' &
You can put all the commands in a file on the host you start from.
&& in this context works like ;, but the second command is only executed if the first one succeeds; a sketch that also waits for all the parallel jobs follows below.
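If the calling script has to wait until all the remote jobs are done before continuing, a minimal sketch (hosts, users and file names taken from the lines above) would be:
#!/bin/bash
# launch the three remote jobs in parallel
ssh uname2@host2 'command1 && gzip file1' &
ssh uname2@host3 'command1 && gzip file1' &
ssh uname2@host4 'command1 && gzip file1' &
# block until every backgrounded ssh has exited
wait
echo "all remote jobs finished"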
Simply do:
ssh uname1@host1 ssh uname2@host2 'command1; gzip file1'
and if gzip should run only if the first command succeeds, then:
ssh uname1@host1 ssh uname2@host2 'command1 && gzip file1'
The second command will be launched after the first one.

ssh login without welcome banner

I am using ssh from a program which sends commands to ssh and parses answers. However, each time I log in, I get the welcome banner like:
Linux mymachine 3.2.0-4-686-pae #1 SMP Debian 3.2.54-2 i686
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
...
I do not want this banner, because my parser would need to deal with it. Is it possible to login with ssh and not to get this banner at the beginning?
You should be able to silence this banner, and other diagnostic messages, by passing -q to SSH:
ssh -q user#remote_host
If you want to make -q permanent for all your SSH sessions, do:
echo "LogLevel QUIET" >> ~/.ssh/config
What works here seems to depend on the operating system, SSH version, and the server-side configuration of sshd.
For connecting to a stock Ubuntu 18 server, ssh -q didn't work for me, and neither did ssh -o LogLevel=error, which is suggested elsewhere.
What did work is the comment posted under the question about creating a .hushlogin file in the remote user's home directory:
$ ssh myuser@myhost
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-55-generic x86_64)
<snip>
Last login: Thu Aug 1 14:04:26 2019 from 1.2.3.4
myuser@myhost$ touch .hushlogin
myuser@myhost$ exit
Then:
$ ssh myuser@myhost echo 'Test'
Test
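If you would rather not log in interactively just to create that file, the same thing can be done in one shot (a sketch using the same host):
ssh myuser@myhost 'touch ~/.hushlogin'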
This will run command1, command2 and command3 on the remote_host.
ssh user@remote_host 'command1; command2; command3'
No banners are displayed.
Try ssh -q to suppress the banner message.
If you expect more than 1000 lines in the server answer, replace 1000 with a correspondingly larger number, or the server answer will be truncated.
# Demo script file creation \
DIVIDER="___"; echo "echo $DIVIDER; echo 100; echo 200; echo 300;" > "./test.sh"; \
# \
# Getting the answer without the banner \
ssh -q login@server.name < "./test.sh" | grep -A1000 -e "^$DIVIDER" | tail -n +2
Success
100
200
300
The same command without
| grep -A1000 -e "^$DIVIDER" | tail -n +2
gives
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux...
[...]
Run 'do-release-upgrade' to upgrade to it.
___
100
200
300
You can replace "___" (three underscores) with any exotic sign(s) or even password (which can't be found in the beginnings of lines of the banner).
To avoid the replacing 1000 with a corresponding number (and possible truncation of big server answers) search something about "how to grep all lines after match" and modify my code.
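One such variant, offered as a sketch that assumes the same three-underscore divider, uses sed to print everything from the divider line to the end of the stream, with no line limit:
ssh -q login@server.name < "./test.sh" | sed -n '/^___$/,$p' | tail -n +2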
For running commands remotely:
#!/bin/bash
SCRIPT='
#Your commands
'
sshpass -p<pass> ssh -o 'StrictHostKeyChecking no' -p <port> user@host "$SCRIPT"
I am answering my own question with a solution based on Keith Reynolds' answer. I am using:
ssh my_host bash
allowing bash interaction without a banner and without a prompt, as in the example below.
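A small usage sketch of that approach (host name illustrative): the commands are piped to the remote bash on stdin, and only their output comes back, which is easy to parse.
echo 'hostname; uname -r' | ssh my_host bash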

How can I execute a script from my local machine in a specific (but variable) directory on a remote host?

From a previous question, I have found that it is possible to run a local script on a remote host using:
ssh -T remotehost < localscript.sh
Now, I need to allow others to specify the directory in which the script will be run on the remote host.
I have tried commands such as
ssh -T remotehost "cd /path/to/dir" < localscript.sh
ssh -T remotehost:/path/to/dir < localscript.sh
and I have even tried adding DIR=$1; cd $DIR to the script and using
ssh -T remotehost < localscript.sh "/path/to/dir/"
Alas, none of these work. How am I supposed to do this?
echo 'cd /path/to/dir' | cat - localscript.sh | ssh -T remotehost
Note that if you're doing this for anything complex, it is very, very important that you think carefully about how you will handle errors in the remote system. It is very easy to write code that works just fine as long as the stars align. What is much harder - and often very necessary - is to write code that will provide useful debugging messages if stuff breaks for any reason.
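A hedged sketch of one basic safeguard along those lines, reusing the one-liner above: make the cd abort the remote run if the directory is missing, and report the remote exit status locally.
# prepend the cd (failing fast if the directory doesn't exist), then run the script remotely
echo 'cd /path/to/dir || exit 1' | cat - localscript.sh | ssh -T remotehost
status=$?
if [ $status -ne 0 ]; then
    echo "remote script failed with exit status $status" >&2
fi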
Also you may want to look at the venerable tool http://en.wikipedia.org/wiki/Expect. It is often used for scripting things on remote machines. (And yes, error handling is a long term maintenance issue with it.)
Two more ways to change directory on the remote host (variably):
echo '#!/bin/bash
cd "$1" || exit 1
pwd -P
shift
printf "%s\n" "$#" | cat -n
exit
' > localscript.sh
ssh localhost 'bash -s "$@"' <localscript.sh '/tmp' 2 3 4 5
ssh localhost 'source /dev/stdin "$@"' <localscript.sh '/tmp' 2 3 4 5
# make sure it's the bash shell to source & execute the commands
#ssh -T localhost 'bash -c '\''source /dev/stdin "$@"'\''' _ <localscript.sh '/tmp' 2 3 4 5
