ssh login without welcome banner - bash

I am using ssh from a program that sends commands over ssh and parses the answers. However, each time I log in, I get a welcome banner like:
Linux mymachine 3.2.0-4-686-pae #1 SMP Debian 3.2.54-2 i686
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
...
I do not want this banner, because my parser would have to deal with it. Is it possible to log in with ssh without getting this banner at the beginning?

You should be able to silence this banner, and other diagnostic messages, by passing -q to SSH:
ssh -q user@remote_host
If you want to make -q permanent for all your SSH sessions, do:
echo "LogLevel QUIET" >> ~/.ssh/config

What works here seems to depend on the operating system, SSH version, and the server-side configuration of sshd.
For connecting to a stock Ubuntu 18 server, ssh -q didn't work for me, and neither did the ssh -o LogLevel=error suggested elsewhere.
What did work is the comment posted under the question about creating a .hushlogin file in the remote user's home directory:
$ ssh myuser@myhost
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-55-generic x86_64)
<snip>
Last login: Thu Aug 1 14:04:26 2019 from 1.2.3.4
myuser@myhost$ touch .hushlogin
myuser@myhost$ exit
Then:
$ ssh myuser@myhost echo 'Test'
Test
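If you would rather not log in interactively first, the same file can presumably be created in one shot (an untested sketch; assumes the remote login shell honours ~/.hushlogin):
# Create ~/.hushlogin on the remote host without an interactive session
ssh myuser@myhost 'touch ~/.hushlogin'
Note that ~/.hushlogin suppresses the message of the day and last-login line, but not a Banner configured in the server's sshd_config, which is printed before authentication.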

This will run command1, command2 and command3 on remote_host:
ssh user@remote_host 'command1; command2; command3'
No banners are displayed.

Try ssh -q to suppress the banner message.

If you expect more than 1000 lines in the server answer, replace 1000 with a correspondingly larger number, or the server answer will be truncated.
# Demo script file creation
DIVIDER="___"; echo "echo $DIVIDER; echo 100; echo 200; echo 300;" > "./test.sh"

# Getting the answer without the banner
ssh -q login@server.name < "./test.sh" | grep -A1000 -e "^$DIVIDER" | tail -n +2
Success
100
200
300
The same command without
| grep -A1000 -e "^$DIVIDER" | tail -n +2
gives
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux...
[...]
Run 'do-release-upgrade' to upgrade to it.
___
100
200
300
You can replace "___" (three underscores) with any unusual marker string, or even a password, as long as it cannot appear at the beginning of any line of the banner.
To avoid having to replace 1000 with a larger number (and risking truncation of big server answers), look up "how to print all lines after a match" and adapt the command accordingly.
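For example, one way to print everything after the divider without any line limit is sed (a sketch of the same pipeline; login@server.name is the placeholder from above):
# Print every line after the divider, with no -A1000 limit
ssh -q login@server.name < "./test.sh" | sed -n "/^$DIVIDER\$/,\$p" | tail -n +2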

For running commands remotely:
#!/bin/bash
SCRIPT='
#Your commands
'
sshpass -p<pass> ssh -o 'StrictHostKeyChecking no' -p <port> user@host "$SCRIPT"

I am answering my own question with a solution based on Keith Reynolds' answer. I am using:
ssh my_host bash
allowing bash interaction without a banner and without a prompt.
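With that approach, one way to drive the remote bash from a script is a here-document (a minimal sketch; the marker lines and commands are illustrative only):
# Feed commands to the remote bash and read the answers back on stdout
ssh my_host bash <<'EOF'
echo MARKER_START
uname -r
echo MARKER_END
EOF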

Related

How to make ssh with -o StrictHostKeyChecking=no, running inside Docker in order to ssh to the host, work without exiting the script execution?

I have a script that runs inside the Docker container and performs some actions we need for internal debugging purposes:
set -eu
echo "Starting i/o test for host"
IP_HOST=$(ip a | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" | grep 172.17 | awk 'NR==1{print $1}')
echo "Detected IP of host is $IP_HOST"
sshpass -p tcuser ssh -o StrictHostKeyChecking=no docker@localhost -t -t
echo "Then"
The output here is exactly:
bash-5.0# sh /etc/cron.d/iotesthost.sh
Mon Mar 2 12:43:59 UTC 2020
Starting i/o test for host
Detected IP of host is 172.17.0.1
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
( '>')
/) TC (\ Core is distributed with ABSOLUTELY NO WARRANTY.
(/-_--_-\) www.tinycorelinux.net
and when the last line is reached, the script leaves me in the remote shell (or ends the crond execution), so I cannot go on to process the lines after sshpass/ssh and I never reach echo "Then".
That is why the script execution stops. How can I work around it while still accepting host keys automatically? (I need that, because each time the Docker container calls the script it is a fresh container.)
If I drop -t -t, I get the error described in https://stackoverflow.com/a/7122115/1759063
See https://askubuntu.com/questions/87449/how-to-disable-strict-host-key-checking-in-ssh
or
echo "StrictHostKeyChecking no" >> /etc/ssh_config
for a global solution. Please note that this removes the host key security check entirely.
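A per-user alternative, if you would rather not touch the system-wide file, might look like this (a sketch; the 172.17.* pattern is an assumption matching the Docker bridge addresses shown above):
# ~/.ssh/config -- accept host keys without prompting, only for the Docker bridge range
Host 172.17.*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null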

How to create a tunnel after ssh to remote machine using private key using bash script

I have a server that I log into from my local machine, and from there I need to create a tunnel.
I have a bash script which is not creating the tunnel:
sshpass -p ${1} ssh ${2}@${3}
ssh -L <port1>:<host>:80 -i /home/<user>/private_key <user_ID>@<host2>
The result I am getting:
sh ssh_to_box.sh pwd username remotehost
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-145-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
144 packages can be updated.
0 updates are security updates.
New release '18.04.2 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
*** System restart required ***
Last login: Thu May 30 09:36:36 2019 from remotehost
$
The script is not doing the tunneling. It should also end by running bash.
How do I ssh to the remote machine, create a tunnel from that remote machine, and keep the tunnel alive?
You are probably in fact creating the tunnel. You should check whether
ss -lptn | awk '{ print $4 }' | awk -F ':' '{ print $2 }' | sed -e '/<port1>/!d'
returns something while the ssh session is open; if it does, you have a tunnel.
(There may be a better way to do this check, but I don't know awk very well.)
If you want the tunnel to be persistent, you can run it inside tmux/screen or run nohup <tunnel_command> &.
It's important to note that the host name will also be DNS-resolved by the remote host.
Solved the issue - sshpass -p ${1} ssh -t ${2}@${3} 'ssh -L '${4}':'${5}':'${6}' -i /home/_key '${2}'@'${7}''
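Spelled out with plain quoting, the same idea might look like this (a sketch; the meaning of the positional parameters and the private key path are assumptions based on the snippets above):
# $1=password  $2=user  $3=jump host  $4=local port  $5=target host  $6=target port  $7=final host
sshpass -p "$1" ssh -t "$2@$3" "ssh -L $4:$5:$6 -i /home/$2/private_key $2@$7"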

Waiting for input from script that is running remotely via ssh

There is a script I'm running that I can not install on the remote machine.
clear && printf '\e[3J'
read -p "Please enter device: " pattern
read -p "Enter date: (YYYY-MM-DD): " date
pfix=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 5 | head -n 1)
mkdir /home/user/logCollectRes/"${pfix}"
ssh xxx.xxx.xxx.xxx 'bash -s' < /usr/local/bin/SearchAdvanced.sh ${pattern} ${date} ${pfix}
In that script, I would like to be able to use read.
ls -g *"${pattern}"*
read -p "Select one of these? [y/n] " "found";
I've tried adding -n to the read as well as the -t -t option on ssh. As you can see, the script presents information that is only visible once the script starts, so I can't use read on the local machine.
EDIT: So let's say server B stores syslogs for 5K computers. The file names are built from the internal IP of the device with the date appended:
/var/log/remotes/192.168.1.500201505050736.gz
/var/log/remotes/192.168.1.500201505050936.gz
/var/log/remotes/192.168.1.500201505051136.gz
/var/log/remotes/192.168.1.600201505050836.gz
/var/log/remotes/192.168.1.600201505051036.gz
/var/log/remotes/192.168.1.600201505051236.gz
I'd like to be able to select the IP address from the main script, list all the files matching that IP address, and then select which I want to scp to my local machine.
After speaking with some coworkers, I found the answer to be running two scripts: the first pulls the ls -g result and directs the answer to a variable on the local machine; I then print that output with a read prompt for selecting one of the files. The second script takes that answer and scps the file from the remote machine.
In the main script
ssh xxx.xxx.xxx.xxx 'bash -s' < /usr/local/bin/SearchAdvanced.sh ${pattern} ${date} > ${result}
Then, as a follow-up:
printf "${result}"
read -p "Select file: "

running multiple commands through ssh and storing the outputs in different files

I've set up my public and private keys and have automated ssh login. I want to execute two commands, say command1 and command2, in one login session and store their outputs in the files command1.txt and command2.txt on the local machine.
I'm using this code:
ssh -i my_key user@ip 'command1 command2' and the two commands get executed in one login, but I have no clue how to store their outputs in two different files.
I want to do this because I don't want to repeatedly ssh into my remote host.
Unless you can parse the actual outputs of the two commands and distinguish which is which, you can't. You will need two separate ssh sessions:
ssh -i my_key user@ip command1 > command1.txt
ssh -i my_key user@ip command2 > command2.txt
You could also redirect the outputs to files on the remote machine and then copy them to your local machine:
ssh -i my_key user@ip 'command1 > command1.txt; command2 > command2.txt'
scp -i my_key user@ip:'command*.txt' .
No, you will have to do it separately in separate commands (multiple logins), as already mentioned by @lanzz. To save the output locally, do something like:
ssh -i my_key user@ip "command1" > ./file_on_local_host.txt
If you want to run multiple commands in a single login, put all your commands in a script and then run that script through SSH, instead of running multiple commands.
It's possible, but probably more trouble than it's worth. If you can generate a unique string that is guaranteed not to be in the output of command1, you can do:
$ ssh remote 'cmd1; echo unique string; cmd2' |
awk '/^unique string$/ { output="cmd2"; next } { print > output }' output=cmd1
This simply starts printing to the file cmd1, and then changes output to the file cmd2 when it sees the unique string. You'll probably want to handle stderr as well. That's left as an exercise for the reader.
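If you do want stderr captured too, one possible extension of the same trick is to merge it into stdout on the remote side before splitting on the marker (a sketch; 'unique string' stays the placeholder):
# Merge each command's stderr into stdout remotely, then split locally as before
ssh remote 'cmd1 2>&1; echo unique string; cmd2 2>&1' |
  awk '/^unique string$/ { output="cmd2"; next } { print > output }' output=cmd1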
option 1. Tell your boss he's being silly. Unless, of course, he isn't and there is a critical reason for needing it all in one session. For some reason, such a case escapes my imagination.
option 2. Why not tar?
ssh -i my_key user@ip 'command1 > out1; command2 > out2; tar cf - out*' | tar xf -
You can do this. Assuming you can set up authentication from the remote machine back to the local machine, you can use ssh to pipe the output of the commands back. The trick is getting the backslashes right.
ssh remotehost command1 \| ssh localhost cat \\\> command1.txt \; command2 \| ssh localhost cat \\\> command2.txt
Or if you aren't so into backslashes...
ssh remotehost 'command1 | ssh localhost cat \> command1.txt ; command2 | ssh localhost cat \> command2.txt'
Join them using && so you can have it like this:
ssh -i my_key user@ip "command1 > command1.txt && command2 > command2.txt && command3 > command3.txt"
Hope this helps
I was able to; here's exactly what I did:
ssh root@your_host "netstat -an;hostname;uname -a"
This performs the commands in order and prints their output to my screen perfectly.
Make sure you start and finish with the quotation marks, else it'll run the first command remotely then run the remainder of the commands against your local machine.
I have an RSA key pair to my server, so if you want to avoid the credential check then obviously you have to make that pair.
I think this is what you need:
First, you need to install sshpass on your machine.
Then you can write your own script:
while read pass port user ip; do
sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1 > file1
.
.
.
COMMAND n > file2
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE
How to run multiple commands on a remote server using a single ssh connection:
[root@nismaster ~]# ssh 192.168.122.169 "uname -a;hostname"
root@192.168.122.169's password:
Linux nisclient2 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686 i686 i386 GNU/Linux
nisclient2
OR
[root@nismaster ~]# ssh 192.168.122.169 "uname -a && hostname"
root@192.168.122.169's password:
Linux nisclient2 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686 i686 i386 GNU/Linux
nisclient2

How can I execute a script from my local machine in a specific (but variable) directory on a remote host?

From a previous question, I have found that it is possible to run a local script on a remote host using:
ssh -T remotehost < localscript.sh
Now, I need to allow others to specify the directory in which the script will be run on the remote host.
I have tried commands such as
ssh -T remotehost "cd /path/to/dir" < localscript.sh
ssh -T remotehost:/path/to/dir < localscript.sh
and I have even tried adding DIR=$1; cd $DIR to the script and using
ssh -T remotehost < localscript.sh "/path/to/dir/"
Alas, none of these work. How am I supposed to do this?
echo 'cd /path/to/dir' | cat - localscript.sh | ssh -T remotehost
Note that if you're doing this for anything complex, it is very, very important that you think carefully about how you will handle errors in the remote system. It is very easy to write code that works just fine as long as the stars align. What is much harder - and often very necessary - is to write code that will provide useful debugging messages if stuff breaks for any reason.
Also, you may want to look at the venerable tool Expect (http://en.wikipedia.org/wiki/Expect). It is often used for scripting things on remote machines. (And yes, error handling is a long-term maintenance issue with it.)
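If the directory name can contain spaces or other special characters, a hedged variant of the first approach is to quote it with printf %q (a sketch; assumes bash on both the local and the remote end):
# Prepend a safely quoted cd to the local script before piping it over ssh
printf 'cd %q || exit 1\n' "/path/to/dir" | cat - localscript.sh | ssh -T remotehost bash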
Two more ways to change directory on the remote host (variably):
echo '#!/bin/bash
cd "$1" || exit 1
pwd -P
shift
printf "%s\n" "$#" | cat -n
exit
' > localscript.sh
ssh localhost 'bash -s "$@"' <localscript.sh '/tmp' 2 3 4 5
ssh localhost 'source /dev/stdin "$@"' <localscript.sh '/tmp' 2 3 4 5
# make sure it's the bash shell to source & execute the commands
#ssh -T localhost 'bash -c '\''source /dev/stdin "$@"'\''' _ <localscript.sh '/tmp' 2 3 4 5
