Telnet inside a shell script

How can I run telnet inside a shell script and execute commands on the remote server?
I do not have expect installed on my Solaris machine, for security reasons.
I also do not have the Perl Net::Telnet module installed.
So, without using expect or Perl, how can I do this?
I tried the following, but it's not working:
#!/usr/bin/sh
telnet 172.16.69.116 <<!
user
password
ls
exit
!
When I execute it, this is what I am getting:
> cat tel.sh
telnet 172.16.69.116 <<EOF
xxxxxx
xxxxxxxxx
ls
exit
EOF
> tel.sh
Trying 172.16.69.116...
Connected to 172.16.69.116.
Escape character is '^]'.
Connection to 172.16.69.116 closed by foreign host.
>

Some of your commands might be discarded. You can achieve finer control with ordinary script constructs, sending the required commands through a pipe with echo. Group the list of commands to make one "session":
{
sleep 5
echo user
sleep 3
echo password
sleep 3
echo ls
sleep 5
echo exit
} | telnet 172.16.65.209

I had the same issue... however, at least in my environment, it turned out that the SSL certificate on the destination server was corrupted in some way, and the server team took care of the issue.
Now what I'm trying to figure out is how to script exactly what you're doing above, except dumping the whole session to a file, and then, when it encounters a server that actually connects, sending the escape character (^]) and moving on to the next server.
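A hedged sketch of that loop, assuming a hosts.txt file with one hostname per line (the file name, credentials, and delays are all assumptions); the telnet escape character ^] is octal 035, which printf can emit:
while read -r host; do
  {
    sleep 5; echo user
    sleep 3; echo password
    sleep 3; echo ls
    sleep 2
    printf '\035\n'   # send ^] to drop to the telnet prompt
    echo quit         # leave telnet and move on to the next server
  } | telnet "$host" > "telnet_${host}.log" 2>&1
done < hosts.txt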

Related

Is it possible to use an if without executing its condition?

So I'm trying to make an if statement that tells me whether an sftp connection succeeded or failed, and if it succeeded, I want to run a piece of code I've already written that automates an sftp download.
My problem is that the if statement executes the sftp connection, which then prompts me for a password and stalls the rest of the code.
I wanted to do something like this:
if ( sftp -oPort=23 user@server )
then
expect <<-EOF
spawn sftp -oPort=23 user@server
.....
I want to know if it's possible to make the if statement not execute the sftp connection and not prompt me, maybe by executing it in the background or something.
I would appreciate it if someone could tell me whether what I'm asking is possible, or propose a better solution to what I'm trying to do. Thanks!
You cannot not-execute a command and then react to the return value of that command, because that is what you really want to do: check whether you can run sftp successfully, and if so, do a "proper" run. You'll never know whether it can run successfully without running it.
So the main question is what it is you actually want to test.
If you want to test whether you can do a full sftp connection (with all the handshaking and what not), you could try running sftp in batch-mode (which is handily non-interactive).
E.g. the following runs an sftp session, only to terminate it immediately with a bye command:
if echo bye | sftp -b - -oPort=23 user@server ; then
echo "sftp succeeded"
fi
This will only succeed if the entire sftp session works (that is: you pass any key checks; you can authenticate, ...).
If the server asks you for a password, it will fail to authenticate (being non-interactive), and you won't enter the then body.
If you only want to check whether something is listening on port 23, you can use netcat for this:
if netcat -z server 23; then
echo "port:32 is open"
fi
This will succeed whenever it can successfully connect to port 23 on the server. It doesn't care whether there's an sftp daemon running or (more likely) a telnet daemon.
You could also do a minimal test of whether the remote server looks like an SSH/SFTP server: SSH servers usually greet you with a string indicating that they indeed speak SSH, something like "SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.4".
With this information you can then run:
if echo QUIT | netcat server 23 | grep SSH; then
echo "found an ssh server"
fi

How can I start an ssh session with a script without redirecting stdin?

I have a series of bash commands, some with interactive prompts, that I need to run on a remote machine. I have to call them in a certain order for different scenarios, so I've been trying to make a bash script to automate the process. However, it seems like every way to start an ssh session from a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way to have a script executed on a remote machine while the local machine's stdin is still forwarded through ssh, so the user can interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script there will be a command that prompts for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information (by quitting vim or pressing return at a prompt), the script should continue to run as normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that also seems to take over stdin completely, making it impossible for the user to respond to prompts. I've tried adding the -t and -T options (-t forces pseudo-terminal allocation, while -T disables it), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
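With this version the script body is passed as an argument rather than read from stdin, so stdin stays free for the interactive session. A hedged usage example (the wrapper and script file names are assumptions):
./sshWrapper.sh "$(cat localScript.sh)"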
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, but never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX reported over 2 MB where I looked), and this got me thinking about how to use that to mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.
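For convenience, the first one-liner can be wrapped in a small function. This is a hedged sketch (the name run_remote is hypothetical, and it assumes the remote login shell is bash so that process substitution works):
run_remote() {
  local host=$1 script=$2
  shift 2
  # base64-encode the local script, decode it remotely via process
  # substitution, and pass the remaining arguments through
  ssh -t "$host" /bin/bash \
    "<(echo $(base64 < "$script" | tr -d '\n') | base64 --decode)" "$@"
}
# usage: run_remote user@host ./my_script arg1 arg2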

Optimistic way to test port before executing sftp

I have a bash script which is doing very plain sftp to transfer data to production and uat servers. See my code below.
if [ `ls -1 ${inputPath}|wc -l` -gt 0 ]; then
sh -x wipprod.sh >> ${sftpProdLog}
sh -x wipdev.sh >> ${sftpDevLog}
sh -x wipdevone.sh >> ${sftpDevoneLog}
fi
Sometimes the UAT server goes down. In those cases the number of hung scripts keeps growing, and if it reaches the user's maximum number of processes, the other scripts are affected as well. So I am thinking that before executing each of the above scripts I should test whether port 22 is reachable on the destination server, and only then execute the script.
Is this the right way? If yes, what is the optimal way to do that? If no, what is the best approach to avoid unnecessary sftp connections when the destination is not available? Thanks in advance.
Use sftp in batch mode together with the ConnectTimeout option explicitly set. sftp will take care of up/down problems by itself.
Note that ConnectTimeout should be set slightly higher if your network is slow.
Then put the sftp commands into your wip*.sh backup scripts.
If UAT host is up:
[localuser@localhost tmp]$ sftp -b - -o ConnectTimeout=1 remoteuser@this_host_is_up <<<"put testfile.xml /tmp/"; echo $?
sftp> put testfile.xml /tmp/
Uploading testfile.xml to /tmp/testfile.xml
0
File is uploaded, sftp exits with exit code 0.
If the UAT host is down, sftp exits within 1 second with exit code 255.
[localuser@localhost tmp]$ sftp -b - -o ConnectTimeout=1 remoteuser@this_host_is_down <<<"put testfile.xml /tmp/"; echo $?
ssh: connect to host this_host_is_down port 22: Connection timed out
Couldn't read packet: Connection reset by peer
255
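Wired into the setup above, each wip*.sh script might then look something like this hedged sketch (host, user, and paths are assumptions):
#!/bin/sh
# ${inputPath} is assumed to be exported by the calling script;
# ConnectTimeout makes sftp fail fast instead of hanging
sftp -b - -o ConnectTimeout=5 produser@prodhost <<EOF
put ${inputPath}/testfile.xml /tmp/
EOF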
It sounds reasonable: if the server is inaccessible, you want to report an error immediately rather than block.
The question is why the SFTP command blocks if the server is unavailable. If the server is down, I'd expect opening the port to fail almost immediately, so you need only detect that the SFTP copy has failed and abort early.
If you want to detect a closed port in bash, you can simply ask bash to connect to it directly, for example:
(echo "" > /dev/tcp/remote-host/22) 2>/dev/null || echo "failed"
This will open the port and immediately close it, and report a failure if the port is closed.
On the other hand, if the server is inaccessible because the port is blocked (in a firewall, or something, that drops all packets), then it makes sense for your process to hang and the base TCP test above will also hang.
Again, this is something that should probably be handled by your SFTP remote copy using a timeout parameter, as suggested in the comments, but a bash script to detect a blocked port is also doable and will probably look something like this:
(
  # try to open the port in the background
  (echo "" > /dev/tcp/remote-host/22) &
  pid=$!
  timeout=3
  # poll until the connection attempt finishes or the timeout expires
  while kill -0 $pid 2>/dev/null; do
    sleep 1
    timeout=$(( timeout - 1 ))
    [ "$timeout" -le 0 ] && kill $pid && exit 1
  done
) || echo "failed"
(I'm going to ignore the ls ...|wc business, other than to say something like find and xargs --no-run-if-empty are generally more robust if you have GNU find, or possibly AIX has an equivalent.)
You can perform a runtime connectivity check: OpenSSH comes with ssh-keyscan to quickly probe an SSH server port and dump the public key(s), but sadly it doesn't provide a usable exit code, which leaves parsing its output as a messy solution.
Instead you can do a basic check with a bash one-liner:
read -t 2 banner < /dev/tcp/127.0.0.1/22
where /dev/tcp/127.0.0.1/22 (or /dev/tcp/hostname/ssh) indicates the host and port to connect to.
This relies on the fact that the SSH server will return an identifying banner terminated with CRLF. Feel free to inspect $banner. If it times out, read is interrupted by SIGALRM (exit code 142); a refused connection results in exit code 1.
(Support for /dev/tcp and network redirection is enabled by default since before bash-2.05, though it can be disabled explicitly with --disable-net-redirections or with --enable-minimal-config at build time.)
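Wrapped up as a reusable test, a hedged sketch (the function name ssh_alive is hypothetical; 142 is the timeout status noted above):
ssh_alive() {
  local banner
  # connect and read the greeting with a 2-second timeout;
  # 2>/dev/null comes first so a failed connect stays quiet
  read -t 2 -r banner 2>/dev/null < "/dev/tcp/$1/${2:-22}" || return 1
  case $banner in
    SSH-*) return 0 ;;   # looks like an SSH/SFTP server
    *)     return 1 ;;
  esac
}
ssh_alive myserver1 22 && echo "myserver1 looks up"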
To prevent such problems, an alternative is to set a timeout: with any of the ssh, scp or sftp commands you can set a connection timeout with the option -o ConnectTimeout=15, or, implicitly via ~/.ssh/config:
Host 1.2.3.4 myserver1
ConnectTimeout 15
The commands will return non-zero on timeout (though the three commands may not all return the same exit code on timeout). See this related question: how to make SSH command execution to timeout
Finally, if you have GNU parallel you may use its sem command to limit concurrency to prevent this kind of problem, see https://unix.stackexchange.com/questions/168978/limit-maximum-number-of-concurrent-scp-processes-running-on-a-host .
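A hedged sketch of the sem approach (the concurrency limit of 4, the semaphore name, and the log file names are assumptions):
for s in wipprod.sh wipdev.sh wipdevone.sh; do
  # queue each transfer; at most 4 run at once
  sem -j 4 --id sftp-jobs "sh -x $s >> ${s%.sh}.log"
done
sem --wait --id sftp-jobs   # block until all queued jobs finish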

BASH: keep connection alive

I have the following scenario:
I use netcat to connect to a host running a telnet server on port 23, log in with the provided username and password, and issue some commands, after which I need to do fairly complex analysis of the output. Naturally, expect comes to mind, with a script like this:
spawn nc host 23
send "user\r"
send "pass\r"
send "command\r"
expect EOF
Then it is executed with expect example.scr >output.log, so the output file can be parsed. The parser is 150+ lines of bash code that executes in under 2 seconds and decides which command should be executed next. Thus, it replaces "command" with "command2" and executes the expect script again, like this:
sed -i 's/send "command\r"/send "command2\r"/' example.scr
expect example.scr >output.log
Obviously, there is no need to re-establish the telnet connection and go through the login process all over again just to issue a single telnet command after 2 seconds of processing. The conclusion is that the telnet session should be kept alive as a background process, so one can talk to it freely at any given time. Naturally, named pipes come to mind:
mkfifo in
mkfifo output.log
cat in | nc host 23 >output.log &
echo -e "user\npass\ncommand\n" >in
cat output.log
After the file is written, EOF causes the named pipe to close, terminating the telnet session. I was wondering what kind of eternal process could be piped into netcat so that it could serve as a telnet relay to the host. I came up with a very silly idea, but it works:
nc -k -l 666 | nc host 23 >output.log &
echo -e "user\npass\ncommand\n" | nc localhost 666
cat output.log
The netcat server is started with -k (keep listening), listening on port 666, and any data stream it receives is redirected to the netcat telnet client connected to the host, while the entire conversation is dumped to output.log. One can now echo telnet commands to nc localhost 666 and read the result from output.log.
One should keep in mind that the expect script can easily be modified to accommodate SSH and even serial console connections, just by spawning ssh or socat instead of netcat. I never liked expect because it forces the use of another scripting language within bash, requires Tcl libraries, and needs to be compiled for embedded platforms, while netcat is part of busybox and readily available everywhere.
So, the question is: could this be done in a simpler way? I'd put my bet on some sort of link between a console and a TCP socket. Any suggestions are appreciated.
How about using a file descriptor?
# open a bidirectional TCP connection on file descriptor 3
exec 3<>/dev/tcp/host/port
while true; do
  echo -e "user\npass\ncommand" >&3
  # placeholder: read the response from fd 3, write the next command back
  read_response_generate_next_command <&3 >&3
  # if no more commands, break
done
exec 3>&-   # close the connection
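A concrete, hedged instance of that loop (host, port, credentials, and the DONE/BYE markers are all assumptions standing in for real prompt patterns):
exec 3<>/dev/tcp/host/23        # open the connection once
printf 'user\r\npass\r\n' >&3   # log in
printf 'command\r\n' >&3        # first command
while IFS= read -r -t 5 line <&3; do
  echo "$line" >> output.log
  case $line in
    *DONE*) printf 'command2\r\n' >&3 ;;   # decide the next command
    *BYE*)  break ;;
  esac
done
exec 3>&-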

Auto SSH and execute script

I have roughly 12 computers that each have the same script on them. The script merely pings all the other machines and prints out whether each machine is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually using ssh just to execute this script.
Suppose I'm logged into node 1. Is there any way for me to log in to nodes 2-12 automatically using SSH, execute the ping script, pipe the results to a file, log out, and proceed to the next machine? Some kind of bash shell script?
I'm afraid I'm at a loss here, since I haven't had any experience with shell scripting before.
Since the script is on the other machines, you can just have ssh run the command for you there:
ssh $hostname my_script >> results_file
When you specify a command like that, it's executed instead of the login shell.
I'll leave it up to you to figure out how to loop over hostnames!
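For example, a hedged sketch of such a loop (the node naming scheme is an assumption):
for i in {2..12}; do
  ssh "node$i" my_script >> results_file
done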
One trick you'll need is setting up pre-authorized SSH keys for each host. Then you can run a script on one host that executes something like ssh hostname command > log.hostname against each of the others.
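Setting that up is a one-time step; a hedged sketch using the standard OpenSSH tools (hostnames are an assumption):
ssh-keygen -t ed25519    # accept the defaults once
for i in {2..12}; do
  ssh-copy-id "node$i"   # enter the password one last time per host
done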
This script might be what you are looking for: It allows you to execute one command (which can be your script) on multiple remote machines via ssh. It's a simple script with bash source available, so you should be able to customize it to your needs:
http://www.heinzi.at/projects/upgradebest.sh/
Yes, you can.
You actually need 2 small scripts, as follows:
remote_ssh.sh (which takes the name of the machine as its first argument; the rest of the arguments are the script you want to execute, with its own arguments)
Example: remote_ssh.sh node5 "echo hello world"
remote_ssh.sh is as follows:
#!/bin/bash
ALL_ARG=$@                      # all of the arguments
FST_ARG=$1                      # the target machine
REST_ARG=${ALL_ARG##$FST_ARG}   # everything after the machine name
echo "Executing REMOTE COMMAND ON $FST_ARG"
# $(pwd) passes the current directory along so the command runs there
/usr/bin/ssh $FST_ARG bash execute_ssh_command.sh $FST_ARG $(pwd) $REST_ARG
execute_ssh_command.sh is as follows:
#!/bin/bash
ALL_ARG=$@                      # all of the arguments
FST_ARG=$1                      # the machine name
DIR_ARG=$2                      # the directory to run in
REM_ARG="$1 $2"
REST_ARG=${ALL_ARG##$REM_ARG}   # the command itself
cd $DIR_ARG
$REST_ARG
Of course, you have to have these 2 scripts in your PATH on all your nodes (maybe in ~/bin/).
Hope that helps.
