Why does this command not take input from a file in spite of redirection? - bash

On executing, the Cisco AnyConnect VPN client takes the VPN IP, password, and some other inputs from the terminal. However, instead of typing them every time, I wrote the values into a file and tried to redirect the file into the vpn client command.
/opt/cisco/anyconnect/bin/vpn < vpndetails.txt
However, it seems that the command ignores the file redirection and still prompts for input. How is that possible? Does the code read from some file descriptor other than 0, and therefore still read from the terminal? Is that possible?
Note: I know it isn't a good practice to store your passwords in a file, but I don't care for now.

The question "Is it possible?" has the answer "yes".
The code for the AnyConnect vpn probably reads /dev/tty directly, as explained in the comments by chepner and others. As a fun exercise, try this script:
#! /bin/sh
read -p "STDIN> " a
read -p "TERMINAL> " b < /dev/tty
read -p "STDIN> " c
echo "Read $a and $c from stdio and $b from the terminal"
and, for example, ls / | bash this_script.sh.
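On a typical system the first two entries of ls / are bin and boot, so a run looks roughly like this (the STDIN> prompts don't appear, because read -p only prints its prompt when input comes from a terminal):
$ ls / | bash this_script.sh
TERMINAL> hello
Read bin and boot from stdin and hello from the terminal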
However, if you wish to use Cisco AnyConnect without passwords, you should investigate the Always On with Trusted Network Detection feature and user certificates.
Writing to /dev/tty in the hope that it will be picked up by the script does not work:
ljm@verlaine[tmp]$ ls | bash test.sh &
[3] 10558
ljm@verlaine[tmp]$ echo 'plop' > /dev/tty
plop
[3]+ Stopped ls | bash test.sh
ljm@verlaine[tmp]$ fg
ls | bash test.sh
(a loose enter is given)
Read a_file and b_file from stdin and from the terminal
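If you really need to drive a /dev/tty-reading program non-interactively, the usual trick is to give it a fresh pseudo-terminal so that your piped input becomes its terminal input; script(1) can do this. A minimal sketch, assuming util-linux syntax and that $vpn_ip and $password (hypothetical variables) arrive in the order the client prompts for them:
printf '%s\n' "$vpn_ip" "$password" | script -qc '/opt/cisco/anyconnect/bin/vpn' /dev/null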

Related

Can't get bash script to answer prompt with expect

My ssh access is restricted to a Google Authenticator verification code prompt. I'd like to have a script that programmatically answers that prompt.
Context:
The variable ($1) passes correctly to the script - it's the verification code.
The sshfs command works in terminal.
The prompt Verification code: comes with a space and a key symbol at the end.
[EDIT] Just to make sure we don't switch to security discussions here: please note that of course I also use SSH keys, in addition to this Google Authenticator. As the Authenticator verification code expires every x seconds, it does not matter that others could intercept it.
Result:
The disk mounts (I can see it with df -h), but it is empty... Kind of the same behavior as when the verification code is wrong. Or maybe it doesn't have time to execute?
Shell script:
#!/bin/bash
expect_sh=$(expect -c "
spawn /usr/local/bin/sshfs username@123.123.1.123:/path/to/folder/RAID1 /Users/username/Desktop/RAID1 -o defer_permissions -o volname=RAID1
expect \"Verification code:\"
send \"$1\r\";
")
echo "$expect_sh"
Thanks
I'm afraid I have to answer no.
There are some issues:
Having the password as an argument could reveal your password to other users with a simple
ps axw
Having the password stored in a variable could reveal your password to other users with a simple
ps axeww
Having the password transmitted via STDIN would be easy to trace.
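To see the first issue concretely, run something with a long-running argument and inspect the process table from any other session (sleep stands in here for any command; imagine its argument were a password):
$ sleep 300 &
[1] 12345
$ ps axww | grep '[s]leep'
12345 pts/0    S      0:00 sleep 300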
For these and a lot of other reasons, ssh (and sftp) refuse to transmit secrets via arguments, variables, or STDIO.
Before asking for the password, ssh does a lot of verification, then uses a secured dialog (talking directly to the TTY, or via a dialog box on the X DISPLAY).
So using expect, or passing the secret as an argument, is not directly possible with ssh.
But.
You could instead connect to the ssh server using a key pair:
ssh-keygen -b 4096
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:q/2fX/Hello/World/Lorem/Ipsum/Dolor/Sit/Amet user@localhost
The key's randomart image is:
+---[RSA 4096]----+
| .=o=E.o|
| .. o= o |
| o+ +=... |
| .o+ o+o. |
| . +.oS.oo |
| . *.= . ... |
| o =. oo. |
| ... +o. |
| .ooo oooo.|
+----[SHA256]-----+
Now you have to send your /home/user/.ssh/id_rsa.pub to be stored in the authorized_keys file on the server you are trying to connect to (this file is usually located in $HOME/.ssh/, but could be elsewhere, depending on sshd_config on the server).
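For example (ssh-copy-id automates the append where it's available; otherwise a manual append over ssh works):
$ ssh-copy-id user@server
# or, by hand:
$ cat ~/.ssh/id_rsa.pub | ssh user@server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'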
The reason this isn't working is that you are running expect in a subshell, inside the command substitution.
This would be a regular, harmless useless use of echo if it weren't for the fact that you hope and expect the process to remain alive.
Just take out the variable capture and run expect as a direct descendant of your current script. If you really require the output to be available in a variable when it's done, try something like
#!/bin/bash
t=$(mktemp -t gauthssh.XXXXXXXXXX) || exit
trap 'rm -f "$t"' EXIT ERROR INT HUP TERM # clean up temp file when done
expect -c "
spawn /usr/local/bin/sshfs username@123.123.1.123:/path/to/folder/RAID1 /Users/username/Desktop/RAID1 -o defer_permissions -o volname=RAID1
expect \"Verification code:\"
send \"$1\r\";
" | tee "$t"
expect_sh=$(<"$t")
You can construct a solution with screen(1)
I tested the script below. It's not especially robust, and
you'll need to make some changes according to your environment.
#!/bin/sh
# Start the remote command on a detached screen, redirecting its output to a local file.
screen -d -m -S sshtest sh -c "ssh -l postgres localhost id > output"
pass="77c94046"
while true; do
    screen -S sshtest -X hardcopy        # grab a snapshot of the screen
    grep -q 'password:' hardcopy.0 && break
    sleep 1
done
grep -v '^$' hardcopy.0
echo -n "$pass" | xxd                    # debug: show exactly what will be stuffed
screen -S sshtest -X stuff "$pass"
screen -S sshtest -X stuff "$(printf '\r')"
sleep 1
cat output
The idea is to set up a screen running your command that redirects
its output to a local file. Then you take screen grabs in a loop and
look for your expected prompt with grep. Once you find it, use the
'stuff' command in screen to push your password into the terminal
input (i.e. screen's pty). Then you wait a bit and collect your
output if needed. This is just proof-of-concept code; a robust
solution would do more error checking and cleanup, and would wait for
the screen to actually exit.

How dangerous is it to echo passwords via a pipe to passwd?

I've seen in more than one discussion that using echo to pipe a password to passwd is dangerous, because one can get the arguments provided to echo from the process list.
I'm currently working on a tool that changes passwords remotely via ssh. I use the echo method because I do not have root access to use usermod or chpasswd.
cmd:
echo -e 'old\nnew\nnew' | passwd
To check how easy it is, I tried it myself. But I could not capture the passwords.
This was my method:
#!/bin/bash
filename=$1
while true
do
    echo "$(pgrep -af passwd)" >> "$filename"
    sleep 0.1
done
A few times I changed the password via echo, but I could not see anything. I think the sleep 0.1 may be the problem.
How easy is it to get the password from the process list, and therefore how insecure is it to use it this way?
It depends on whether the shell you're using has echo as a builtin, or uses an external binary (/bin/echo). If it's an external binary, it'll be run as a subprocess with the password plainly visible in its argument list (via ps, pgrep, etc.). If it's a builtin, an echo command in a pipeline will still run as a subprocess, but that subprocess is a subshell with the same argument list as the parent shell (i.e. it's safe).
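You can check which echo a given shell resolves to before relying on this; in bash (the external binary's path varies by system):
$ type -a echo
echo is a shell builtin
echo is /usr/bin/echo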
So it's probably safe, but there are several complications to worry about. First, if you're running this on a remote computer via ssh, you don't necessarily know what its default shell is. If it's bash, you're ok. If it's dash, I think you have a problem. Second, you don't just have to worry about the remote shell and/or echo command, you have to worry about every step along the path from your local script to that remote echo command. For example, if you use:
ssh user@computer "echo -e 'old\nnew\nnew' | passwd"
...then the remote echo is probably safe (depending on the remote shell), but as @thatotherguy pointed out, the password will be visible both in the remote shell's argument list (bash -c echo -e 'foo\nbar\nbar' | passwd) and in the local ssh process's argument list.
There's another complication, BTW: echo isn't consistent about how it handles options (like -e) and escapes in the output string (see this question for an example). You're much better off using printf (e.g. printf '%s\n' "$oldpass" "$newpass" "$newpass"), which is also a builtin in bash. Or, if you're sure you're using bash, you can use a here-string (<<<string) instead. It doesn't interpret escapes at all, but you can use bash's $'...' construct to interpret them:
passwd <<<"$old"$'\n'"$new"$'\n'"$new"
This doesn't involve a subprocess at all (either /bin/echo or a subshell), so no worries about the process table.
Another possibility to avoid both the problems of uncertain remote shells and the password showing up in ssh's argument list is to pass the passwords via ssh's stdin:
ssh user#computer passwd <<<"$old"$'\n'"$new"$'\n'"$new"
If you were to invoke your echo|passwd command via ssh, e.g.:
client$ ssh example.com "echo -e 'foo\nbar\nbar' | passwd"
then the process you need to look at is not passwd but the user's login shell invoked by sshd on the remote side:
server$ until pgrep -af bash | grep passwd; do true; done
8387 bash -c echo -e 'foo\nbar\nbar' | passwd

Authenticating with user/password *once* for multiple commands? (session multiplexing)

I got this trick from Solaris documentation, for copying ssh public keys to remote hosts.
note: ssh-copy-id isn't available on Solaris
$ cat some_data_file | ssh user@host "cat >/tmp/some_data_file; some_shell_cmd"
I wanted to adapt it to do more involved things.
Specifically, I wanted some_shell_cmd to be a script sent from the local host to execute on the remote host... a script that would interact with the local keyboard (e.g. prompt the user while the script was running on the remote host).
I experimented with ways of sending multiple things over stdin from multiple sources. But certain things that work in the local shell don't work over ssh, and some things, such as the following, didn't do what I wanted at all:
$ echo "abc" | cat <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat < <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat <<-EOF
> echo $(</dev/stdin) #echoes: echo abc (I wanted: abc)
> EOF
# messed with eval for the above but that was a problem too.
@chepner concluded it's not feasible to do all of that in a single ssh command. He suggested a theoretical alternative that didn't work as hoped, but I got it working after some research and tweaking, documented the results, and posted them as an answer to this question.
Without that solution, having to run multiple ssh and scp commands by default entails being prompted for a password multiple times, which is a major drag.
I can't expect all users of a script I write in a multi-user environment to configure public key authorization, nor expect them to put up with entering a password over and over.
OpenSSH Session Multiplexing
    This solution works even when using earlier versions of OpenSSH where the
    ControlPersist option isn't available. (A working bash example is at the end of this answer.)
Note: OpenSSH 3.9 introduced Session Multiplexing over a "control master connection" (in 2005); however, the ControlPersist option wasn't introduced until OpenSSH 5.6 (released in 2010).
ssh session multiplexing allows a script to authenticate once and do multiple ssh transactions over that authenticated connection. For example, if you have a script that runs several distinct tasks using ssh, scp, or sftp, each transaction can be carried out over an OpenSSH 'control master' session, referred to by the filesystem path of its named socket.
The following authenticate-once approach is useful when a script has to perform multiple ssh operations and you want to avoid making users authenticate with a password more than once. It is especially useful in cases where public key authentication isn't viable - e.g. not permitted, or at least not configured.
Most solutions I've seen entail using ControlPersist to tell ssh to keep the control master connection open, either indefinitely, or for some specific number of seconds.
Unfortunately, systems with OpenSSH prior to 5.6 don't have that option (and upgrading them might not be feasible), and there doesn't seem to be much documentation or discussion of that limitation online.
Reading through old release docs, I discovered that ControlPersist arrived late to the ssh session multiplexing scene, implying there may have been a way to configure session multiplexing without relying on the ControlPersist option before it existed.
Initially, trying to configure persistent sessions with command-line options rather than config parameters, I ran into a problem: either the ssh session terminated prematurely, closing the control connection's client sessions with it, or the connection was held open (keeping the ssh control master alive) but terminal I/O was blocked and the script would hang.
The following clarifies how to accomplish it.
OpenSSH option          ssh flag    Purpose
----------------------  ----------  --------------------------------------------
-o ControlMaster=yes    -M          Establishes sharable connection
-o ControlPath=path     -S path     Specifies path of connection's named socket
-o ControlPersist=600               Keep sharable connection open 10 min.
-o ControlPersist=yes               Keep sharable connection open indefinitely
                        -N          Don't create shell or run a command
                        -f          Go into background after authenticating
                        -O exit     Closes persistent connection

ControlPersist form      Equivalent         Purpose
-----------------------  -----------------  -----------------------------------------
-o ControlPersist=yes    ssh -Nf            Keep control connection open indefinitely
-o ControlPersist=300    ssh -f sleep 300   Keep control connection open 5 min.
Note: scp and sftp implement the -S flag differently, and don't implement the -M flag at all, so for those commands the -o option form is always required.
Sketchy Overview of Operations:
Note: This incomplete example doesn't execute as shown.
ctl=<path to dir to store named socket>
ssh -fNMS $ctl user@host    # open control master connection
ssh -S $ctl …               # example of ssh over connection
scp -o ControlPath=$ctl …   # example of scp over connection
sftp -o ControlPath=$ctl …  # example of sftp over connection
ssh -S $ctl -O exit         # close control master connection
Session Multiplexing Demo
(Try it. You'll like it. Working example - authenticates only once):
Running this script will probably help you understand it quicker than reading it, and it is fascinating.
Note: If you lack access to a remote host, just enter localhost at the "Host...?" prompt if you want to try this demo script.
#!/bin/bash
# This script demonstrates ssh session multiplexing
trap '[ -z "$ctl" ] || ssh -S "$ctl" -O exit "$user@$host"' EXIT # closes conn, removes socket
read -p "Host to connect to? " host
read -p "User to login with? " user
BOLD="\n$(tput bold)"; NORMAL="$(tput sgr0)"
echo -e "${BOLD}Create authenticated persistent control master connection:${NORMAL}"
sshfifos=~/.ssh/controlmasters
[ -d $sshfifos ] || mkdir -p $sshfifos; chmod 755 $sshfifos
ctl=$sshfifos/$user@$host:22 # ssh stores the connection's named socket here
ssh -fNMS $ctl $user@$host # control master: prompts for passwd, then persists in background
lcldir=$(mktemp -d /tmp/XXXX)
echo -e "\nLocal dir: $lcldir"
rmtdir=$(ssh -S $ctl $user@$host "mktemp -d /tmp/XXXX")
echo "Remote dir: $rmtdir"
echo -e "${BOLD}Copy self to remote with scp:${NORMAL}"
scp -o ControlPath=$ctl ${BASH_SOURCE[0]} $user@$host:$rmtdir
echo -e "${BOLD}Display 4 lines of remote script, with ssh:${NORMAL}"
echo "====================================================================="
echo $rmtdir | ssh -S $ctl $user@$host "dir=$(</dev/stdin); head -4 \$dir/*"
echo "====================================================================="
echo -e "${BOLD}Do some pointless things with sftp:${NORMAL}"
sftp -o ControlPath=$ctl $user@$host:$rmtdir <<EOF
pwd
ls
lcd $lcldir
get *
quit
EOF
Using a master control socket, you can use multiple processes without having to authenticate more than once. This is just a simple example; see man ssh_config under ControlPath for advice on using a more secure socket.
It's not quite clear what you mean by sourcing somecommand locally; I'm going to assume it is a local script that you want copied over to the remote host. The simplest thing to do is just copy it over and run it.
# Copy the first file, and tell ssh to keep the connection open
# in the background after scp completes
$ scp -o ControlMaster=yes -o ControlPersist=yes -o ControlPath=%C somefile user@host:/tmp/somefile
# Copy the script on the same connection
$ scp -o ControlPath=%C somecommand user@host:
# Run the script on the same connection
$ ssh -o ControlPath=%C user@host somecommand
# Close the connection
$ ssh -o ControlPath=%C -O exit user@host
Of course, the user could use public key authentication to avoid entering their credentials at all, but ssh would still go through the authentication process each time. Here, the authentication process is only done once, by the command using ControlMaster=yes. The other two processes reuse that connection. The last command, with -O exit, doesn't actually connect; it just tells the local connection to close itself.
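As an aside (not part of the original answer): if you want to confirm the master connection is still alive before reusing it, OpenSSH also supports -O check:
$ ssh -o ControlPath=%C -O check user@host
Master running (pid=12345)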
$ echo "abc" | cat <(echo "def")
The expression <(echo "def") expands to a file name, typically something like /dev/fd/63, that names a (virtual) file containing the text "def". So let's simplify it a bit:
$ echo "def" > def.txt
$ echo "abc" | cat def.txt
This will also print just def.
The pipe does feed the line abc to the standard input of the cat command. But because cat is given a file name on its command line, it doesn't read from its standard input. The abc is just quietly ignored, and the cat command prints the contents of the named file -- which is exactly what you told it to do.
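A related way to see the mechanics: cat treats the operand - as standard input, so you can splice the pipe back in explicitly at whatever position you like:
$ echo "abc" | cat def.txt -
def
abc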
The problem with echo abc | cat <(echo def) is that the <() wins the "providing the input" race. Luckily, bash will allow you to supply many inputs using multiple <() constructs. So the trick is: how do you get the output of your echo abc into a <()?
How about:
$ echo abc | cat <(echo def) <(cat)
def
abc
If you need to handle the input from the pipe first, just switch the order:
$ echo abc | cat <(cat) <(echo def)
abc
def

A script command asks for username. Can I have it stored inside the script?

I have this command inside a script.
sudo openvpn --config ....
When it is executed it asks for a username, and then for a password.
Is it possible to store the username inside the script?
In other words, to avoid typing it each time this script is executed?
(I am using Linux Ubuntu)
Use the configuration directive
auth-user-pass filename
...where filename is a file with the username on the first line and the password on the second. (If you don't want the password to ever touch disk, this file can be a FIFO or process substitution through which your script passes the user's input.)
For instance:
#!/bin/bash
# ^- IMPORTANT: use bash, not /bin/sh
# clearing IFS and using read -r makes trailing whitespace, literal backslashes, etc. work.
username="hardcoded value"
IFS= read -r -p "Password: " password
openvpn \
--config file.ovpn \
--auth-user-pass <(printf '%s\n' "$username" "$password")
The use of printf -- a builtin -- is important here: when calling only builtins, arguments aren't placed on any argv (and thus aren't made accessible to anything on the system inspecting the process list).
Alternately, you can use the management-query-passwords directive [or the --management-query-passwords command-line option] to allow username and password to be requested and entered via the management socket (this protocol has its own extensive documentation).
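A minimal sketch of that variant in the client config (the socket path here is an arbitrary example; the query protocol itself is described in OpenVPN's management-interface documentation):
# in file.ovpn:
management /run/openvpn/mgmt.sock unix
management-query-passwords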
I believe it is possible. You can use the pipe | to feed the username to the script's standard input, if the script reads it from there. I use a command in C++ which, if I remember right, changes the password for a user. It looks like:
sprintf(command, "echo -e \"%s\n%s\" | passwd %s", password, password, user);
So, since this is a shell, I would guess you could do something like:
echo -e '<username>\n<password>\n' | YourScript
In your case this may work:
echo -e '<username>\n<password>\n' | sudo openvpn --config...
Of course, this assumes that there are no other things it will ask for. This is also untested. Read more on piping here.
Edit:
As mentioned by Charles Duffy, the above will only work on an XSI-extended system and with programs that do not read directly from the TTY. I'm still not 100% certain, but I read that printf is cross-compatible, and script -c can be used to provide a TTY. Here is the information on the script command. However, trying it in CentOS 7, it looks like this would work:
printf '%s\n' "username" "password" | script -c "sudo openvpn --config..."
NOTE: I tried only piping to su and it worked after POSIXLY_CORRECT was set to 1.
Also, I think I may have misunderstood exactly what you wanted to do. If you wanted to store the username and password for the duration of the script you could do something like this:
printf "Input username:"
read username
printf "Input password:"
read -s password
printf '%s\n' "$username" "$password" | script -c "sudo openvpn --config..."

Using netcat/cat in a background shell script (How to avoid Stopped (tty input)? )

Abstract: How to run an interactive task in background?
Details: I am trying to run this simple script under ash shell (Busybox) as a background task.
myscript.sh&
However the script stops immediately...
[1]+ Stopped (tty input) myscript.sh
The myscript.sh contents... (only the relevant part; other than that, I trap SIGINT, SIGHUP, etc.)
#!/bin/sh
catpid=0
START_COPY()
{
cat /dev/charfile > /path/outfile &
catpid = $!
}
STOP_COPY()
{
kill catpid
}
netcat SOME_IP PORT | while read EVENT
do
case $EVENT in
start) START_COPY;;
stop) STOP_COPY;;
esac
done
From simple command line tests I found that both cat and netcat try to read from the tty.
Note that this netcat version does not have -e to suppress tty.
Now what can be done to avoid myscript becoming stopped?
Things I have tried so far without any success:
1) netcat/cat ... < /dev/tty (or the output of tty)
2) Running the block containing cat and netcat in a subshell using (). This may work, but then how do I grab the PID of cat?
Over to you experts...
The problem still exists.
A simple test for you all to try:
1) In one terminal run netcat -l -p 11111 (without &)
2) In another terminal run netcat localhost 11111 & (this should stop after a while with the message Stopped (tty input))
How to avoid this?
You probably want netcat's -d option, which tells it not to read from STDIN.
I can confirm that -d will help netcat run in the background.
I was seeing the same issue with:
nc -ulk 60001 | nc -lk 60002 &
Every time I queried the jobs, the pipe input would stop.
Changing the command to the following fixed it:
nc -ulkd 60001 | nc -lk 60002 &
Are you sure you've given your script as-is, or did you just type in a rough facsimile meant to illustrate the general idea? The script in your question has many errors which should prevent it from ever running correctly, which makes me wonder.
The spaces around the = in catpid = $! make the line not a valid variable assignment; the shell instead tries to run a command named catpid. If that was in your original script, I am surprised you were not getting any errors.
The kill catpid line should fail because the literal word catpid is not a valid job id. You probably want kill "$catpid".
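For reference, the corrected lines would be:
catpid=$!        # no spaces allowed around = in an assignment
kill "$catpid"   # expand (and quote) the variable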
As for your actual question:
cat should be reading from /dev/charfile and not from stdin or anywhere else. Are you sure it was attempting to read tty input?
Have you tried redirecting netcat's input like netcat < /dev/null if you don't need netcat to read anything?
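Applied to the loop in the question, that suggestion would look like this (an untested sketch; it assumes this netcat only ever writes):
netcat SOME_IP PORT < /dev/null | while read EVENT
do
    case $EVENT in
        start) START_COPY;;
        stop) STOP_COPY;;
    esac
done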
I have to use a netcat that doesn't have the -d option.
echo -n | netcat ... & seems to be an effective workaround: i.e., close netcat's standard input immediately if you don't need to use it.
As it was not yet really answered: if using Busybox and the -d option is not available, the following command will keep netcat "alive" when sent to the background:
tail -f /dev/null | netcat ...
netcat < /dev/null and echo -n | netcat did not work for me.
Combining screen and disowning the process works for me, as the '-d' option is no longer valid for my netcat. I tried redirecting like nc </dev/null, but the session ends prematurely (I need -q 1 to make sure the nc process stops when the file transfer is finished).
Set up the receiver side first.
On the receiver side, screen keeps stdin open for netcat so it won't be terminated.
EDIT: I was wrong; you need to enter the command INSIDE screen. You'll end up with no file saved, or weird binary output flowing in your terminal while attached to screen, if you redirect nc inline in the screen command. (Example of THE WRONG WAY: screen nc -l -p <listen port> -q 1 > /path/to/yourfile.bin)
Open screen, then press Return/Enter at the welcome message. A new blank shell will appear (you're inside screen now).
Type the command: nc -l -p 1234 > /path/to/yourfile.bin
Then press CTRL+a, then d, to detach from screen.
On the sender side, disown the process; it quits 1 s after reaching EOF:
cat /path/to/yourfile.bin | nc -q1 100.10.10.10 1234 & disown
