I want to run openssl and have it begin with the following commands sent to the server:
t authenticate <dynamically generated base64 string from calling script>
t select Inbox
Then from there, take input from stdin. I'm fairly ignorant of shell scripting and the openssl toolkit, and I certainly don't see how to do this simply with piping / redirecting stdin, unless perhaps I set up a file that simultaneously draws from stdin itself, or some such.
I'm not exactly sure how openssl reads its input. For example, the following:
$ echo "t login testacct#yahoo.com password" | openssl s_client -connect imap.mail.yahoo.com:993
Does not do the same thing as
openssl s_client -connect imap.mail.yahoo.com:993
# openssl dialogue opens...
C: t login testacct@yahoo.com password
S: t NO [AUTHENTICATIONFAILED] Incorrect username or password. (#YSH002)
I imagine openssl is opening a new shell session (I'm weak in my understanding here) and it does not pass its arguments from stdin to the inner shell it creates.
I'd recommend splitting the problem into two scripts:
First, you have one script that echoes the initial commands you want to send and then reads from stdin and writes to stdout. Like this (call it script1.sh, for instance):
#!/bin/bash
echo "first command"
echo "second command"
while read x
do
    echo "$x"
done
The second script then just bundles the arguments to openssl so you don't have to keep typing them (call it script2.sh, for instance; a sketch is shown below). Note that, as with script1.sh above, you should have #!/bin/bash on the first line to tell the OS that it's a bash script.
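For instance, script2.sh might be nothing more than this (a minimal sketch; the host and port are the ones from the question, and "$@" simply passes any extra options through):
#!/bin/bash
openssl s_client -connect imap.mail.yahoo.com:993 "$@"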
Then you can just type:
script1.sh | script2.sh
and you'll get the first two lines passed to openssl and then everything you type will get passed after that. If you want to always finish with a few commands you can add them after the while loop in script1.sh.
You terminate the whole thing with Ctrl-D.
If openssl echoes the input you type, then you will see each line you type shown twice (which is a bit irritating). In that case the -s argument to read will suppress the terminal's copy of the line (it is also useful for typing passwords).
Note that this solution is similar to the one suggested earlier with the temporary file and tail -f, but it avoids the need for a temporary file, and everything is done in a single line.
The problem with the command given in the question is that stdin of the openssl command is closed as soon as the echo "t login ..." command finishes, which generally causes programs to exit. With the solution given here, the pipe connects the stdout of the first script to the stdin of the second, and everything typed into read gets passed on to openssl.
None of these solutions returns control of stdin to the user. The following passes first command and second command to openssl and then reads stdin (the second cat keeps forwarding the terminal's input once the heredoc is exhausted):
{ cat <<EOF
first command
second command
EOF
cat
} | openssl ...
The basic SSL/TLS connection to an SSL-enabled IMAP server can be established through s_client:
openssl s_client -connect imapserver.example.com:143 -starttls imap
Note the trailing -starttls imap: openssl "knows" how to tell the IMAP server that it would like to move from the plain-text connection (as you would get with telnet) to the SSL-secured one.
After this, openssl's job is done, and you need to speak proper IMAP to the server, including authentication!
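For example, a hypothetical exchange might look like this (the tag, credentials and mailbox name are illustrative, not from the original question):
openssl s_client -connect imapserver.example.com:143 -starttls imap
# openssl dialogue opens...
C: a1 LOGIN alice secret
S: a1 OK LOGIN completed
C: a2 SELECT INBOX
S: a2 OK [READ-WRITE] SELECT completed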
You can change your script to write the commands to a file, and then use tee -a to append stdin to that same file. Let me show you an example:
jweyrich@pharao:~$ echo "command1" > cmds
jweyrich@pharao:~$ tee -a cmds > /dev/null
command2
command3
^C
In the meantime, I was running tail -f cmds in another tty:
jweyrich@pharao:~$ tail -f cmds
command1
command2
command3
This will turn that file into the single source you have to read and process.
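To wire this back into the original problem, you would then run something like the following in the other tty, so that openssl reads the growing file (a sketch using the host from the question):
tail -f cmds | openssl s_client -connect imap.mail.yahoo.com:993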
I'd like to add that you can use Nick's solution as a one-line script:
$ sh -c 'echo "first command"; echo "second command"; while read x; do echo "$x"; done' | whatever
Related
To avoid the XY-problem issue:
X problem: given an interactive TUI program, how can I wrap it into a non-interactive one (a batch script which accepts input in a specific format on stdin and writes output in a specific format to stdout) in a recommended way?
Y problem: I'm trying to use expect to wrap passphrase2pgp into a batch script, but I ran into trouble dealing with STDIN & STDOUT through pipes.
(PS: I would prefer advice for this kind of problem in general over a workaround for the specific example. Since it's not a complicated job, I think there must be some simple and direct way to achieve it, with no tricks or workarounds. But I'm not sure whether the implementation details of the wrapped programs make things very different; anyway, pointing this out -- if that is the fact -- will also be very helpful.)
Suppose I want to wrap some interactive program (e.g. passphrase2pgp, or the better-known ssh-keygen) into a non-interactive script named my-keygen, so that I can use it like
printf pwd | my-keygen | gpg --import
In this case, my-keygen is expected to receive password from piped STDIN, and output the private key to STDOUT, which will be piped to gpg --import.
But using passphrase2pgp directly, like printf pwd | passphrase2pgp --uid alice --repeat 0 | gpg --import, doesn't work at all, since it will still prompt for the passphrase interactively.
This brings me to expect; the following is my-keygen using expect:
(if you have nix installed, change "expect" in the shebang to "nix-shell" so you can run it directly without installing passphrase2pgp)
#!/usr/bin/env expect
#! nix-shell -p passphrase2pgp -p expect -i expect
spawn passphrase2pgp --uid alice --repeat 0 --armor
expect "passphrase:"
interact
But this script still doesn't work: when used like printf pwd | my-keygen, it exits too early without printing anything out.
I found that it works when I answer the prompts from a tty; it just does not work if STDIN comes from a pipe.
I searched a lot and found something useful:
How to send standard input through a pipe
Can expect scripts read data from stdin?
How can I pipe initial input into process which will then be interactive?
which let me know that there is an EOF in the piped content which causes the interactive program to exit without finishing its job.
A minimal example to reproduce what happened is:
create a file named test with the following content
#!/usr/bin/env expect
spawn bash
interact
run chmod +x ./test to make it executable
run echo 'echo hello' | ./test and observe that hello is not printed
run echo 'echo hello' | bash and confirm that hello is printed this way, as expected
So is there any solution to make my-keygen work as expected? And, more generally, what is the recommended way to do such a thing?
(edited 2022/10/11)
The exit-too-early issue of the expect version is fixed (thanks to many helpful answers):
just read lines in a while loop and send them one by one, and do NOT use expect's interact command at the end of the script (see the sketch below).
But in this way, the output of the spawned process is not printed to stdout; if we need to pipe its STDOUT to another program, like gpg --import, then we need the STDOUT of my-keygen to print only the armored key and no extra stuff. It would be better if your solution shows how to achieve that.
(I already received some suggestions that expect is not a good choice for this purpose - thanks @pynexj - so feel free to post your answer if you have a better solution which is not based on expect.)
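A minimal sketch of that fixed approach, for reference (it assumes the prompt text and flags from the script above; log_user 0 plus the final puts are what keeps only the spawned program's output on stdout):
#!/usr/bin/env expect
log_user 0                  ;# don't echo the dialogue to our stdout
spawn passphrase2pgp --uid alice --repeat 0 --armor
expect "passphrase:"
gets stdin pass             ;# read the passphrase from our own (piped) stdin
send "$pass\r"
expect eof                  ;# wait for the program to finish instead of interact
puts -nonewline $expect_out(buffer)   ;# print only the program's remaining output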
Use a while read construction like this:
#!/bin/bash
while read -r input; do
    opts+=("$input")
done
echo "${opts[@]}"
Testing:
$ echo -e "myinfo1\nmyinfo2\nmysecret3" | ./test
myinfo1 myinfo2 mysecret3
Using it with ssh-keygen:
$ ssh-keygen --help
unknown option -- -
usage: ssh-keygen [-q] [-b bits] [-C comment] [-f output_keyfile] [-m format]
[-t dsa | ecdsa | ecdsa-sk | ed25519 | ed25519-sk | rsa]
[-N new_passphrase] [-O option] [-w provider]
ssh-keygen -p [-f keyfile] [-m format] [-N new_passphrase]
...
ssh-keygen -p -f "${opts[0]}" -m "${opts[1]}" -N "${opts[2]}"
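Putting the two pieces together, my-keygen could look something like this (a sketch; it assumes the three input lines are, in order, the keyfile, the format and the new passphrase):
#!/bin/bash
# Collect one value per input line: keyfile, format, new passphrase.
while read -r input; do
    opts+=("$input")
done
# Change the key's passphrase non-interactively.
ssh-keygen -p -f "${opts[0]}" -m "${opts[1]}" -N "${opts[2]}"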
The way you're echoing your string, the newlines are sent as literal characters rather than being interpreted as newlines, so only one line of stdin is passed to expect, and it gets completely consumed the first time you read stdin. There are a couple of ways to fix this:
Use echo -e
echo -e "myinfo1\nmyinfo2\nmysecret3" | my-keygen
ANSI-C Quoting
echo "myinfo1"$'\n'"myinfo2"$'\n'"mysecret3" | my-keygen
Using actual newlines
echo "myinfo1
myinfo2
mysecret3" | my-keygen
A here-document
my-keygen <<EOF
myinfo1
myinfo2
mysecret3
EOF
When writing a bash script, sometimes you run a command which opens up another program, such as npm, composer, etc. But at the same time you need to use read in order to prompt the user.
Inevitably you hit this kind of error:
read: read error: 0: Resource temporarily unavailable
After doing some research, there seems to be a solution: redirect the STDIN of those programs which manipulate the STDIN of your bash script from /dev/null.
Something like:
npm install </dev/null
Other research suggests it has something to do with STDIN being set to some sort of blocking/non-blocking status which isn't reset after the program finishes.
The question: is there some foolproof, elegant way of reading user-prompted input without being affected by the programs that manipulate STDIN, and without having to hunt down which programs need their STDIN redirected from /dev/null? You may even need to use the STDIN of those programs!
Usually it is important to know what input the invoked program expects and from where, so it is not a problem to redirect stdin from /dev/null for those that shouldn't be getting any.
Still, it is possible to do it for the shell itself and all invoked programs. Simply move stdin to another file descriptor and open /dev/null in its place. Like this:
exec 3<&0 0</dev/null
The above duplicates the stdin file descriptor (0) as file descriptor 3 and then opens /dev/null in its place.
After this, any invoked command attempting to read stdin will be reading from /dev/null. Programs that should read the original stdin should redirect from file descriptor 3, like this:
read -r var 0<&3
The < redirection operator defaults to destination file descriptor 0 when it is omitted, so the above two commands could also be written as:
exec 3<&0 </dev/null
read -r var <&3
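Put together in a script, the pattern might look like this (a sketch; npm install stands in for any command that would otherwise eat or break stdin):
#!/bin/bash
# Keep the real stdin on fd 3 and give everything else /dev/null.
exec 3<&0 </dev/null

npm install                    # reads /dev/null instead of our stdin

# Prompts still read the original stdin via fd 3.
read -r -p "Continue? " answer <&3
echo "answer: $answer"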
When this happens, run bash from within your bash shell, then exit it (thus returning to the original bash shell). I found a mention of this trick in https://github.com/fish-shell/fish-shell/issues/176 and it worked for me; it seems bash restores the STDIN state. Example:
bash> do something that exhibits the STDIN problem
bash> bash
bash> exit
bash> repeat something: STDIN problem fixed
I had a similar issue, but the command I was running did need a real STDIN; /dev/null wasn't good enough. Instead, I was able to do:
TTY=$(/usr/bin/tty)
cmd-using-stdin < $TTY
read -r var
or combined with spbnick's answer:
TTY=$(/usr/bin/tty)
exec 3<&0 < $TTY
cmd-using-stdin
read -r var 0<&3
which leaves a clean STDIN in 3 for you to read and 0 becomes a fresh stream from the terminal for the command.
I had the same problem. I solved it by reading directly from the tty, redirecting stdin:
read -p "Play both [y]? " -n 1 -r </dev/tty
instead of simply:
read -p "Play both [y]? " -n 1 -r
In my case, the use of exec 3<&0 ... didn't work.
Clearly (resource temporarily unavailable is EAGAIN), this is caused by programs that exit but leave STDIN in non-blocking mode.
Here is another solution (easiest to script?):
perl -MFcntl -e 'fcntl STDIN, F_SETFL, fcntl(STDIN, F_GETFL, 0) & ~O_NONBLOCK'
The answers here which suggest using redirection are good. Fortunately, Bash's read should soon no longer need such fixes. The author of Readline, Chet Ramey, has already written a patch: http://gnu-bash.2382.n7.nabble.com/read-may-fail-due-to-nonblocking-stdin-td18519.html
However, this problem is more general than just the read command in Bash. Many programs presume stdin is blocking (e.g., mimeopen) and some programs leave stdin non-blocking after they exit (e.g., cec-client). Bash has no builtin way to turn off non-blocking input, so, in those situations, you can use Python from the command line:
$ python3 -c $'import os\nos.set_blocking(0, True)'
You can also have Python print the previous state so that it may be changed only temporarily:
$ o=$(python3 -c $'import os\nprint(os.get_blocking(0))\nos.set_blocking(0, True)')
$ somecommandthatreadsstdin
$ python3 -c $'import os\nos.set_blocking(0, '$o')'
Okay, so I've recently discovered the magic of here-documents for feeding stdin-style lines into interactive commands. However, I'm trying to use this with SSH to execute a bunch of commands on a remote server, but I also need to pipe in some actual input before executing the extra commands; to confound matters further, I also need to get some results back ;)
Here's what I'm trying to use:
#!/bin/sh
RESULT=$(find "$PATH" -type f | gzip | ssh "$HOST" <<- 'REMOTE_SYNC'
cat > "/tmp/.temp_file"
# Do something with /tmp/.temp_file
REMOTE_SYNC
)
Is this actually correct? Part of the problem I'm having is that I need to pipe the data to that file in /tmp, but I should really be generating a randomly named temp file; I'm not sure how I could do that, assign the name to a variable (so I can get back to it), and still send stdin into it.
I may also extract the find | gzip part into a separate command run locally first, as the gzipped file will likely be small enough that sending it when ready will result in a much shorter SSH connection than sending it as it's generated; but that still doesn't get around the fact that I need to be able to provide both stdin and my extra commands to SSH.
No, you can't do it like this. Both the heredoc and the piped input compete for stdin, and only one wins. Look at this example:
echo test | cat << EOF
TEST
EOF
What will this print? test, TEST or both? It prints TEST, so the heredoc wins (at least in bash).
You don't really need this anyway. Luckily, ssh takes a command argument, which is passed on to the shell on the remote host, so you can just supply your commands as a string. Something like this:
echo TEST | ssh user@host 'cat > tempfile; cat tempfile; rm tempfile'
would work (although it doesn't make much sense): the output of the left-side commands is piped through ssh to the remote host and supplied as stdin there.
If you want the data to be compressed when sending it through ssh, you can just enable compression using the -C option.
edit:
Using line breaks inside a string is perfectly fine, so this works too:
echo TEST | ssh user@host '
cat > tempfile
cat tempfile
rm tempfile
'
The only difference from a heredoc is that you have to escape quotes.
If you use something like echo TEST | ssh user@host "$(<script.sh)", you can keep the remote commands in a file...
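To get the randomly named temp file asked about in the question, the same string-command approach works with mktemp (a sketch; the template name is illustrative):
echo TEST | ssh user@host '
tmp=$(mktemp /tmp/sync.XXXXXX)
cat > "$tmp"
# Do something with "$tmp"
rm -f "$tmp"
'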
I have the following script scanning through a file, where each line is the hostname of a remote node.
echo -e "node1\nnode2\nnode3" > tempfile
while read aline; do
echo "#$aline";
done < tempfile
This produces #node1, #node2 and #node3 correctly on three lines. But when I add ssh inside the loop, as follows
while read aline; do
echo "#$aline";
ssh $aline 'jps';
done < tempfile
The loop breaks after the first invocation of ssh and prints only #node1 (without #node2 and #node3).
I am asking what happens behind the scenes (it looks like undefined behaviour), and how one should achieve the same functionality without breaking the while loop.
SSH is consuming stdin (which is redirected from tempfile), swallowing the remaining lines and messing up the reads. Try redirecting its stdin:
ssh -n $aline 'jps'
From man ssh:
-n Redirects stdin from /dev/null (actually, prevents reading from stdin).
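Applied to the loop above, that gives (the same loop with -n added; -r and the quoting are just good practice):
while read -r aline; do
    echo "#$aline"
    ssh -n "$aline" 'jps'
done < tempfile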
I'm having problems understanding what's going on in the following situation. I'm not familiar with UNIX pipes, or UNIX in general, but I have read the documentation and still can't understand this behaviour.
./shellcode is an executable that successfully opens a shell:
seclab$ ./shellcode
$ exit
seclab$
Now imagine that I need to pass data to ./shellcode via stdin, because it reads some string from the console and then prints "hello " plus that string. I do it in the following way (using a pipe), and the read and write work:
seclab$ printf "world" | ./shellcode
seclab$ hello world
seclab$
However, a new shell is not opened (or at least I can't see it or interact with it), and if I run exit I'm out of the system, so I'm not in a new shell.
Can someone give some advice on how to solve this? I need to use printf because I need to feed binary data to the second process, which I can do like this: printf "\x01\x02..."
When you use a pipe, you are telling Unix that the output of the command before the pipe should be used as the input to the command after the pipe. This replaces the default output (screen) and default input (keyboard). Your shellcode command doesn't really know or care where its input is coming from. It just reads the input until it reaches the EOF (end of file).
Try running shellcode and pressing Control-D. That will also exit the shell, because Control-D sends an EOF (your shell might be configured to say "type exit to quit", but it's still responding to the EOF).
There are two solutions you can use:
Solution 1:
Have shellcode accept command-line arguments:
#!/bin/sh
echo "Arguments: $*"
exec sh
Running:
outer$ ./shellcode foo
Arguments: foo
$ echo "inner shell"
inner shell
$ exit
outer$
To feed the argument in from another program, instead of using a pipe, you could:
$ ./shellcode `echo "something"`
This is probably the best approach, unless you need to pass in multi-line data. In that case, you may want to pass in a filename on the command line and read it that way.
Solution 2:
Have shellcode explicitly redirect its input from the terminal after it's processed your piped input:
#!/bin/sh
while read input; do
echo "Input: $input"
done
exec sh </dev/tty
Running:
outer$ echo "something" | ./shellcode
Input: something
$ echo "inner shell"
inner shell
$ exit
outer$
If you see an error like this after exiting the inner shell:
sh: 1: Cannot set tty process group (No such process)
Then try changing the last line to:
exec bash -i </dev/tty