Content of my remote file:
#!/bin/sh
read foo && echo "$foo"
What I’m doing locally in my terminal:
$ curl -fLSs http://example.com/my-remote-file.sh | sh
What I’m expecting:
The downloaded script should prompt the user to enter something and wait for the user's input.
What actually happens:
The downloaded script skips the read command and continues executing.
Question:
How can I prompt the user for input from a script that is downloaded via curl? I know that sh <(curl -fLSs http://example.com/my-remote-file.sh) works, but process substitution is not POSIX-compliant, and I'd like to stay within POSIX.
The problem is that stdin is connected to curl's output rather than to the keyboard. You need to change your script to read specifically from the terminal, like this:
read input < /dev/tty
echo "$input"
I made a bash script for my personal use which sets up Selenium WebDriver with the appropriate options. Here is its raw link: https://del.dog/raw/edivamubos
If I execute this script using curl after writing it to a file first, like:
curl https://del.dog/raw/edivamubos -o test.sh && \
chmod u+x test.sh && \
bash test.sh
the script works perfectly, as intended.
But I usually like to execute scripts directly using curl, so when I do:
curl https://del.dog/raw/edivamubos | bash
the script behaves very strangely: it keeps repeating lines 22, 23, and 29 in an infinite loop. I couldn't believe it at first, so I tested it three or four times and can confirm it.
Now:
What is the reason for the same script acting differently in the two cases?
How do I fix it (i.e., make it work correctly even when executed directly, without writing it to a file)?
Edit:
If someone wants, they can quickly test this in Google Colab (in case they intend to test but don't want to install any packages locally). I mention this because you won't be able to reproduce it properly in any bash IDE.
When you pipe the script to bash, this command (line 24):
read -p "Enter your input : " input
reads the next line (i.e. line 25, case $input in) because bash's stdin is connected to curl's stdout, and read reads from the same descriptor as bash.
To avoid that, the developer can change the script so that all input is read from /dev/tty (i.e. the controlling terminal). E.g.:
read -p 'prompt' input </dev/tty
Or the user can use one of the commands below, so that the script is not delivered on bash's stdin and read keeps reading from the terminal.
bash -c "$(curl link)"
bash <(curl link)
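Both variants work because the script is delivered as an argument (or as a file name via process substitution) rather than on stdin. A minimal sketch, substituting a hypothetical inline script for curl's output:

```shell
# Simulating `bash -c "$(curl link)"`: the script text is an argument to -c,
# so bash's stdin is left untouched and `read` sees the piped data.
script='read answer
echo "answer=$answer"'
echo yes | bash -c "$script"
# prints: answer=yes
```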
I have a series of bash commands, some with interactive prompts, that I need run on a remote machine. I have to have them called in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way to start an ssh session with a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also keep stdin forwarded through ssh from the local machine so the user can interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script will be a command that prompts for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information—by quitting vim or pressing return on a prompt—the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to also take over stdin completely, rendering it impossible to let the user respond to prompts for user input. I've tried adding the -t and -T options (I'm not sure how they differ), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$@"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters on the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
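One detail worth noting about the cleanup step in both versions of the wrapper: because the RM heredoc delimiter is unquoted, $tempScript expands on the local side before the text is sent to ssh, so the remote shell sees the literal path. A minimal local sketch (no ssh involved):

```shell
# Unquoted heredoc delimiter: variables expand locally before the text is used.
tempScript="/tmp/myScript"
cat <<RM
if [[ -f $tempScript ]]; then rm $tempScript; fi
RM
# prints: if [[ -f /tmp/myScript ]]; then rm /tmp/myScript; fi
```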
I've searched for solutions to this problem several times in the past, never finding a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX reported over 2 MB where I looked). This got me thinking about how I could use that and mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working Bash example script, sshx, that can run arbitrary scripts (not just Bash), where arguments can be local input files too, over ssh. See here.
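The core of the trick can be exercised locally without ssh. The sketch below (assuming GNU coreutils base64 and a hypothetical inline script in place of my_script) encodes the script, carries it through the command line, and decodes it into a process substitution, so no temporary file is needed:

```shell
# Round-trip: encode the script, strip newlines so it survives as one argument,
# then decode it into a /dev/fd path that bash can execute with its own args.
encoded=$(printf 'echo "args: $@"\n' | base64 | tr -d '\n')
bash <(echo "$encoded" | base64 --decode) a b
# prints: args: a b
```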
I'm creating a script that I want people to run with
curl -sSL install.domain.com | bash
as RVM, oh-my-zsh, and many others do.
However, I'm having issues because my script is interactive (it uses read and select, and the user is not being prompted, the script just skip those steps as is being executed with a |.
I've tried adding {} to my whole code.
I was thinking of having the script download itself again into a tmp folder and execute from there, but I'm not sure whether that will work.
You can explicitly tell read to read from the terminal using
read var < /dev/tty
The solution I found is to ask the user to run it like:
bash <( curl -sSL install.domain.com )
This way script is passed as an argument and standard input remains untouched.
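The effect is easy to verify locally with a hypothetical inline script: with process substitution, bash receives the script as a /dev/fd/NN path argument, so stdin stays connected to whatever the user pipes in (or to the terminal).

```shell
# The script arrives as a file argument, not on stdin,
# so `read` inside it still sees the piped data.
echo yes | bash <(printf 'read answer\necho "answer=$answer"\n')
# prints: answer=yes
```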
This is a non-problem, as users should not be executing code directly from the stream. The user should be downloading the code to a file first, then verifying that the script they receive has, for example, an MD5 hash that matches the hash you provide on your website. Only after confirming that the script they receive is the script you sent should they execute it.
$ curl -sSL install.domain.com > installer.sh
$ md5sum installer.sh # Does the output match the hash posted on install.domain.com?
$ bash installer.sh
I know how to run shell scripts pretty easily.
I would have my file say:
#!/bin/zsh
python somefile.py
But the file (somefile.py in this case) requires an input, for example:
What is the password?
Can you write a script that will enter that password, or pause while it waits for input?
My overall goal is to run a tunneling Python script to build a connection and watch a port, pull some data through the tunnel, and then close the Python script.
Ideally, I want this shell script to open somefile.py in an alternate terminal, as I don't know if I can just nohup it until it is no longer needed and then kill the process.
First things first. Can you have a script which will do something like:
#!/bin/zsh
python somefile.py
echo admin12345
or something similar to auto-enter the info?
Assuming the Python script reads from stdin, just do echo admin12345 | python somefile.py.
Usually, however, that's not the case, and scripts that read passwords will want to read from a terminal, not just any stdin.
In that case, look into "expect".
It worked for me with java and python examples:
#!/bin/bash
echo "1234" | python somefile.py
Just give your script execute permission with chmod +x yourscript.sh, and run it with ./yourscript.sh.
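A local sketch of the pipe approach, with a tiny sh one-liner standing in for the hypothetical somefile.py; this only works when the program reads the password from stdin rather than opening /dev/tty:

```shell
# Stand-in for a program that reads a password from stdin:
echo "admin12345" | sh -c 'read pw; echo "password received: $pw"'
# prints: password received: admin12345
```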
I have a Bash script using some Heroku client commands, e.g., heroku create. Those commands are written in Ruby and include calls to IO::gets to gather the user's name and password. But if they are called from within a Bash script, it seems to skip ahead to the last gets. For example, when the Heroku client is going to ask for email and password, the first prompt is skipped and the user can only enter their password:
Enter your Heroku credentials.
Email: Password: _
Is there a way to fix or jury-rig the Bash script to prevent this from happening? (It does not happen when the exact same command is run by hand from the command line.) Or is there something that must be changed in the Ruby?
[UPDATE - Solution]
I wasn't able to solve the problem above, but @geekosaur's answer to another question put me on the right track to making the terminal "available" to the Ruby script running inside the Bash script. I changed all:
curl -s http://example.com/bash.sh | bash
and
bash < <(curl -s http://example.com/bash.sh)
to be
bash <(curl -s http://example.com/bash.sh)
Try to reopen stdin on the controlling terminal device /dev/tty.
exec 0</dev/tty # Bash
$stdin.reopen(File.open("/dev/tty", "r")) # Ruby
# Bash examples
: | ( read var; echo $var; )                   # stdin is the pipe: read hits EOF, echo prints an empty line
: | ( read var </dev/tty; echo $var; )         # read waits for input from the terminal despite the pipe
: | ( exec 0</dev/tty; read var; echo $var; )  # reopen stdin on the tty, then read as usual