Bash script "read" not pausing for user input when executed from SSH shell - bash

I'm new to Bash scripting, so please be gentle.
I'm connected to an Ubuntu server via SSH (PuTTY), and when I run the command below I expect the bash script it downloads and executes to pause for user input and then echo that input back. Instead, it just prints the prompt label for the input request and terminates.
wget -O - https://raw.github.com/aaronhancock/pub/master/bash/readtest.sh | bash
Any clue what I might be doing wrong?
UPDATE: This bash command does exactly what I wanted
bash <(wget -q -O - https://raw.github.com/aaronhancock/pub/master/bash/readtest.sh)

Jonathan already mentioned it: bash takes its stdin from the pipe.
Therefore you cannot pipe the script into bash when you want to read interactive input. But you can use bash's process substitution feature (assuming your login shell is bash):
bash <(wget -O - https://raw.github.com/aaronhancock/pub/master/bash/readtest.sh)
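A minimal sketch of why this works: process substitution hands the script to bash as a /dev/fd/N path rather than on stdin, so stdin stays connected to the terminal and read can prompt you. The inline script below is only an illustration, not the real readtest.sh:
bash <(printf 'printf "Your name? "; read name; echo "Hello, $name"\n')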

Bash is taking stdin from the pipe, not from the terminal. So you can't pipe a script to bash and still use the "read" command for user input.
Notice that you have the same problem if you save the script to a local file and pipe it to bash:
less readtest.sh | bash
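Two ways around that when the script is local, sketched here: run the file as an argument so stdin stays on the terminal, or have the script read from the terminal explicitly so piping no longer matters.
bash readtest.sh                       # the script arrives as an argument, stdin stays on the terminal
read -p "Your name? " name </dev/tty   # inside the script: read from the terminal even when piped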

I found this also works and helps keep the data in the current scope.
eval "wget -q -O - https://raw.github.com/aaronhancock/pub/master/bash/readtest.sh"

Related

What is the difference between "bash -i myscript.sh" vs "bash myscript.sh"?

According to the bash man page, the -i option puts the shell in interactive mode.
I tried some example code to find out what the -i option does.
interactive.sh is a script that needs user input, i.e. an interactive script.
By default bash runs in non-interactive mode.
But interactive.sh runs without any problem in non-interactive mode.
It also runs fine in interactive mode, which confuses me.
What is the exact usage of the -i option in bash?
What is the difference between interactive and non-interactive mode in a shell?
$ cat interactive.sh
#!/bin/bash
echo 'Your name ?'
read name
echo "Your name is $name"
$ bash interactive.sh
Your name ?
ABC
Your name is ABC
$ bash -i interactive.sh
Your name ?
DEF
Your name is DEF
With bash -i script you are running the script in an interactive non-login shell. Note that read does not require an interactive shell; it only needs stdin to be a terminal, which is why your script behaves the same in both modes.
What is the exact usage of the -i option in bash?
From man bash:
-i If the -i option is present, the shell is interactive.
What is the difference between interactive and non-interactive mode in a shell?
There are some differences. Look at man bash | grep -i -C5 interactive | less:
An interactive shell is one started without non-option arguments (unless -s is specified) and without the -c option, whose standard input and error are both connected to terminals (as determined by isatty(3)), or one started with the -i option. PS1 is set and $- includes i if bash is interactive, allowing a shell script or a startup file to test this state.
When an interactive login shell exits, or a non-interactive login shell executes the exit builtin command, bash reads and executes commands from
the file ~/.bash_logout, if it exists.
When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may
be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc.
[...]
When bash is interactive, in the absence of any traps, it ignores SIGTERM (so that kill 0 does not kill an interactive shell), and SIGINT is
caught and handled (so that the wait builtin is interruptible). In all cases, bash ignores SIGQUIT. If job control is in effect, bash ignores
SIGTTIN, SIGTTOU, and SIGTSTP.
And so on. For example, compare the output of bash -c 'echo $PS1' and bash -i -c 'echo $PS1'.
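A quick way to see the difference for yourself (a sketch; the exact output depends on your setup, and the -i invocation may also run your ~/.bashrc):
bash -c 'echo "flags=$- PS1=${PS1-unset}"'     # non-interactive: no i in $-, PS1 normally unset
bash -i -c 'echo "flags=$- PS1=${PS1-unset}"'  # interactive: $- contains i and PS1 is set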

Can't run "compgen -c" from perl script

I want to check whether a command exists on my machine (RedHat) from inside a Perl script.
I'm trying to check whether the output of compgen -c contains the desired command, but running it from inside a script just gives me empty output. Other commands work fine.
example.pl:
my $x = `compgen -c`;
print $x;
# empty output
my $y = `ls -a`;
print $y;
# .
# ..
# example.pl
Are there possible solutions for this? Or is there a better way to check for commands on my machine?
First, Perl runs external commands using /bin/sh, which nowadays is a link to whatever shell serves as a default-of-sorts on your system. Much of the time that is bash, but not always; on RedHat it is.
This compgen is a bash builtin. One way to discover that is to run man compgen (in bash) -- and the bash manual pops up. Another way is type as Dave shows.
To use builtins we generally need to run an explicit shell for them, and their behavior can vary depending on whether the shell is "interactive" or not.† I can't find a discussion of that in the bash documentation for this builtin, but experimentation shows that you need
my @completions = qx(bash -c "compgen -c");
The quotes are needed so that the complete command is passed to the shell that will be started.
Note that this way you don't capture any STDERR from those commands; it comes out on the terminal and can easily be missed. Alternatively, you can redirect that stream in the command by adding 2>&1 (redirect to STDOUT) at the end of it.
This is one of the reasons to use one of a number of good libraries for running and managing external commands instead of the builtin "backticks" (the qx I use above is an operator form of it.)
† This can be facilitated with -i
my @output_lines = qx(bash -i -c "command with arguments");
It's because compgen is a bash built-in command, not an external command. And when you run a command using backticks, you get your system's default shell - which is probably going to be /bin/sh, not bash.
The solution is to explicitly run bash, using the -c command-line option to give it a command to run.
my $x = `bash -c "compgen -c"`;
From a bash prompt, you can use type to see how a command is implemented.
$ type ssh
ssh is /usr/bin/ssh
$ type compgen
compgen is a shell builtin
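If all you need is to know whether a single command exists, you don't need the whole completion list. A sketch in bash (rsync here is just a stand-in for whichever command you care about), which you could run from Perl through backticks or qx() the same way:
if command -v rsync >/dev/null 2>&1; then
    echo "rsync is available"
else
    echo "rsync not found"
fi
Because command -v is specified by POSIX, this also works in the /bin/sh that backticks use, with no explicit bash needed.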

Calling rsync in bash from Windows cmd

I am trying to run rsync from a batch file. The command is
SET CMD="rsync -P -rptz --delete -e 'ssh -i /root/.ssh/CERTIFICATE.pem' SOURCE_ADDRESS /mnt/c/Users/MYNAME/IdeaProjects/PROJECT/SUBFOLDER/SUBFOLDER/SUBFOLDER/SUBFOLDER/LASTFOLDER"
bash %CMD%
This works fine if I run the command after typing bash, but when I run the command from cmd with the bash prefix it says No such file or directory.
Additionally, when playing around and trying to debug, bash ends up hanging... i.e. if I open bash I get no prompt, just a blinking cursor.
Any help is appreciated.
To run a command string with bash, you need to use the -c option:
bash -c "%CMD%"
Without it, the first non-option parameter will be treated as the name of a shell script to run; your whole rsync command line is not the name of an existing file, hence the No such file or directory error.
If arguments remain after option processing, and neither the -c nor the -s option has been supplied, the first argument is assumed to be the name of a file containing shell commands.
Note that cmd on Windows is not DOS, even though they share a few similar commands; the rest are vastly different.
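A small sketch of the difference (the rsync arguments here are placeholders, not your real ones):
bash "rsync -a src dst"      # bash looks for a file literally named "rsync -a src dst" -> No such file or directory
bash -c "rsync -a src dst"   # bash parses the string as a command line and runs rsync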

How to exit multiple nested shells at once?

I have a host on which I don't have sudo. It has been set up with ksh; I'm too used to bash, and chsh doesn't work. So I put /bin/bash as the first line of the .profile on that system.
The result is that when I log in to this system, it automatically drops me into bash. However, when I exit that shell, not surprisingly, I end up back in ksh.
Any tricks to avoid this?
Use exec to replace the current process (shell) with the new process (shell).
I recommend two steps:
if [ $SHELL != /bin/bash ]
then SHELL=/bin/bash exec /bin/bash --login
fi
Or, you can compress that to:
[ $SHELL != /bin/bash ] && SHELL=/bin/bash exec /bin/bash --login
You can then put the rest of your Bash profile after this. Note that you probably should not put a shebang on the first line; that will confuse things. Also, while testing, make sure you have a second connection (window) open so that you can fix any problems. It is annoying to get locked out by an erroneous profile.
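A quick way to convince yourself that exec replaces the shell instead of nesting one (a sketch; try it in a throwaway session):
echo $$                  # note the PID of the current ksh
exec /bin/bash --login
echo $$                  # same PID: bash took over the process, so a single exit ends the session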
You may write a script named myexit like this:
kill -1 $(ps | sed 1d | awk '{print $1}')
It sends the hang-up signal (SIGHUP) to the processes attached to this terminal, which closes all the nested shells at once.
It would not affect any process started with nohup.
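To use it, save it somewhere on your PATH and make it executable (a sketch; ~/bin is just an assumed location):
chmod +x ~/bin/myexit
myexit    # every shell on this terminal receives SIGHUP and the session closes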

Difference between piping a file to sh and calling a shell file

This is what I was trying to do:
$ wget -qO- www.example.com/script.sh | sh
which quietly downloads the script and prints it to stdout, which is then piped to sh. Unfortunately this doesn't quite work: it fails to wait for user input at various points, and there are also a few syntax errors.
This is what actually works:
$ wget -qOscript www.example.com/script.sh && chmod +x ./script && ./script
But what's the difference?
I'm thinking maybe piping the file doesn't execute the file, but rather executes each line individually, but I'm new to this kind of thing so I don't know.
When you pipe to sh, stdin of that shell/script will be the pipe. Thus the script cannot take e.g. user input from the console. When you run the script normally, stdin is the console, where you can enter input.
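A small local experiment makes this visible (a sketch; the script content is made up):
cat > script <<'EOF'
printf 'continue? '
read ans
echo "you said: $ans"
EOF
sh script        # stdin is your terminal, so read waits for you
cat script | sh  # stdin is the pipe, so read consumes script text or hits end-of-file instead of waiting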
You might try telling the shell to be interactive:
$ wget -qO- www.example.com/script.sh | sh -i
I had the same issue, and after tinkering and googling this is what worked for me.
wget -O - www.example.com/script.sh | sh
