read command doesn't wait for input - bash

I have a problem executing a simple script in bash. The script looks like this:
#! /bin/sh
read -p 'press [ENTER] to continue deleting line'
sudo sed -ie '$d' /home/hpccuser/.profile
and when I execute the script with ./script the output is like this:
press [ENTER] to continue deleting line./script: 3: read: arg count
[sudo] password for user
If I run the read command directly in a terminal (copy and paste from the script), it works fine; it waits for ENTER to be hit (just like a pause).

Because your script starts with #!/bin/sh rather than #!/bin/bash, you aren't guaranteed to have bash extensions (such as read -p) available, and can rely only on standards-compliant functionality.
See the POSIX specification for read for the functionality guaranteed to be present.
In this case, you'd probably want two lines, one doing the print, and the other doing the read:
printf 'press [ENTER] to continue deleting...'
read _

You can do this with the echo command too:
echo "press [ENTER] to continue deleting line"
read continue

If you use a pipe to redirect content to your function/script, the command runs in a subshell with stdin (fd 0) connected to the pipe, which you can check with:
$ ls -l /dev/fd/
lr-x------ 1 root root 64 May 27 14:08 0 -> pipe:[2033522138]
lrwx------ 1 root root 64 May 27 14:08 1 -> /dev/pts/11
lrwx------ 1 root root 64 May 27 14:08 2 -> /dev/pts/11
lr-x------ 1 root root 64 May 27 14:08 3 -> /proc/49806/fd
If you call read/readarray/... in that function/script, read returns immediately with whatever it reads from that pipe, because stdin is the pipe rather than the tty; that is why read doesn't wait for input. To make read wait for input in such a case, restore stdin to the tty with exec 0< /dev/tty before calling read.
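A minimal demonstration of why this happens (the exec 0< /dev/tty line is shown commented out because it needs a real terminal):

```shell
# When a script's stdin is a pipe, read takes its answer from the pipe
# immediately instead of waiting at the terminal:
printf 'piped-answer\n' | sh -c 'read reply; echo "read got: $reply"'
# prints: read got: piped-answer

# To make read wait for the user instead, re-attach stdin first:
#   exec 0< /dev/tty
```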

read -p "My text here" continue
This works on Raspbian.

Seems I'm late to the party, but echo -n "Your prompt" && sed 1q does the trick in a POSIX-compliant shell (though note that echo -n itself is not specified by POSIX; printf 'Your prompt' is more portable).
This prints a prompt and grabs a line from stdin.
Alternatively, you could expand that input into a variable:
echo -n "Your prompt"
YOUR_VAR=$(sed 1q)
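The sed 1q behavior can be checked non-interactively by piping lines in (plain POSIX sh, no bash needed):

```shell
# sed 1q prints the first line of stdin and quits, acting as a
# portable one-line read.
printf 'first\nsecond\n' | sed 1q
# prints: first
```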

What is the meaning of these redirects in Bash: "8>&1" "9>&2" and "1>&9"?

Can someone tell me why this does not work? I'm playing around with file descriptors, but feel a little lost.
#!/bin/bash
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
The first three lines run fine, but the last two error out. Why?
File descriptors 0, 1 and 2 are for stdin, stdout and stderr respectively.
File descriptors 3, 4, .. 9 are for additional files. In order to use them, you need to open them first. For example:
exec 3<> /tmp/foo #open fd 3.
echo "test" >&3
exec 3>&- #close fd 3.
For more information take a look at Advanced Bash-Scripting Guide: Chapter 20. I/O Redirection.
It's an old question, but one thing needs clarification.
While the answers by Carl Norum and dogbane are correct, they assume you have to change your script to make it work.
What I'd like to point out is that you don't need to change the script:
#!/bin/bash
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
It works if you invoke it differently:
./fdtest 3>&1 4>&1
which means to redirect file descriptors 3 and 4 to 1 (which is standard output).
The point is that the script is perfectly fine in wanting to write to descriptors other than just 1 and 2 (stdout and stderr) if those descriptors are provided by the parent process.
Your example is actually quite interesting because this script can write to 4 different files:
./fdtest >file1.txt 2>file2.txt 3>file3.txt 4>file4.txt
Now you have the output in 4 separate files:
$ for f in file*; do echo $f:; cat $f; done
file1.txt:
This
file2.txt:
is
file3.txt:
a
file4.txt:
test.
What is more interesting about it is that your program doesn't have to have write permissions for those files, because it doesn't actually open them.
For example, when I run sudo -s to change user to root, create a directory as root, and try to run the following command as my regular user (rsp in my case) like this:
# su rsp -c '../fdtest >file1.txt 2>file2.txt 3>file3.txt 4>file4.txt'
I get an error:
bash: file1.txt: Permission denied
But if I do the redirection outside of su:
# su rsp -c '../fdtest' >file1.txt 2>file2.txt 3>file3.txt 4>file4.txt
(note the difference in single quotes) it works and I get:
# ls -alp
total 56
drwxr-xr-x 2 root root 4096 Jun 23 15:05 ./
drwxrwxr-x 3 rsp rsp 4096 Jun 23 15:01 ../
-rw-r--r-- 1 root root 5 Jun 23 15:05 file1.txt
-rw-r--r-- 1 root root 39 Jun 23 15:05 file2.txt
-rw-r--r-- 1 root root 2 Jun 23 15:05 file3.txt
-rw-r--r-- 1 root root 6 Jun 23 15:05 file4.txt
which are 4 files owned by root in a directory owned by root - even though the script didn't have permissions to create those files.
Another example would be using chroot jail or a container and run a program inside where it wouldn't have access to those files even if it was run as root and still redirect those descriptors externally where you need, without actually giving access to the entire file system or anything else to this script.
The point is that you have discovered a very interesting and useful mechanism. You don't have to open all the files inside of your script as was suggested in other answers. Sometimes it is useful to redirect them during the script invocation.
To sum it up, this:
echo "This"
is actually equivalent to:
echo "This" >&1
and running the program as:
./program >file.txt
is the same as:
./program 1>file.txt
The number 1 is just a default number and it is stdout.
But even this program:
#!/bin/bash
echo "This"
can produce a "Bad file descriptor" error. How? When run as:
./fdtest2 >&-
The output will be:
./fdtest2: line 2: echo: write error: Bad file descriptor
Adding >&- (which is the same as 1>&-) means closing the standard output. Adding 2>&- would mean closing the stderr.
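The closed-stdout failure is easy to reproduce without a script file (a sketch; err.log is a scratch file name):

```shell
# Run echo with fd 1 closed: the write fails, and bash reports the
# error on stderr, which we capture in a file.
bash -c 'echo hello' >&- 2>err.log || true
cat err.log    # contains: echo: write error: Bad file descriptor
```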
You can even do a more complicated thing. Your original script:
#!/bin/bash
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
when run with just:
./fdtest
prints:
This
is
./fdtest: line 4: 3: Bad file descriptor
./fdtest: line 5: 4: Bad file descriptor
But you can make descriptors 3 and 4 work, but number 1 fail by running:
./fdtest 3>&1 4>&1 1>&-
It outputs:
./fdtest: line 2: echo: write error: Bad file descriptor
is
a
test.
If you want both descriptors 1 and 2 to fail, run it like this:
./fdtest 3>&1 4>&1 1>&- 2>&-
You get:
a
test.
Why? Didn't anything fail? It did, but with stderr (file descriptor 2) closed, you don't see the error messages!
I think it's very useful to experiment this way to get a feeling of how the descriptors and their redirection work.
Your script is a very interesting example indeed - and I argue that it is not broken at all, you were just using it wrong! :)
It's failing because those file descriptors don't point to anything! The normal default file descriptors are the standard input 0, the standard output 1, and the standard error stream 2. Since your script isn't opening any other files, there are no other valid file descriptors. You can open a file in bash using exec. Here's a modification of your example:
#!/bin/bash
exec 3> out1 # open file 'out1' for writing, assign to fd 3
exec 4> out2 # open file 'out2' for writing, assign to fd 4
echo "This" # output to fd 1 (stdout)
echo "is" >&2 # output to fd 2 (stderr)
echo "a" >&3 # output to fd 3
echo "test." >&4 # output to fd 4
And now we'll run it:
$ ls
script
$ ./script
This
is
$ ls
out1 out2 script
$ cat out*
a
test.
$
As you can see, the extra output was sent to the requested files.
To add on to the answer from rsp and respond to the question in the comments of that answer from @MattClimbs:
You can test whether a file descriptor is open by attempting to redirect to it early; if that fails, open the desired numbered file descriptor to something like /dev/null. I do this regularly within scripts and leverage the additional file descriptors to pass back extra details or responses beyond the return code.
script.sh
#!/bin/bash
2>/dev/null >&3 || exec 3>/dev/null
2>/dev/null >&4 || exec 4>/dev/null
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
Stderr is redirected to /dev/null to discard the possible bash: #: Bad file descriptor message, and || runs the following exec #>/dev/null command when the previous one exits with a non-zero status. If the file descriptor is already open, the two tests return a zero status and the exec ... command is not executed.
Calling the script without any redirections yields:
# ./script.sh
This
is
In this case, the output for "a" and "test." is sent to /dev/null.
Calling the script with a redirection defined yields:
# ./script.sh 3>temp.txt 4>>temp.txt
This
is
# cat temp.txt
a
test.
The first redirection 3>temp.txt overwrites the file temp.txt while 4>>temp.txt appends to the file.
In the end, you can define default files to redirect to within the script if you want something other than /dev/null or you can change the execution method of the script and redirect those extra file descriptors anywhere you want.
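The probe can be seen in action from the command line (a sketch; fdprobe.sh and fd3.out are hypothetical names):

```shell
# Write the probing script to a file...
cat > fdprobe.sh <<'EOF'
2>/dev/null >&3 || exec 3>/dev/null
echo "detail" >&3
EOF
# ...run it without fd 3: the detail is silently discarded...
bash fdprobe.sh
# ...and with fd 3 opened by the caller: the detail is captured.
bash fdprobe.sh 3>fd3.out
cat fd3.out    # prints: detail
```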

Why does this command not take input from a file in spite of redirection?

On executing, the cisco anyconnect VPN client takes the VPN IP, password, and some other inputs from the terminal. However, instead of typing it every time, I wrote down the values in a file and tried to redirect the file into the vpn client command.
/opt/cisco/anyconnect/bin/vpn < vpndetails.txt
However, the command seems to ignore the file redirection and still prompts for input. How is that possible? Does the program read from some file descriptor other than 0, i.e. directly from the terminal?
Note: I know it isn't a good practice to store your passwords in a file, but I don't care for now.
The question "Is it possible?" has the answer "yes".
The AnyConnect VPN code probably reads from /dev/tty directly, as explained in the comments by chepner and others. As a fun exercise, try this script:
#! /bin/sh
read -p "STDIN> " a
read -p "TERMINAL> " b < /dev/tty
read -p "STDIN> " c
echo "Read $a and $c from stdin and $b from the terminal"
and, for example, ls / | bash this_script.sh.
However, if you wish to use Cisco AnyConnect without passwords, you should investigate the Always On with Trusted Network Detection feature and user certificates.
Writing to /dev/tty in the hope that it will be picked up by the script does not work:
ljm@verlaine[tmp]$ ls | bash test.sh &
[3] 10558
ljm@verlaine[tmp]$ echo 'plop' > /dev/tty
plop
[3]+ Stopped ls | bash test.sh
ljm@verlaine[tmp]$ fg
ls | bash test.sh
(a stray Enter is pressed)
Read a_file and b_file from stdin and from the terminal

How do I execute a bash script line by line?

If I use the bash -x option, it shows every line, but the script still executes normally.
How can I execute it line by line, so I can see whether each step does the correct thing, or abort and fix the bug? The same effect as putting a read after every line.
You don't need to put a read on every line; just add a trap like the following to your bash script. It has the effect you want, e.g.:
#!/usr/bin/env bash
set -x
trap read debug
< YOUR CODE HERE >
Works, just tested it with bash v4.2.8 and v3.2.25.
IMPROVED VERSION
If your script reads content from files, the above will not work. A workaround could look like the following example.
#!/usr/bin/env bash
echo "Press CTRL+C to proceed."
trap "pkill -f 'sleep 1h'" INT
trap "set +x ; sleep 1h ; set -x" DEBUG
< YOUR CODE HERE >
To stop the script you would have to kill it from another shell in this case.
ALTERNATIVE1
If you simply want to wait a few seconds before proceeding to the next command in your script the following example could work for you.
#!/usr/bin/env bash
trap "set +x; sleep 5; set -x" DEBUG
< YOUR CODE HERE >
I'm adding set +x and set -x within the trap command to make the output more readable.
The BASH Debugger Project is "a source-code debugger for bash that follows the gdb command syntax."
If your bash script is really a bunch of one-off commands that you want to run one by one, you could do something like this, which runs each command when you increment a variable LN corresponding to the line number you want to run. This lets you re-run the last command easily, then increment the variable to move on to the next command.
Assuming your commands are in a file it.sh, run the following, one by one:
$ cat it.sh
echo "hi there"
date
ls -la /etc/passwd
$ $(LN=1 && cat it.sh | head -n$LN | tail -n1)
"hi there"
$ $(LN=2 && cat it.sh | head -n$LN | tail -n1)
Wed Feb 28 10:58:52 AST 2018
$ $(LN=3 && cat it.sh | head -n$LN | tail -n1)
-rw-r--r-- 1 root wheel 6774 Oct 2 21:29 /etc/passwd
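A slightly more robust variant of the same idea uses sed to select line N and a fresh shell to run it, so quoting inside the line is honored (runline and it.sh are hypothetical names):

```shell
# Run line $2 of file $1 in a child bash, preserving quoting.
runline() { sed -n "${2}p" "$1" | bash; }

printf 'echo "hi there"\ndate\n' > it.sh
runline it.sh 1    # prints: hi there
```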
Have a look at bash-stepping-xtrace, which allows stepping through xtrace output.
xargs can also execute a script line by line:
cat .bashrc | xargs -d '\n' -n1 bash -c
-d '\n' split the input on newlines (this also disables xargs quote processing)
-n1 pass one line to each bash invocation
bash -c execute each line as a bash command

Execute a command on remote hosts via ssh from inside a bash script

I wrote a bash script which is supposed to read usernames and IP addresses from a file and execute a command on them via ssh.
This is hosts.txt :
user1 192.168.56.232
user2 192.168.56.233
This is myScript.sh :
cmd="ls -l"
while read line
do
set $line
echo "HOST:" $1@$2
ssh $1@$2 $cmd
exitStatus=$?
echo "Exit Status: " $exitStatus
done < hosts.txt
The problem is that execution seems to stop after the first host is done. This is the output:
$ ./myScript.sh
HOST: user1@192.168.56.232
total 2748
drwxr-xr-x 2 user1 user1 4096 2011-11-15 20:01 Desktop
drwxr-xr-x 2 user1 user1 4096 2011-11-10 20:37 Documents
...
drwxr-xr-x 2 user1 user1 4096 2011-11-10 20:37 Videos
Exit Status: 0
$
Why does it behave like this, and how can I fix it?
In your script, the ssh job gets the same stdin as read line, and in your case it happens to eat up all the remaining lines on the first invocation, so read line only sees the very first line of the input.
Solution: Close stdin for ssh, or better, redirect it from /dev/null. (Some programs don't like having stdin closed.)
while read line
do
ssh server somecommand </dev/null # Redirect stdin from /dev/null
# for ssh command
# (Does not affect the other commands)
printf '%s\n' "$line"
done < hosts.txt
If you don't want to redirect from /dev/null for every single job inside the loop, you can also try one of these:
while read line
do
{
commands...
} </dev/null # Redirect stdin from /dev/null for all
# commands inside the braces
done < hosts.txt
# In the following, let's not override the original stdin. Open hosts.txt on fd3
# instead
while read line <&3 # execute read command with fd0 (stdin) backed up from fd3
do
commands... # inside, you still have the original stdin
# (maybe the terminal) from outside, which can be practical.
done 3< hosts.txt # make hosts.txt available as fd3 for all commands in the
# loop (so fd0 (stdin) will be unaffected)
# totally safe way: close fd3 for all inner commands at once
while read line <&3
do
{
commands...
} 3<&-
done 3< hosts.txt
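The stdin-draining behavior is easy to reproduce without ssh by letting cat stand in for it (the hosts.txt content here is made up):

```shell
# 'cat' plays the role of ssh: it drains the loop's stdin, so the
# body runs only once even though hosts.txt has three lines.
printf 'a\nb\nc\n' > hosts.txt
count=0
while read -r line; do
    cat > /dev/null        # eats the remaining lines, like ssh would
    count=$((count + 1))
done < hosts.txt
echo "$count"              # prints 1; with 'cat </dev/null' it prints 3
```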
The problem you are having is that the ssh process consumes all of the stdin, so read doesn't see any of the input after the first ssh command has run. You can use the -n flag for ssh to prevent this from happening, or you can redirect /dev/null to the stdin of the ssh command.
See the following for more information:
http://mywiki.wooledge.org/BashFAQ/089
Make sure the ssh command does not read from the hosts.txt using ssh -n
I have a feeling your question is unnecessarily verbose.
Essentially you should be able to reproduce the problem with:
while read line
do
echo $line
done < hosts.txt
Which should work just fine. Are you editing the right file? Are there special characters in it? Check it with a proper editor (e.g. vim).
