I'm trying to read commands from a text file and execute each line from a bash script.
#!/bin/bash
while read line; do
    $line
done < "commands.txt"
In some cases, if $line contains commands that are meant to run in the background, e.g. command 2>&1 &, they will not start in the background; they run in the current script's context instead.
Any idea why?
If all your commands are inside "commands.txt", then essentially you already have a shell script. That's why you can either source it, or run it like a normal script: chmod u+x it and execute it directly, or run it with sh commands.txt.
I don't have anything to add to ghostdog74's answer about the right way to do this, but I can cover why it's failing: The shell parses I/O redirections, backgrounding, and a bunch of other things before it does variable expansion, so by the time $line is replaced by command 2>&1 & it's too late to recognize 2>&1 and & as anything other than parameters to command.
You could improve this by using eval "$line", but even then you'll run into problems with multiline commands (e.g. while loops, if blocks, etc.). The source and sh approaches don't have this problem.
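For illustration, here is a minimal sketch of the eval variant (assuming commands.txt holds one complete command per line):
#!/bin/bash
# eval re-parses each line, so 2>&1 and & are recognized as shell syntax again
while IFS= read -r line; do
    eval "$line"
done < commands.txt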
I have a simple Bash script:
#!/usr/bin/env bash
read X
echo "X=$X"
When I execute it with ./myscript.sh it works. But when I execute it with cat myscript.sh | bash it actually puts echo "X=$X" into $X.
So this script prints hello world when executed with cat myscript.sh | bash:
#!/usr/bin/env bash
read X
hello world
echo "$X"
What's the benefit of executing a script with cat myscript.sh | bash? Why doesn't it do the same thing as when I execute it with ./myscript.sh?
How can I stop Bash from executing line by line, and instead have it execute all the lines only once stdin has reached its end?
Instead of just running
read X
...replace it with...
read X </dev/tty || {
X="some default because we can't read from the TTY here"
}
...if you want to read from the console. Of course, this only works if you have a /dev/tty, but if you wanted to do something robust, you wouldn't be piping from curl into a shell. :)
Another alternative, of course, is to pass in your value of X on the command line.
curl https://some.place/with-untrusted-code-only-idiots-will-run-without-reading \
| bash -s "value of X here"
...and refer to "$1" in your script when you want X.
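A minimal sketch of a script written that way (the default value is hypothetical):
#!/usr/bin/env bash
X="${1:-some default}"   # take X from the first argument instead of reading stdin
echo "X=$X"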
(By the way, I sure hope you're at least using SSL for this, rather than advising people to run code they download over plain HTTP with no out-of-band validation step. Lots of people do it, sure, but that's making sites they download from -- like rvm.io -- big targets. Big, easy-to-man-in-the-middle-or-DNS-hijack targets).
When you cat a script to bash, the code to execute comes from standard input.
Where does read read from? That's right: also standard input. This is why you can cat input to programs that take standard input (like sed, awk, etc.).
So you are not running "a script" per se when you do this. You are running a series of input lines.
Where would you like read to read data from in this setup?
You can manually do that (if you can define such a place). Alternatively you can stop running your script like this.
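You can reproduce the effect without a file at all; in this sketch the second line is consumed by read instead of being executed by bash:
printf 'read X\nhello world\necho "$X"\n' | bash   # prints: hello world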
I want to execute bash scripts that happen to use Windows/CRLF line endings.
I know of the tofrodos package, and how to fromdos files, but if possible, I'd like to run them without any modification.
Is there an environment variable that will force bash to handle CRLF?
Perhaps like this?
dos2unix < script.sh | bash -s
EDIT: As pointed out in the comments, this is the better option, since it runs dos2unix rather than bash in the subshell, which lets the script itself still read from stdin:
bash <(dos2unix < script.sh)
Here's a transparent workaround for you:
cat > $'/bin/bash\r' << "EOF"
#!/bin/bash
script=$1
shift
exec bash <(tr -d '\r' < "$script") "$@"
EOF
This gets rid of the problem once and for all by allowing you to execute all your system's Windows CRLF scripts as if they used UNIX line endings (with ./yourscript), rather than having to specify it for each particular invocation. (Beware though: bash yourscript or source yourscript will still fail.)
It works because DOS-style files, from a UNIX point of view, specify the interpreter as "/bin/bash^M". We provide a program at that path which strips the carriage returns from the script and runs the real bash on the result.
You can do the same for different interpreters like /bin/sh if you want.
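For example, the /bin/sh variant would look like this (a sketch, with the same caveats as above):
cat > $'/bin/sh\r' << "EOF"
#!/bin/bash
script=$1
shift
exec sh <(tr -d '\r' < "$script") "$@"
EOF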
In general, I don't understand how to make commands inside a UNIX shell script's do loop work the same way they work directly from the command line (using bash).
As a simple test, a script called looping.sh to execute an SQL script (what's in filelist.txt doesn't matter in this case):
for i in $(cat filelist.txt)
do $(sqlplus DB_USER/password@abc @test.sql)
done
results in
looping.sh: line 2: SQL*Plus:: command not found
for each line in filelist.txt. Other variations on the 2nd line don't work, like putting it in quotes etc.
Or, if filelist.txt has names of other sh scripts (let's say a single line in this case, called_file1.sh) and I want to execute them:
for i in $(cat filelist.txt)
do exec $i
done
results in
: not found line 2: exec: called_file1.sh
The files are all in the same folder. I tried variations for the second line like /bin/sh $i, putting it in quotes and so on. What's the magic way to execute a command in the do loop?
$(...) takes its contents, runs them as a command, and then substitutes the output of that command.
So when you write:
for i in $(cat filelist.txt)
do $(sqlplus DB_USER/password@abc @test.sql)
done
what the shell does when it hits the body of the loop is run sqlplus DB_USER/password@abc @test.sql, take the output from that command (whatever it may be), and replace the $(...) bit with it. So you end up with (not exactly, since this happens again on every iteration, but for the sake of illustration) a loop that looks like this:
for i in $(cat filelist.txt)
do <output of 'sqlplus DB_USER/password@abc @test.sql' command>
done
and if that output isn't a valid shell command you are going to get an error.
The solution there is to not do that. You don't want the wrapping $() there at all.
for i in $(cat filelist.txt)
do sqlplus DB_USER/password@abc @test.sql
done
In your second example:
for i in $(cat filelist.txt)
do exec $i
done
you are telling the shell that the filename in $i is something that it should try to execute like a binary or executable shell script.
In your case, two things are happening. The filename in $i can't be found, and (this is harder to notice) the filename in $i has a carriage return at the end (the file probably has DOS line endings). That's why the error message looks more garbled than normal. (I actually wonder about that, since I would have expected a carriage return to survive in a quoted "$i" rather than an unquoted $i, but I might be wrong about that.)
So, for this case, you need to both strip the carriage-returns from the file (see point 1 of the "Before asking about problematic code" section of the bash tag info wiki for more about this) and then you need to make sure that filename is an executable script and you have the correct path to it.
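For example, one way to strip them (a sketch; dos2unix would work just as well):
tr -d '\r' < filelist.txt > filelist.tmp && mv filelist.tmp filelist.txt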
Oh, also: exec never returns, so that loop will only ever execute one file.
If you want multiple executions then drop exec.
That all being said: Don't Read Lines With For. See Bash FAQ 001 for how to correctly (and safely) read lines from a file.
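For reference, the pattern from that FAQ applied here looks roughly like this (a sketch, assuming one script name per line):
while IFS= read -r script; do
    sh "$script"
done < filelist.txt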
Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not in the second, so I cannot do the update by simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$#" will expand to whatever command you pass in.
log() {
    echo "$@" >> /tmp/cmd.log
    "$@"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
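If you really want && inside a single logged entry, one option is a hypothetical log_line variant that logs a whole string and evals it (a sketch; eval comes with the usual quoting caveats):
log_line() {
    printf '%s\n' "$1" >> /tmp/cmd.log   # log the raw command string
    eval "$1"                            # then execute it, && and all
}
log_line 'cd "$NEW_PLACE" && command.py --flag "$FOO"'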
Are you looking for set -x (or bash -x)? This writes every command to standard error, after expansions are applied but just before it is executed.
1. Use script and you will get everything archived.
2. Use -x for tracing your script, e.g. run it as bash -x script_name args...
3. Use set -x in your current bash (you will get your commands echoed with globs and variables substituted); see the short example after this list.
4. Combine 2 and 3 with 1.
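For example, item 3 in action (a sketch; SRC and DST are hypothetical variables):
set -x              # start tracing; bash echoes each command after expansion
cp "$SRC" "$DST"    # traced to stderr as: + cp /expanded/src /expanded/dst
set +x              # stop tracing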
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
( echo 'fun123() {'
  echo '  echo something important'
  echo '}'
) > saved.txt    # saved.txt now contains the definition of fun123
. saved.txt      # source the file to define the function
fun123           # and run it
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes bash print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
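A minimal sketch of that approach (args.txt is a hypothetical file of arguments, one per line):
# run command.py once per argument read from args.txt
xargs -n 1 command.py --flag < args.txt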