I isolated a problem in my script down to this small example. This is what I get:
$ cmd="test \"foo bar baz\""
$ for i in $cmd; do echo $i; done
test
"foo
bar
baz"
And that's what I expected:
$ cmd="test \"foo bar baz\""
$ for i in $cmd; do echo $i; done
test
"foo bar baz"
How can I change my code to get the expected result?
UPDATE Maybe my first example was not good enough. I looked at Rob Davis's answer, but I couldn't apply the solution to my script. I tried to simplify my script to describe my problem better. This is the script:
#!/bin/bash
function foo {
echo $1
echo $2
}
bar="b c"
baz="a \"$bar\""
foo $baz
This is the expected output compared to the actual output of the script:
expected    script
a           a
"b c"       "b
First, you're asking the double-quotes around foo bar baz to do two things simultaneously, and they can't. You want them to group the three words together, and you want them to appear as literals. So you'll need to introduce another pair.
Second, parsing happens when you set cmd, and cmd is set to a single string. You want to work with it as individual elements, so one solution is to use an array variable. sh has an array called @, but since you're using bash you can just set your cmd variable to be an array.
Also, to preserve spacing within an element, it's a good idea to put double quotes around $i. You'd see why if you put more than one space between foo and bar.
$ cmd=(test "\"foo bar baz\"")
$ for i in "${cmd[@]}"; do echo "$i"; done
See this question for more details on the special "$@" or "${cmd[@]}" parsing feature of sh and bash, respectively.
Update
Applying this idea to the update in your question, try setting baz and calling foo like this:
$ baz=(a "\"$bar\"")
$ foo "${baz[@]}"
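Putting the pieces together, the updated script from the question would then look like this; a minimal sketch using the same function and variable names:

```shell
#!/bin/bash
# Fixed version of the script from the question: baz is an array,
# so the quoted element "b c" stays a single argument to foo.
foo() {
  echo "$1"
  echo "$2"
}

bar="b c"
baz=(a "\"$bar\"")   # two elements: a  and  "b c" (with literal quotes)
foo "${baz[@]}"
```

Running it prints `a` on the first line and `"b c"` on the second, matching the expected column above.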
Why quote it in the first place?
for i in test "foo bar baz"; do echo $i; done
Related
I have a bash script where I pass multiple arguments in flags (previously positional arguments, still had the same problem). Inside the script I activate and deactivate different conda virtual environments to run different programs.
I would like to add an option for the script to stop if anything goes wrong in the middle (it is a long workflow with many steps, some of which are lengthy and/or computationally costly). For this I thought of adding set -e at the beginning of the script.
However, this makes the script stop at the first activation step, since the conda activate commands try to take all the arguments I pass to the script as theirs too. Example:
user@pc$ bash myscript.sh -a file1 -b file2 -c path1 -d string1
activate does not accept more than one argument:
['-a', 'file1', '-b', 'file2', '-c', 'path1', '-d', 'string1']
user@pc$
Somewhat unrelated, please note how conda parses the flag and the argument content as space-delimited separate arguments.
Inside the script I have:
#!/bin/bash
set -e
... [stuff here, defining all the flags] ...
source /home/user/programs/miniconda3/bin/activate
... [the rest of the script]
I have browsed around here and on GitHub with little success. I've seen many threads of people having trouble with space-containing arguments and conda, but my problem is not exactly that one. I saw a GitHub issue where they suggested removing "$@" from the conda activate script; however, 1) this may break things and 2) I have multiple environments, and keeping track of such a workaround for every environment is not very practical.
My first question is: can it be somehow specified that arguments of the parent script are NOT taken by the conda activate steps?
In the end, what I want to do is to be able to stop the script if something goes wrong in the middle. Therefore, my second question is: Is there another way to stop the script if something goes wrong, e.g. for every major program in the script to contemplate whether to continue or not? What would be the best practice?
Please let me know if anything isn't clear, this is my first time posting here.
Thanks a lot!
You'll need to clear the positional parameters
#!/bin/bash
set -e
... [stuff here, defining all the flags] ...
# if you need, store the current positional params
args=("$@")
# clear the params
set --
source /home/user/programs/miniconda3/bin/activate
... [the rest of the script]
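If the rest of the script still needs those arguments, you can restore them after sourcing. A minimal sketch (the activate path is from the question and is commented out here):

```shell
#!/bin/bash
set -e

args=("$@")          # save the positional parameters
set --               # clear them so the sourced script sees none
# source /home/user/programs/miniconda3/bin/activate
set -- "${args[@]}"  # restore them for the rest of the script

echo "arguments restored: $*"
```

This way each `source .../activate` call runs with an empty argument list, while your own flag parsing still sees the original arguments afterwards.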
Why source activate? In my experience, when sourcing a file you're generally defining common functions, so there should not be any execution of code. If you are expecting to execute code, though, then you need to understand that it will be run as if it were actually typed in at that point in the calling script itself, so if it's going to process the positional parameters, they'd better be in a state to be processed (see glenn jackman's answer).
Can your code work by just calling activate instead of sourcing it, passing in only the parameters that are meant for it?
Added:
I oversimplified a bit when I said "it will be run as if it were actually typed in at that point"; my apologies for the confusion, I should have been more precise. It executes the sourced file (the activate script in your case) in the context of the current shell environment, as opposed to executing it in a sub-shell. The activate script is being executed, though, make no mistake about that.
From the man page:
Read and execute commands from filename in the current shell
environment and return the exit status of the last command executed
from filename. [...] If any arguments are supplied, they become the
positional parameters when filename is executed. Otherwise the
positional parameters are unchanged.
I didn't know that last part, and that may be the key to what you need to do. Be careful if and how you manipulate the positional parameters inside of the sourced script, as it could affect the calling script:
</tmp/so2603> $ cat foo
#!/usr/bin/env bash
echo -n "FOO #1:"; for i; do echo -n " <$i>"; done; echo
. bar "$3"
echo -n "FOO #2:"; for i; do echo -n " <$i>"; done; echo
</tmp/so2603> $ cat bar
#!/usr/bin/env bash
echo -n " BAR #1:"; for i; do echo -n " <$i>"; done; echo
shift
echo -n " BAR #2 (after shift):"; for i; do echo -n " <$i>"; done; echo
if [[ -n "$BAZ" ]]; then
set -- "baz"
echo -n " BAR #3 (after set):"; for i; do echo -n " <$i>"; done; echo
fi
running them first without setting BAZ, and then with:
</tmp/so2603> $ ./foo one two "three four" five
FOO #1: <one> <two> <three four> <five>
BAR #1: <three four>
BAR #2 (after shift):
FOO #2: <one> <two> <three four> <five>
</tmp/so2603> $ BAZ=true ./foo one two "three four" five
FOO #1: <one> <two> <three four> <five>
BAR #1: <three four>
BAR #2 (after shift):
BAR #3 (after set): <baz>
FOO #2: <baz>
The positional parameters inside of a function, though, are in a different/separate scope from the "global" positional parameters, so you can source from a function and avoid them altogether:
</tmp/so2603> $ cat foo
#!/usr/bin/env bash
call_bar()
{
. bar
}
echo -n "FOO #1:"; for i; do echo -n " <$i>"; done; echo
call_bar
echo -n "FOO #2:"; for i; do echo -n " <$i>"; done; echo
The bar script remains unchanged, giving you:
</tmp/so2603> $ ./foo one two "three four" five
FOO #1: <one> <two> <three four> <five>
BAR #1:
BAR #2 (after shift):
FOO #2: <one> <two> <three four> <five>
</tmp/so2603> $ BAZ=true ./foo one two "three four" five
FOO #1: <one> <two> <three four> <five>
BAR #1:
BAR #2 (after shift):
BAR #3 (after set): <baz>
FOO #2: <one> <two> <three four> <five>
FYI, there are lots of pros and cons to using set -e:
BashFaq - Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
David Pashley's Writing Robust Bash Shell Scripts
I generally prefer to use it when I need strict error checking, but each script and programmer is unique and should be evaluated independently.
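If set -e turns out to be too blunt for your workflow, another option is to check each major step explicitly. A sketch of one common pattern (run_step is a hypothetical helper, not part of conda or bash):

```shell
#!/bin/bash
# Explicit per-step error checking as an alternative to set -e:
# each major command is wrapped, and a failure stops the script
# with a message naming the failed step.
run_step() {
  "$@" || { echo "step failed: $*" >&2; exit 1; }
}

run_step mkdir -p /tmp/workflow_demo
run_step true
echo "all steps completed"
```

This gives you per-step control (you can log, clean up, or skip) at the cost of having to wrap each command you care about.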
On many websites, "$" is written at the beginning when introducing the Linux command.
But of course, this will result in a "$: command not found" error.
To avoid this, it is necessary to delete or replace the "$" every time, which is troublesome.
So if the first word of the input command is "$", I think it would be good if the shell could just ignore it. Is that possible?
If you really need this, you can create a file in a directory that is in your $PATH. The file will be named $ and will contain
#!/bin/bash
exec "$@"
Make it executable, then you can do
$ echo foo bar
foo bar
$ $ echo foo bar
foo bar
$ $ $ echo foo bar
foo bar
$ $ $ $ echo foo bar
foo bar
Note that this does not affect variable expansion in any way. It only makes a standalone $ at the start of the command line act like a valid command.
I just noticed a problem with this: It works for calling commands, but not for shell-specific constructs:
$ foo=bar
$ echo $foo
bar
$ $ foo=qux
/home/jackman/bin/$: line 2: exec: foo=qux: not found
and
$ { echo hello; }
hello
$ $ { echo hello; }
bash: syntax error near unexpected token `}'
In summary, everyone else is right: use your mouse better.
Yes, it is possible for you to ignore the command prompt when copying commands from websites: use Shift and the arrow keys to select the text without the prompt. This also lets you skip the # sign, which is used to indicate commands that need administrative privileges.
Sorry if the title isn't very clear. So, I'm trying to echo a variable in bash after using read to get a variable ($A in this example) from a user, like so:
foobar="good"
A=bar
B="\$foo$A"
echo $B
But the output of echoing $B is "$foobar", instead of "good", which is what I want it to be. I've tried using eval $B at the end, but it just told me that "good" isn't a command. eval $(echo $B) didn't work either, it also told me that "good" isn't a command. echo $(eval $B) also told me that "good" isn't a command, and also prints an empty line.
To help clarify why I need to do this: foobar is a variable that contains the string "good", and foobaz contains "bad", and I want to let a user choose which string is echoed by typing either "bar" or "baz" while a program runs read A (in this example I just assigned A as bar to make this question more understandable).
If anyone could please tell me how I can make this work, or offer related tips, I'd appreciate it, thanks.
You're looking for indirect references:
$ foobar="hello"
$ A=bar
$ B="foo$A"
$ echo "$B" "${!B}"
foobar hello
You should take a look at the How can I use variable variables FAQ for more information about those (and associative arrays).
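On bash 4.3 and later there is also declare -n (a nameref), which some find easier to read than ${!B}; a sketch using the names from the question:

```shell
#!/bin/bash
foobar="good"
foobaz="bad"
A=bar                 # imagine this came from: read A
declare -n B="foo$A"  # B is now a name reference to foobar
echo "$B"             # expands to the value of foobar: good
```

Unlike ${!B}, a nameref also lets you assign through the reference (B="better" would change foobar).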
Say I'm running some command foo, which prompts the user for various things. I want to provide values for the first few prompts, but enter the rest manually (i.e. on stdin).
How can I do this? I've tried
echo -e "foo\nbar\nbaz" | foo
This accepts all the inputs, but then gets an EOF from the input stream. I've also tried
foo <(echo -e "foo\nbar\nbaz" & cat /dev/stdin)
which didn't work either.
The main problem here is most likely that foo is not designed to take a filename as an argument. (Keep in mind that <(...) doesn't pass ...'s output on standard-input; rather, it gets expanded to a special filename that can be read from to obtain ...'s output.) To fix this, you can add another <:
foo < <(echo -e "foo\nbar\nbaz" ; cat /dev/stdin)
or use a pipeline:
{ echo -e "foo\nbar\nbaz" ; cat /dev/stdin ; } | foo
(Note that I changed the & to ;, by the way. The former would work, but is a bit strange, given that you intend for echo to handle the first several inputs.)
Ask the user for what you want, then relay that to your command:
echo "Question 1: "; read ans1;
echo "Question 2: "; read ans2;
./foo bar bar "$ans1" baz "$ans2"
maybe like that? it's simple and efficient :)
I am hoping to do something like:
echo 1 2 | read foo bar
To set two new variables, foo and bar, to the values 1 and 2 respectively. (In reality, "echo 1 2" will be an awk / cut invocation for an external data source.)
I am finding that foo and bar do not exist after this line which makes me wonder if it is a scoping issue? I have only used read in while loops and in those cases the loop body was able to access the variables.
Pipes execute in a sub-shell. As such, the variables foo and bar are created, 1 and 2 are stored in them, then the subshell exits and you return to the parent shell in which these variables do not exist.
One way to read into variables as you appear to want is with a "here string"
read foo bar <<<"1 2"
Which will do what you expected the pipe version to do.
This is non-portable, however, and some shells will not support it. You can use the "here document" form instead, which is broadly supported.
$ read foo bar <<EOF
> 1 2
> EOF
Note that EOF here can be any unique string. A here document will store all lines until one that contains EOF, or whatever marker you chose, and nothing else. In this case the behavior is also identical with the previous example (but harder to copy and paste and longer to type).
What's going on here?
Both the "here document" and the "here string" are ways to represent text passed to standard input without having to enter it interactively. It is functionally equivalent to just saying read foo bar, hitting enter, then manually writing 1 2 and hitting enter again.
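Applied to the awk/cut case mentioned in the question, you can capture the command's output with a command substitution and feed it to read via a here string:

```shell
#!/bin/bash
# Feed a command's output to read without a pipe, so the
# variables are set in the current shell and survive the line.
read foo bar <<< "$(echo 1 2)"
echo "foo=$foo bar=$bar"
```

Here `echo 1 2` stands in for your awk or cut invocation; the same pattern works for any command that prints the fields on one line.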
Instead of pipe, you can do something like this -
[jaypal:~/Temp] exec 3< <(echo "Jaypal Singh")
[jaypal:~/Temp] while read word1 word2 ; do echo "$word1 $word2"; done <&3
Jaypal Singh
[jaypal:~/Temp] exec 3< <(echo "Jaypal Singh")
[jaypal:~/Temp] while read word1 word2 ; do echo "$word1"; done <&3
Jaypal
Another easy solution - for some cases it might be useful:
echo 1 2 | { read foo bar; echo $foo $bar; }
Of course, like in original question, instead echo commands there may be more complex processing.
Thank you, bash, for running a subshell when piping; now we cannot read multiple variables at the same time any more!
grep -w regexp file | read var1 var2 var3
There is no direct replacement for this ksh functionality. The workaround
read var1 var2 var3 <<< "$(command)"
is incompatible with the Bourne and Korn shells.
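For completeness, bash 4.2 added the lastpipe shell option, which runs the last element of a pipeline in the current shell (it only takes effect when job control is off, i.e. in non-interactive shells), recovering the ksh behavior; a sketch:

```shell
#!/bin/bash
# Requires bash >= 4.2 and a non-interactive shell (job control off).
shopt -s lastpipe
echo 1 2 3 | read var1 var2 var3
echo "$var1 $var2 $var3"
```

In an interactive shell you would additionally need to disable job control (set +m) for this to work.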