assign and expand shell variable in same command line [duplicate] - bash

I want to assign one or more variables at the beginning of a command line in my shell so I can reuse them in the command invocation. I'm confused about how my shell behaves and want to understand what is happening.
I'm using ZSH but am also interested in what the standard POSIX behavior is.
1: % V=/ echo $V # echo is a shell built-in?!?
expected: /. actual: ""
2: % V=/ ls $V # ls is a command
expected: ls /. actual: ls
3: % V=/ ; echo $V
expected: "". actual: /
Here I thought that the semicolon would be equivalent to a new shell line and that I'd need export.
4: % V=/ ; ls $V
expected: ls. actual: ls /
I'm mostly surprised by lines 1 and 2. Is there any ZSH settings that could cause this or do I just start to use a semicolon to use variables in this way?

Variable expansion happens before the command is run, i.e. before the value is assigned to the variable in lines 1 and 2.
export is needed when you want the variable to be visible in the environment of child processes. A semicolon doesn't introduce a subshell; it causes the assignment to run before the next command, so the shell then expands the variable to its new value.

Your line 1 would work if you allowed the variable expansion to happen after the assignment instead of forcing it in the current shell before echo gets a chance to run, for instance by
V=/ zsh -c 'echo $V'
or by
V=/ eval 'echo $V'
It doesn't matter that echo is a builtin command. The same idea applies to every command.
Since commands can be separated either by semicolons or by linefeeds, your line 3 is equivalent to
V=/
echo $V
which makes it obvious why the expansion works in this case.
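As a sketch of the fix, here are both working variants next to the semicolon form (using sh -c as a stand-in for any external command):

```shell
V=/ sh -c 'echo "$V"'   # prints /: the child shell expands $V after receiving it in its environment
V=/ eval 'echo "$V"'    # prints /: eval defers the expansion until after the assignment
V=/ ; echo "$V"         # prints /: the assignment completes before the expansion
```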

How to get value from file in sh? [duplicate]

Setup:
File a contains:
22
File b contains:
12
I have shell script 1.sh:
#!/bin/sh
a=$(< a)
b=$(< b)
echo $(($a*$b)) > c
The script should read the values from files a and b, multiply them, and save the result to file c.
However after setting permission $ chmod a+rx 1.sh and running it $ ./1.sh it returns an error:
./1.sh: 5: ./1.sh: arithmetic expression: expecting primary: "*"
This error occurs because the variables $a and $b don't get their values from files a and b.
If I echo $a and echo $b it returns nothing;
If I define a=22 and b=12 values in the script it works;
I also tried other ways of getting contents of files like a=$(< 'a'), a=$(< "a"), a=$(< "~/a"), and even a=$(< cat a). None of those worked.
Plot Twist:
However, if I change shebang line to #!/bin/bash so that Bash shell is used - it works.
Question:
How to properly get data from file in sh?
Ignore everything in files a and b except the digits:
#!/bin/sh
a=$(tr -cd 0-9 < a)
b=$(tr -cd 0-9 < b)
echo $(($a*$b))
See: man tr
If you're looking for "true" Bourne-Shell compatibility, as opposed to Bash's emulation, then you have to go old school:
#!/bin/sh
a=`cat a`
b=`cat b`
expr $a \* $b > c
I tried your original example under #!/bin/sh on both macOS and Linux (FC26), and it behaved properly, assuming a and b had UNIX line-endings. If that can't be guaranteed, and you need to run under #!/bin/sh (as emulated by bash), then something like this will work:
#!/bin/sh
a=$(<a)
b=$(<b)
echo $(( ${a%%[^0-9]*} * ${b%%[^0-9]*} )) > c
There are many ways. One obvious way is to pipe in a sub-process by Command Substitution:
A=$(cat fileA.txt) # 22
B=$(cat fileB.txt) # 12
echo $((A*B))
# <do it in your head!>
If there are any other problems with multiple lines, you need to look into how to use the shell variable $IFS (Internal Field Separator). Usually IFS is defined by IFS=$' \t\n', so if you need to reliably read lines with both Windows and Linux line endings you may need to modify it.
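For instance, a file saved with Windows (CRLF) line endings leaves a stray carriage return in the value; adding CR to IFS while reading strips it (the temp-file path here is just for illustration):

```shell
printf '22\r\n' > /tmp/crlf_sample            # sample file with a Windows line ending
cr=$(printf '\r')                             # a literal carriage-return character
IFS=" $cr" read -r value < /tmp/crlf_sample   # CR now acts as a field delimiter and is stripped
echo "value=[$value]"
rm -f /tmp/crlf_sample
```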
ADDENDUM:
Process Substitution
Bash, Zsh, and AT&T ksh{88,93} (but not pdksh/mksh) support process
substitution. Process substitution isn't specified by POSIX. You may
use NamedPipes to accomplish the same things. Coprocesses can also do
everything process substitutions can, and are slightly more portable
(though the syntax for using them is not).
This also means that most Android systems do not support process substitution, since their shells are most often based on mksh.
From man bash:
Process Substitution
Process substitution allows a process's input or output to be referred to using a filename. It takes the form of <(list) or >(list). The process list is run asynchronously, and its input or output appears as a filename. This filename is passed as an argument to the current command as the result of the expansion. If the >(list) form is used, writing to the file will provide input for list. If the <(list) form is used, the file passed as an argument should be read to obtain the output of list. Process substitution is supported on systems that support named pipes (FIFOs) or the /dev/fd method of naming open files.
When available, process substitution is performed simultaneously with parameter and variable expansion, command substitution, and arithmetic expansion.
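As a minimal illustration of the classic use case (wrapped in bash -c so it runs even if your login shell isn't bash, since <(...) is not POSIX):

```shell
# Read a command's output line by line in the current shell, without a pipeline subshell.
bash -c 'while read -r line; do echo "got: $line"; done < <(printf "x\ny\n")'
```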

When is a variable taken into account if the assignment is not followed by a semicolon?

When I execute var=blah echo -n $var, nothing is printed, which is the expected behavior: bash first expands $var to an empty string, then sets up a temporary environment with var=blah in it, and finally echo runs with an empty string as its argument. On the other hand, when I execute IFS=. read a b <<< "k.l", the new value for IFS is taken into account. When is a variable taken into account if the assignment is not followed by a semicolon?
An assignment, or several, in a simple command by itself causes the variables to be set as variables in the shell. If they are exported, they'll appear in the environment of any commands the shell executes.
An assignment in a simple command with an actual command to run does not change the variable in the shell, but only sets it in the environment of the command being executed. (A "simple command" is the usual kind of command, as opposed to a pipeline or a compound command. See the standard for the definition.)
Let's compare a couple of situations with a test script:
$ cat test.sh
echo "var: $var" # print 'var' from the environment
echo "arg: $1" # print the first command line arg
$ unset var
Here, var is set in the environment of test.sh, but the shell doesn't have that variable, so the one on the command line expands to nothing:
$ var=foo sh test.sh "$var"
var: foo
arg:
Here, var is set in the shell, so the one on the command line is expanded, but it's not set in the test.sh's environment:
$ var=foo; sh test.sh "$var"
var:
arg: foo
If we export it, it goes to the environment, too:
$ export var; sh test.sh "$var"
var: foo
arg: foo
Conceptually, you can think of read as a program like any other, so IFS set on the same command line is inherited by it and affects how read works, similarly to var above.
Though IFS and read are slightly exceptional: Bash doesn't inherit IFS from the environment (dash does) but resets it, and IFS isn't exported by default, yet IFS=.; read a still causes read to use the changed IFS. Of course read is a builtin, so it sees the shell's variables, not just the exported ones. I can't think of any other shell builtin that uses IFS similarly, so I can't compare.
read always reads a single line of input, regardless of how many arguments it is given; once the line is read, read itself uses the value of IFS in its environment to split the string into enough words to populate the variables named by its positional arguments.
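A quick way to see this in action (POSIX-safe; the braces keep read and echo in the same subshell of the pipeline, so the variables are still visible to echo):

```shell
echo "k.l" | { IFS=. read a b; echo "a=$a b=$b"; }   # prints: a=k b=l
```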

Error "command not found" when setting value to variable [duplicate]

I have the following test.sh script:
#!/bin/bash
foo=0
bar=foo;
${bar}=1
echo $foo;
Output:
./test.sh: line 4: foo=1: command not found
0
Why the "command not found" error? How to change script to "echo $foo" outputs 1?
That's not the way to do indirection, unfortunately. To do what you want, you could use printf like so:
printf -v "$bar" "1"
which stores the printed value (here 1) in the variable whose name is given as the argument to -v; since $bar expands to foo, the value ends up in foo.
Also, you could use declare like
declare "$bar"=1
which will do variable substitution before executing the declare command.
In your attempt the order of bash processing is biting you. Before variable expansion is done the line is split into commands. A command can include variable assignments, however, at that point you do not have a variable assignment of the form name=value so that part of the command is not treated as an assignment. After that, variable expansion is done and it becomes foo=1 but by then we're done deciding if it's an assignment or not, so just because it now looks like one doesn't mean it gets treated as such.
Since it was not recognized as a variable assignment, it is treated as a command name instead. You don't have a command named foo=1 in your PATH, so you get the "command not found" error.
You can use the eval builtin, like
#!/bin/bash
foo=0
bar=foo;
eval "${bar}=1"
echo $foo;
The ${bar}=1 first goes through parameter expansion, becoming foo=1, and then eval evaluates that in the context of your shell.
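The three approaches from the answers above can be compared side by side (wrapped in bash -c since printf -v and declare are bash features):

```shell
bash -c '
  foo=0; bar=foo
  printf -v "$bar" "%s" 1   # stores 1 in the variable named by $bar
  echo "printf: $foo"       # prints: printf: 1
  declare "$bar"=2          # $bar is expanded before declare runs
  echo "declare: $foo"      # prints: declare: 2
  eval "${bar}=3"           # eval re-parses the expanded string foo=3 as an assignment
  echo "eval: $foo"         # prints: eval: 3
'
```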

Bash: assignment of variable on same line not altering echo behavior [duplicate]

a=2
a=3 echo $a #prints 2
Can someone explain why anyone would use the code on line 2? I thought a=3 would be ignored since there is no "enter" after it, but I saw it in a script like the above and am not sure about its purpose.
$a is expanded by the shell (Bash) before a=3 is evaluated. So echo sees its argument as 2, which is what it prints. (If you set -x you can see that what gets executed is a=3 echo 2.)
var=val command is used to set an environment variable that command sees during its execution, but nowhere else. So when command reads its environment variables (e.g. using getenv()), it sees val as the value of var.
If echo were to look up $a while running, it would have the value 3.
The shell expands $a before setting up the environment in which a gets its new value (3). Even though echo runs with a=3 in its environment, the argument was expanded already, so it's too late.
You can instead do:
a=3 bash -c 'echo $a'
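Side by side, the two forms behave like this (POSIX-safe):

```shell
a=2
a=3 echo "$a"   # prints 2: $a is expanded before the temporary assignment takes effect
a=3; echo "$a"  # prints 3: the assignment completes before the expansion
```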

Command works normally in Shell, but not while using a script

I used this command in my Bash Shell:
printf $VAR1 >> `printf $VAR2`
and it normally worked. But when I write this into the script file and run it in Shell, it does not work. File "script.sh" contains this:
#!/bin/bash
printf $VAR1 >> `printf $VAR2`
and the output in Shell is:
script.sh: line2: `printf $VAR2`: ambiguous redirect
I donĀ“t know, how is this possible, because the command is absolutely the same. And of course, I run the script on the same system and in the same Shell window.
Thank you for your help.
There are 3 points worth addressing here:
Shell variables vs. environment variables:
Scripts (unless invoked with . / source) run in a child process that only sees the parent [shell]'s environment variables, not its regular shell variables.
This is what likely happened in the OP's case: $VAR1 and $VAR2 existed as regular shell variables, but not environment variables, so script script.sh didn't see them.
Therefore, for a child process to see a parent shell's shell variables, the parent must export them first, as a result of which they (also) become environment variables: export VAR1=... VAR2=...
Bash's error messages relating to output redirection (>, >>):
If the filename argument of an output redirection is an unquoted command substitution (`...`, or its modern equivalent, $(...)) - i.e., the output from a command - Bash reports the error ambiguous redirect in the following cases:
The command output has embedded whitespace, i.e., contains more than one word.
The command output is empty, which is what likely happened in the OP's case.
As an aside: In this case, the error message's wording is unfortunate, because there's nothing ambiguous about a missing filename - it simply cannot work, because files need names.
It is generally advisable to double-quote command substitutions (e.g., >> "$(...)") and also variable references (e.g., "$VAR2"): this will allow you to return filenames with embedded whitespace, and, should the output be unexpectedly empty, you'll get the (slightly) more meaningful error message No such file or directory.
Not double-quoting a variable reference or command substitution subjects its value to so-called shell expansions: further, often unintended, interpretation by the shell.
The wisdom of using a command substitution to generate a filename:
Leaving aside that printf $VAR2 is a fragile way to print the value of variable $VAR2 in general (the robust form again involves double-quoting: printf "$VAR2", or, even more robustly, to rule out inadvertent interpretation of escape sequences in the variable value, printf %s "$VAR2"), there is no good reason to employ a command substitution to begin with if all that's needed is a variable's value:
>> "$VAR2" is enough to robustly specify the value of variable $VAR2 as the target filename.
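A sketch with a filename containing whitespace (the path is hypothetical) shows why the quoted form is robust:

```shell
VAR1='Some Useless Data'
VAR2='/tmp/file with spaces.txt'   # hypothetical target filename containing spaces
printf '%s\n' "$VAR1" >> "$VAR2"   # quoted: no word splitting, no ambiguous redirect
cat "$VAR2"                        # prints: Some Useless Data
rm -f "$VAR2"
```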
I tried this on my Mac (10.11.1) in a terminal window and it worked fine.
Are you sure your default shell is bash?
echo $SHELL
Did you use export to set your shell vars?
$ export VAR1="UselessData"
$ export VAR2="FileHoldingUselessData"
$ ./script.sh
$ cat FileHoldingUselessData
UselessData$
However... I think echo does a better job here, since with an unquoted printf the output terminates at the first space, so...
$ cat script.sh
#!/bin/bash
echo $VAR1 >> `printf $VAR2`
$ ./script.sh
$ cat FileHoldingUselessData
Some Useless Data
Which leads me to believe you might want to just use echo instead of printf altogether:
#!/bin/bash
echo $VAR1 >> `echo $VAR2`
