I have a function defined in my .zshenv like this:
function my_func() {
    echo $1
}
I then execute a bash script from zsh whose content is:
type my_func
I get an error: /tmp/test.bash: line 3: type: my_func: not found
Whereas if I run type my_func from zsh, I get: my_func is a shell function from
Is there a way to use a zsh-defined function in a bash script? It seems to work for exported variables.
How bash does it
Bash itself can export its functions to other bash shells. It does so by exporting an environment variable of the form:
BASH_FUNC_functionNameHere%%=() { functionBodyHere; }
So in theory you could use the following zsh-command to export your function for bash:
export "BASH_FUNC_my_func%%=() { $(echo; typeset -f my_func | tail -n+2) }"
However, this does not work because zsh doesn't allow %% in identifiers.
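You can see this encoding from any bash shell; a minimal sketch (the exact layout of env's output may vary between bash versions):
$ bash -c 'my_func() { echo "$1"; }; export -f my_func; env | grep -A1 BASH_FUNC'
BASH_FUNC_my_func%%=() {  echo "$1"
}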
Workaround
Depending on how you start bash, you might be able to inject the function definition into the bash process.
When running a bash script like bash myScript or ./myScript, you can use bash -c "$(typeset -f my_func); export -f my_func; ./myScript" instead.
When starting an interactive bash shell using bash, you can use bash -c "$(typeset -f my_func); export -f my_func; exec bash" instead.
The right way
Either way, your function has to be a polyglot. That is, the source code of the function has to be understood by both zsh and bash. The above approach is not really viable if you want to export many functions or call many bash scripts.
It would be easier to define each of your functions inside its own script file and add the locations of those scripts to $PATH. That way you can call your "functions" from every shell and they will always work independently from your current shell.
Replacing the functions with script files only works if your functions don't need to modify the parent shell: cd or setting variables in a script has no effect on the caller. If you want to do things like that, you can still use a script file, but then you have to source it using . myFunctionFile. For sourcing, the source code has to be a polyglot again.
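For the function from the question, the script-file approach could look like this (a sketch; ~/bin stands in for any directory already in your $PATH):
$ cat ~/bin/my_func
#!/bin/sh
echo "$1"
$ chmod +x ~/bin/my_func
$ my_func hello    # works the same from zsh, bash, or any other shell
hello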
I defined the following which function as recommended in man which:
The recommended way to use this utility is by adding an alias (C shell)
or shell function (Bourne shell) like the following:
which()
{
    (alias; declare -f) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@"
}
export -f which
Unlike /usr/bin/which, which only finds commands, this function finds commands, aliases and functions. My question is: why is (alias; declare -f) being piped into /usr/bin/which "$@"?
/usr/bin/which is not built into the shell. Consequently, it has no way to access shell-internal state (like aliases or function definitions) unless that content is fed into it.
That's what's being done here.
However, that's completely unnecessary on any modern shell. Feeding shell syntax into an external program to be parsed there is innately unreliable compared to having the shell itself give you the output you need.
Don't do this. Use the shell built-in type instead.
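For example, from a bash prompt (ll and greet are hypothetical names created just for this demonstration; the exact wording of type's output can vary between shells and versions):
$ alias ll='ls -l'
$ greet() { echo hi; }
$ type ll
ll is aliased to `ls -l'
$ type greet
greet is a function
greet ()
{
    echo hi
}
$ type ls
ls is /usr/bin/ls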
I'm trying to access a variable declared by previous command (inside a Makefile).
Here's the Makefile:
all:
	./script1.sh
	./script2.sh
Here's the script declaring the variable I want to access, script1.sh:
#!/usr/bin/env bash
myVar=1234
Here's the script trying to access the variable previously defined, script2.sh:
#!/usr/bin/env bash
echo $myVar
Unfortunately, when I run make, myVar isn't accessible. Is there another way to do this? Thanks.
Make will run each shell command in its own shell. And when the shell exits, its environment is lost.
If you want variables from one script to be available in the next, there are constructs which will do this. For example:
all:
	( . ./script1.sh; ./script2.sh )
This causes Make to launch a single shell to handle both scripts.
Note also that you will need to export the variable in order for it to be visible in the second script; unexported variables are available only to the shell that defines them, not to the child processes it launches.
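For the example in the question, that means script1.sh becomes (the shebang is harmless here; once the file is sourced, it acts as a mere comment):
#!/usr/bin/env bash
export myVar=1234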
UPDATE (per Kusalananda's comment):
If you want your shell commands to populate MAKE variables instead of merely environment variables, you may have options that depend on the version of Make that you are running. For example, in BSD make and GNU make, you can use "variable assignment modifiers" including (from the BSD make man page):
!= Expand the value and pass it to the shell for execution and
assign the result to the variable. Any newlines in the result
are replaced with spaces.
Thus, with BSD make and GNU make, you could do this:
$ cat Makefile
foo!= . ./script1.sh; ./script2.sh
all:
	@echo "foo=${foo}"
$
$ cat script1.sh
export test=bar
$
$ cat script2.sh
#!/usr/bin/env bash
echo "$test"
$
$ make
foo=bar
$
Note that script1.sh does not include a shebang, because it's being sourced and therefore runs in the calling shell, whatever that is; a shebang line would be merely a comment here. If you're on a system where the default shell is POSIX but not bash (like Ubuntu, Solaris, or FreeBSD), this should still work, because all POSIX shells understand the concept of exporting variables.
The two separate invocations of the scripts create two separate environments. The first script sets a variable in its environment and exits (the environment is lost). The second script does not have that variable in its environment, so it outputs an empty string.
Environment variables only pass from a parent shell to its child processes (never the other way around), and only those variables that the parent shell has export-ed are passed. So if the first script invoked the second script, the value would be output (provided it was export-ed in the first script).
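A quick illustration at an interactive bash prompt (FOO and BAZ are arbitrary example names):
$ export FOO=bar
$ bash -c 'echo "$FOO"'    # a child process sees the exported variable
bar
$ bash -c 'export BAZ=qux'
$ echo "$BAZ"              # but the child's export never reaches the parent

$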
In a shell, you would source the first file to set the variables therein in the current environment (and then export them!). However, in Makefiles it's a bit trickier since there's no convenient source command.
Instead you may want to read this StackOverflow question.
EDIT in light of @ghoti's answer: @ghoti has a good solution, but I'll leave my answer here as it explains a bit more verbosely what environment variables are and what we can and cannot do with them with regard to passing them between environments.
This is an attempt to simplify a problem I'm having.
I define a function which sets a variable and this works in this scenario:
$ function myfunc { res="ABC" ; }
$ res="XYZ"
$ myfunc
$ echo $res
ABC
So res has been changed by the call to myfunc. But:
$ res="XYZ"
$ myfunc | echo
$ echo $res
XYZ
So when myfunc is part of a pipe the value doesn't change.
How can I make myfunc work the way I desire even when a pipe is involved?
(In the real script "myfunc" does something more elaborate of course and the other side of the pipe has a zenity progress dialogue rather than a pointless echo)
Thanks
This isn't possible on Unix. To understand this better, you need to know what a variable is. Bash keeps two internal tables with all defined variables. One is for variables local to the current shell. You can create those with a plain assignment: name=value. These are local to the process; they are not inherited when a new process is created.
To export a variable to new child processes, you must export it with export name. That tells bash "I want children to see the value of this variable". It's a security feature.
When you invoke a function in bash, it's executed within the context of the current shell, so it can access and modify all the variables.
But a pipe is a list of processes which are connected with I/O pipes. That means your function is executed in a shell and only the output of this shell is visible to echo.
Even exporting in myfunc wouldn't work, because export only affects processes started by the shell that performed the export, and echo was started by the same shell as myfunc:
bash
+-- myfunc
+-- echo
That is, echo is not a child of myfunc.
Workarounds:
Write the variable into a file (a sketch follows below).
Use a more complex output format, like XML or several lines of output, where the first line is always the variable and the real output comes in the following lines.
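A minimal sketch of the first workaround; the file name /tmp/myfunc.res is arbitrary, and cat stands in for the zenity pipeline:
#!/bin/bash
myfunc() {
    res="ABC"
    printf '%s\n' "$res" > /tmp/myfunc.res    # persist the value for the parent
}
myfunc | cat                  # myfunc runs in a subshell here
res=$(< /tmp/myfunc.res)      # read the value back in the parent shell
echo "$res"                   # prints ABC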
As #Aaron said, the problem is caused by the function running in a subshell. But there is a way to avoid this in bash, using process substitution instead of a pipe:
myfunc > >(echo)
This does much the same thing a pipe would, but myfunc runs in the main shell rather than a subprocess. Note that this is a bash-only feature, and you must use #!/bin/bash as your shebang for this to work.
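Putting it together with the example from the question (a sketch; cat stands in for the zenity side of the pipe):
#!/bin/bash
res="XYZ"
myfunc() { res="ABC" ; }
myfunc > >(cat)    # myfunc itself runs in the current shell
echo "$res"        # prints ABC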
Let's say I have a shell/bash script named test.sh with:
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh
My test2.sh looks like this:
#!/bin/bash
echo ${TESTVARIABLE}
This does not work. I do not want to pass all variables as parameters since imho this is overkill.
Is there a different way?
You have basically two options:
Make the variable an environment variable (export TESTVARIABLE) before executing the 2nd script.
Source the 2nd script, i.e. . test2.sh and it will run in the same shell. This would let you share more complex variables like arrays easily, but also means that the other script could modify variables in the source shell.
UPDATE:
To use export to set an environment variable, you can either use an existing variable:
A=10
# ...
export A
This ought to work in both bash and sh. bash also allows it to be combined like so:
export A=10
This also works in my sh (which happens to be bash; you can use echo $SHELL to check). POSIX does specify the combined export name=value form, but some very old Bourne shells don't support it, so separating the two is the safest option.
Any variable you export in this way will be visible in scripts you execute, for example:
a.sh:
#!/bin/sh
MESSAGE="hello"
export MESSAGE
./b.sh
b.sh:
#!/bin/sh
echo "The message is: $MESSAGE"
Then:
$ ./a.sh
The message is: hello
The fact that these are both shell scripts is also just incidental. Environment variables can be passed to any process you execute, for example if we used python instead it might look like:
a.sh:
#!/bin/sh
MESSAGE="hello"
export MESSAGE
./b.py
b.py:
#!/usr/bin/env python3
import os
print('The message is:', os.environ['MESSAGE'])
Sourcing:
Instead we could source like this:
a.sh:
#!/bin/sh
MESSAGE="hello"
. ./b.sh
b.sh:
#!/bin/sh
echo "The message is: $MESSAGE"
Then:
$ ./a.sh
The message is: hello
This more or less "imports" the contents of b.sh directly and executes it in the same shell. Notice that we didn't have to export the variable to access it. This implicitly shares all the variables you have, as well as allows the other script to add/delete/modify variables in the shell. Of course, in this model both your scripts should be the same language (sh or bash). To give an example how we could pass messages back and forth:
a.sh:
#!/bin/sh
MESSAGE="hello"
. ./b.sh
echo "[A] The message is: $MESSAGE"
b.sh:
#!/bin/sh
echo "[B] The message is: $MESSAGE"
MESSAGE="goodbye"
Then:
$ ./a.sh
[B] The message is: hello
[A] The message is: goodbye
This works equally well in bash. It also makes it easy to share more complex data which you could not express as an environment variable (at least without some heavy lifting on your part), like arrays or associative arrays.
Fatal Error gave a straightforward possibility: source your second script! If you're worried that this second script may alter some of your precious variables, you can always source it in a subshell:
( . ./test2.sh )
The parentheses will make the source happen in a subshell, so that the parent shell will not see the modifications test2.sh could perform.
There's another possibility that should definitely be referenced here: use set -a.
From the POSIX set reference:
-a: When this option is on, the export attribute shall be set for each variable to which an assignment is performed; see the Base Definitions volume of IEEE Std 1003.1-2001, Section 4.21, Variable Assignment. If the assignment precedes a utility name in a command, the export attribute shall not persist in the current execution environment after the utility completes, with the exception that preceding one of the special built-in utilities causes the export attribute to persist after the built-in has completed. If the assignment does not precede a utility name in the command, or if the assignment is a result of the operation of the getopts or read utilities, the export attribute shall persist until the variable is unset.
From the Bash Manual:
-a: Mark variables and functions which are modified or created for export to the environment of subsequent commands.
So in your case:
set -a
TESTVARIABLE=hellohelloheloo
# ...
# Here put all the variables that will be marked for export
# and that will be available from within test2 (and all other commands).
# If test2 modifies the variables, the modifications will never be
# seen in the present script!
set +a
./test2.sh
# Here, even if test2 modifies TESTVARIABLE, you'll still have
# TESTVARIABLE=hellohelloheloo
Observe that the specs only specify that with set -a the variable is marked for export. That is:
set -a
a=b
set +a
a=c
bash -c 'echo "$a"'
will echo c and not an empty line nor b (that is, set +a doesn't unmark for export, nor does it “save” the value of the assignment only for the exported environment). This is, of course, the most natural behavior.
Conclusion: using set -a/set +a can be less tedious than exporting manually all the variables. It is superior to sourcing the second script, as it will work for any command, not only the ones written in the same shell language.
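For example, the marked variables are just as visible to a non-shell child process. A small sketch, assuming python3 is available:
set -a
TESTVARIABLE=hellohelloheloo
set +a
python3 -c 'import os; print(os.environ["TESTVARIABLE"])'   # prints hellohelloheloo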
There's actually an easier way than exporting and unsetting or sourcing again (at least in bash, as long as you're ok with passing the environment variables manually):
let a.sh be
#!/bin/bash
secret="winkle my tinkle"
echo Yo, lemme tell you \"$secret\", b.sh!
Message=$secret ./b.sh
and b.sh be
#!/bin/bash
echo I heard \"$Message\", yo
Observed output is
[rob@Archie test]$ ./a.sh
Yo, lemme tell you "winkle my tinkle", b.sh!
I heard "winkle my tinkle", yo
The magic lies in the last line of a.sh, where Message, for only the duration of the invocation of ./b.sh, is set to the value of secret from a.sh.
Basically, it's a little like named parameters/arguments. More than that, though, it even works for variables like $DISPLAY, which controls which X server an application displays on.
Remember, the length of the list of environment variables is not infinite. On my system with a relatively vanilla kernel, xargs --show-limits tells me the maximum size of the arguments buffer is 2094486 bytes. Theoretically, you're using shell scripts wrong if your data is any larger than that (pipes, anyone?)
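If you want to check that limit on your own system, getconf can report it; note that arguments and environment share this budget (the number below is just an example from one Linux machine):
$ getconf ARG_MAX
2097152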
In Bash if you export the variable within a subshell, using parentheses as shown, you avoid leaking the exported variables:
#!/bin/bash
TESTVARIABLE=hellohelloheloo
(
export TESTVARIABLE
source ./test2.sh
)
The advantage here is that after you run the script from the command line, you won't see a $TESTVARIABLE leaked into your environment:
$ ./test.sh
hellohelloheloo
$ echo $TESTVARIABLE
#empty! no leak
$
Adding to Fatal Error's answer, there is one more way to pass variables to another shell script.
The solutions suggested above have some drawbacks:
Using export: it causes the variable to be visible outside its intended scope, which is not good design practice.
Using source: it may cause name collisions or accidentally overwrite a predefined variable in some other shell script file that has sourced another file.
There is another simple solution available to us.
Considering the example you posted:
test.sh
#!/bin/bash
TESTVARIABLE=hellohelloheloo
./test2.sh "$TESTVARIABLE"
test2.sh
#!/bin/bash
echo $1
output
hellohelloheloo
Also, it is important to note that the double quotes are necessary if we pass multi-word strings.
Taking one more example
master.sh
#!/bin/bash
echo in master.sh
var1="hello world"
sh slave1.sh $var1
sh slave2.sh "$var1"
echo back to master
slave1.sh
#!/bin/bash
echo in slave1.sh
echo value :$1
slave2.sh
#!/bin/bash
echo in slave2.sh
echo value : $1
output
in master.sh
in slave1.sh
value :hello
in slave2.sh
value : hello world
back to master
It happens because of the reasons aptly described in this link
Another option is using eval. This is only suitable if the strings are trusted. The first script can echo the variable assignments:
echo "VAR=myvalue"
Then:
eval $(./first.sh) ./second.sh
This approach is of particular interest when the second script you want to set environment variables for is not in bash and you also don't want to export the variables, perhaps because they are sensitive and you don't want them to persist.
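A minimal sketch of the whole round trip, using the hypothetical names first.sh and second.sh:
$ cat first.sh
#!/bin/bash
echo "VAR=myvalue"
$ cat second.sh
#!/bin/bash
echo "$VAR"
$ eval $(./first.sh) ./second.sh
myvalue
The eval line expands to VAR=myvalue ./second.sh, i.e. a command-prefix assignment, so VAR exists only in second.sh's environment and never persists in the calling shell.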
Another way, which is a little bit easier for me, is to use named pipes. Named pipes provide a way to synchronize and send messages between different processes.
A.bash:
#!/bin/bash
msg="The Message"
echo $msg > A.pipe
B.bash:
#!/bin/bash
msg=$(cat ./A.pipe)
echo "message from A : $msg"
Usage:
$ mkfifo A.pipe #You have to create it once
$ ./A.bash & ./B.bash # you have to run your scripts at the same time
B.bash will wait for the message, and as soon as A.bash sends it, B.bash will continue its work. (Opening a FIFO blocks until both a reader and a writer have opened it, which is why the two scripts have to run at the same time.)