I have logInfo()/logError() functions in a shell script (logger.sh). There are other shell scripts (for example, createuser.sh) that require logging. How do I invoke functions like logInfo() from createuser.sh?
Without a way to invoke them across scripts, these logInfo/logError functions end up copied into every shell script that requires logging.
Put your logger functions in a separate file (that only contains functions, no commands), say myfuncs.sh. Then in any other script that needs these functions, somewhere near the top of that script add a line:
. myfuncs.sh
or, equivalently:
source myfuncs.sh
The functions in myfuncs.sh will then be available in that script.
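For example, a minimal myfuncs.sh could look like this (the message format here is just a sketch):
#!/bin/bash
# Definitions only -- nothing executes when this file is sourced
logInfo()  { echo "INFO  $(date '+%F %T') $*"; }
logError() { echo "ERROR $(date '+%F %T') $*" >&2; }
and createuser.sh would use it as:
#!/bin/bash
. /path/to/myfuncs.sh
logInfo "Creating user..."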
If the only thing in the logger.sh script is the functions (i.e., nothing runs if you execute it from the command line), then you can source the shell script by including the line:
. logger.sh
See: https://ss64.com/bash/source.html
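As an aside, if you want logger.sh to be safe to source and also runnable on its own (say, for a quick self-test), a bash-specific guard such as this sketch works:
#!/bin/bash
logInfo() { echo "INFO: $*"; }
# Runs only when the file is executed directly, not when sourced
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    logInfo "self-test: logger.sh executed directly"
fi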
I'm trying to run a zsh/bash script that writes several values to environment variables. I want these variables available in the parent shell, as they are required for several tools we use. If I manually run the script using '. myscript myparameter' I get the expected result, but defining a zsh function to do this without having to use dot notation does not result in the variables being set.
I'm pretty new to zsh/bash scripting, and this has been my first real effort at writing something useful. I am running this on macOS but would like it to work on Linux as well. The script my function is sourcing does some bash logic and in some cases also executes a third-party tool (really a bash script that calls a Java binary). In my script I call the third-party tool directly by its executable name; calling it using exec or dot notation does not seem to work properly.
zsh Function:
function okta-auth {
  . okta_setprofile $1
}
My script:
https://gist.github.com/ckizziar/a60a84a6a148a8fd7b0ef536409352d3
Using '. okta_setprofile myprofile' I receive the expected output of the okta_setprofile script: four environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION, and AWS_SESSION_TOKEN) are set in my shell.
Using 'okta-auth myprofile', the script's output is the same as before; however, after execution the environment variables are not set or updated.
Updated 2019-02-06 to show flow:
[okta_setprofile flow diagram]
I have a functions.sh script with a bunch of global functions that I want to use in other scripts. This functions script is written in bash (#!/bin/bash).
Many such scripts have been written over the years, so the older ones use #!/bin/sh (which is different from #!/bin/bash on Solaris).
My question here is: when you call the functions.sh file (with . /path/to/functions.sh) from within an sh (not bash) script, is the shebang line of functions.sh interpreted?
In a nutshell, can you call a bash-written function script from a script of another shell type (with proper shebang lines in both)?
Thanks!
Since you want to use the functions, you need to source the script rather than execute it:
source /path/to/functions.sh
or, as per the POSIX standard, do
. /path/to/functions.sh
from within the sh script, which is equivalent to including the contents of functions.sh in the file at the point where the command is run.
You need to understand the difference between sourcing and executing a script.
Sourcing runs the script in the parent shell in which it is invoked; all the environment variables and functions are retained until the parent shell is terminated (the terminal is closed, or the variables are reset or unset).
Executing forks a new shell from the parent shell; variables and functions, including exported variables, are retained only in the subshell's environment and are destroyed when the script terminates.
When you source a file, the shebang in that file is ignored: the contents are effectively included in the calling script, so the shebang is no longer on the first line and is treated as an ordinary comment.
When you include an old script with #!/bin/sh, it will be handled by the shell of the caller. Most things written for /bin/sh will also work in bash.
When you are running an sh or ksh script and you include (source) a bash file, any bash-specific code will cause problems.
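A quick way to see the difference, using a hypothetical setvar.sh (made executable with chmod +x):
#!/bin/sh
MYVAR=hello
Then, from an interactive shell:
./setvar.sh && echo "executed: '$MYVAR'"    # prints executed: '' -- the subshell's variable is gone
. ./setvar.sh && echo "sourced: '$MYVAR'"   # prints sourced: 'hello' -- set in the current shell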
I have a master shell script which calls functions defined in various other shell scripts. The master script includes the other scripts using the 'source' command.
I want to use a common interpreter for all the scripts, regardless of what the shebang ("#!/bin/sh") is set to in those scripts. I want to supply that interpreter from the command line.
for example:
master.sh (with #!/bin/sh)
subscript1.sh (with #!/bin/sh)
subscript2.sh (with #!/bin/sh)
subscript3.sh (with #!/bin/sh)
master.sh calls functions defined in the subscripts, which are included via 'source subscript1.sh', 'source subscript2.sh' and 'source subscript3.sh'.
When I run ./master.sh, the subscripts use their respective interpreters as directed by the "#!/bin/sh" line. I want to run all of them, the master and the subscripts, using '/bin/bash', but without changing the shebang lines, because there are a lot of such scripts. Is there any way to do this?
Call the interpreter explicitly:
bash ./master.sh
Note that the shebang line has no effect on scripts run using source. That command always executes the script in the current shell process, so it uses whatever interpreter is currently running.
But this all seems dangerous. If someone writes #!/bin/sh instead of #!/bin/bash, it may have dependencies on sh syntax that would be violated if bash were used instead.
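You can watch the shebang being ignored with a hypothetical probe script, probe.sh:
#!/bin/sh
echo "interpreter: ${BASH_VERSION:-not bash}"
Sourcing it from bash runs it in bash despite the #!/bin/sh line:
bash -c 'source ./probe.sh'    # prints a bash version string
sh -c '. ./probe.sh'           # prints interpreter: not bash (where sh is not bash)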
I have noticed some weird behavior when sourcing another script within my shell script. The script that I am sourcing to set up the environment takes an optional argument, e.g.
source setup.sh version1
However, in my shell script I also have command-line argument variables. For example:
./myscript.sh TEST 1
Inside myscript.sh:
#!/bin/zsh
source setup.sh
echo ROOT version setup $ROOT_SYS
...more of the script
The problem that I have noticed with my script above is that the $1 argument (TEST in this example) is used in the source setup.sh command. This causes the command to become
source setup.sh TEST
which of course fails as setup.sh does not have a version TEST.
I solved this problem by editing my script to below.
#!/bin/zsh
source setup.sh version1
echo ROOT version setup $ROOT_SYS
...more of the script
The source command now does not pick up the $1 argument.
Why/How does the source command pick up the $1 argument when I am running my shell script?
Historically, unix shells didn't allow any arguments to be passed to scripts called with the . built-in (source is an alias of . available in bash, ksh and zsh). The . built-in means “act as if this file was actually included here”.
In bash, ksh and zsh, if you pass extra arguments to the . built-in, they become positional parameters ($1 and so on) in the sourced script. If you pass zero arguments, the positional parameters of the main script remain in effect. In those shells, . behaves rather like calling a function, though not perfectly so (in particular, in bash, if you modify the positional parameters inside the sub-script, the modification is seen by the main script).
A simple way of avoiding this kind of difficulty is to only ever define functions (and perhaps variables) in the subscript. Treat it as a code library, such that merely sourcing it has no effect, and then call functions from the sub-script to actually do something.
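To make this concrete, with a hypothetical one-line printargs.sh:
echo "sourced script sees \$1 = '$1'"
a caller invoked as ./main.sh TEST 1 would see:
. ./printargs.sh            # prints: sourced script sees $1 = 'TEST'
. ./printargs.sh version1   # prints: sourced script sees $1 = 'version1'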
This is because source executes the code of setup.sh as if it were in place, so when setup.sh accesses, say, $1, the value it sees is the first argument of the calling script. If you want to avoid that, you could execute it instead:
./setup.sh
or, if you need to get back some variables or values from it, change it to return the result in the form of output, something like:
ROOT_SYS=$(./setup.sh)
Finally, as you figured out, the source keyword also allows passing arguments to the script; if you don't provide any, the calling script's own arguments remain visible to the sourced code.
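For the output-based variant, setup.sh would print the value instead of setting a variable; a sketch (the version string is made up):
echo "version1"
and in myscript.sh:
ROOT_SYS=$(./setup.sh)    # capture the child process's output
echo ROOT version setup $ROOT_SYS
Note this only works when setup.sh has a single value to hand back; a script that exports several variables still needs to be sourced.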
I've got a function that I want to reference and use across different scripts. Is there any way to do this? I don't want to be re-writing the same function for different scripts. Thanks.
Sure - in your script, where you want to use the function, you can write a command like
source function.sh
which is equivalent to including the contents of function.sh in the file at the point where the command is run. Note that bash looks for function.sh in the directories in $PATH; if it's not in any of them, you need to give an explicit path (absolute, or relative such as ./function.sh).
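For example:
source /opt/lib/function.sh    # absolute path (the directory name here is made up)
source ./function.sh           # path relative to the current directory
source function.sh             # bash searches the directories in $PATH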
Yes, you can localize all your functions in a common file (or files). This is exactly what I do with all my utility functions. I have a single utility.shinc in my home directory that's used by all my programs with:
. $HOME/utility.shinc
which executes the script in the context of the current shell. This is important - if you simply run the include script, it will run in a subshell and any changes will not propagate to the current shell.
You can do the same thing for groups of scripts. If it's part of a "product", I'd tend to put all the scripts, and any included scripts, in a single shell directory to ensure everything is localized.
Yes, you can!
Add source /path/to/functions.sh in your script.
Keep in mind that any top-level commands in the sourced file run immediately at the point of the source line, before the rest of your script. That's why I prefer to capture a function's output in a variable, e.g. VAR=$(function_name), and use it anywhere later in the script.
Hope this works for you :)
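For example, with a hypothetical get_timestamp function in functions.sh:
get_timestamp() { date '+%F %T'; }
and in the consuming script:
#!/bin/bash
source /path/to/functions.sh
VAR=$(get_timestamp)    # run the function once and keep its output
echo "started at $VAR"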