I am using a global variable, INSTALL_DIR='/tmp', and then calling a function that uses that variable in a shell script.
Which of the following is the correct way to use the variable?
Method 1:
INSTALL_DIR='/tmp'
install_app() {
    echo "application path - $INSTALL_DIR"
}
install_app
Method 2:
INSTALL_DIR='/tmp'
install_app() {
    app=$1
    echo "application path - $app"
}
install_app "$INSTALL_DIR"
If you want to use the variable in multiple places within the same script, the first approach is better.
But if the variable will be used in different scripts, you have to export it (export makes it an environment variable) in the first script before using it in the second one. Refer to this link on exporting from one script to another:
Pass all variables from one shellscript to another?
To run a different script in the same scope, source it with a leading dot: . ./myscript.sh executes it within the current shell.
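As a minimal sketch of both routes (first.sh and second.sh are placeholder names, not from the question):

# first.sh (hypothetical)
export INSTALL_DIR='/tmp'     # export so child processes can see it
bash ./second.sh              # child shell: sees INSTALL_DIR because it was exported
. ./second.sh                 # sourced: runs in the current shell, same scope

# second.sh (hypothetical)
echo "application path - $INSTALL_DIR"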
I'm trying to make variables exist outside of a shell script without using source.
The variables are declared in the shell script with
export varA=3
and I run the script with ./filename.sh
I want
echo $varA
in the terminal to return 3 (i.e. the value of varA), in other words, to extend the variable's scope to outside of the script.
To sum up: how do I make the variables inside a shell script exist outside of it?
Thank you in advance
You can run your script this way:
. ./filename.sh
This means it will not spawn a new shell but will run in the current one, so the variables you set in your script will be available in your shell. This is the "source" mechanism mentioned in the comments.
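A quick check, assuming filename.sh contains export varA=3 as in the question:

. ./filename.sh
echo "$varA"      # prints 3 in the current shell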
I want to pass the path of a bash script that sources some environment variables as an argument to another bash script, so that the second script can run it and use those variables. It works well with no arguments: if I hard-code the path to the bash script, it runs and I can retrieve the environment variables in the main script. The problem happens when I send the path as an argument; then it refuses to run it.
For example, if the path is /path/script.bash and I send the path as an argument, I get the error /path/env_set: No such file or directory
I run the script with this line:
. $1 (this doesn't work)
. /path/script.bash (this works)
If I use
bash -c $1
the script runs, but it does not set the environment variables for use in the main script (bash -c starts a separate shell process).
I don't know why env_set replaces the script name when I use arguments. Is there any approach or workaround to achieve my goal?
It sounds like the problem could be either with your quoting, or with relative paths.
Quoting isn't just about spaces; it's also about pathname expansion (i.e. the []?* characters).
Do
. "$1"
(not . $1)
And remember, if you're giving a relative path for the environment script (or that script uses some relative paths), you will have a problem. Those paths are relative to the pwd - which is wherever you happen to be when you execute the main script (not where any of the script files themselves happen to be located, for example).
Finally, you can debug this problem by throwing echo at the start, and running the script (if it's safe to do that):
echo . "$1"
exit # Add exit here if you don't want to run w/o the vars.
Now you can see what you're actually trying to source.
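If the culprit is a relative path for the argument itself, one possible workaround (a sketch; realpath is assumed to be available, as in GNU coreutils) is to resolve it to an absolute path before sourcing. Note this does not fix relative paths used inside the sourced script itself:

env_script=$(realpath "$1")   # turn a relative argument into an absolute path
. "$env_script"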
In script 1, in your main code, you can call and run script 2:
. ./script2
The first . is the source built-in (it runs the file in the current shell), and the second . refers to the current directory.
This creates the environment variables for you and applies any other settings in the same shell session.
When script 2 has finished running, script 1 continues, and the environment variables created in script 2 are available to script 1 in the same session.
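A minimal sketch of the two files (the names and the DB_HOST/DB_PORT variables are placeholders, not from the question):

# script2 (sets up the environment)
export DB_HOST=localhost
export DB_PORT=5432

# script1 (sources script2, then uses its variables)
. ./script2
echo "connecting to $DB_HOST:$DB_PORT"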
I want to use the parameters that we define in the Jenkins job as arguments to the shell commands in the same job.
I have created a parameterized build with the following parameters:
high.version: 234
low.version: 220
I want to use these variables as arguments for the build's shell script:
/bin/bash /hai/mycode/scripts/run_script.sh high.version
How do I use these parameters in the same job?
Jenkins will create environment variables with the parameters' names.
The caveat here is that Jenkins will also do that for parameters that do not represent valid variable names -- those are difficult to access in bash. This is the case in your example, as bash variable names must not contain the . character.
The easiest solution is to:
rename your parameters, e.g. to high_version and low_version (which are valid bash variable names)
then use the corresponding variable names when calling your script
Example:
/bin/bash /hai/mycode/scripts/run_script.sh "$high_version"
If you cannot rename parameters to valid bash variable names (e.g., for usability reasons: Jenkins presents parameter names to end users in the Web form for starting a build), you can still access such parameters by grepping for the parameter name in the output of the env command.
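For example, a parameter literally named high.version could be read like this (a sketch; the sed expression is just one way to strip the name from the env output):

high_version=$(env | sed -n 's/^high\.version=//p')
echo "$high_version"    # 234 in the example above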
What really helped me was Hudson: How to pass parameters to shell script
Solution: the variables are UPPERCASE even if you define them in lowercase!
Use the following syntax to pass a Jenkins parameter to a shell script:
e.g. YourScript.sh %JENKINS_PARAMETER%
After that, inside your script you can use that parameter like a normal shell-script command-line parameter,
e.g. myParam=$1
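On a Unix build agent the same idea looks roughly like this (HIGH_VERSION is an assumed parameter name, chosen to be bash-friendly):

# Jenkins "Execute shell" build step
./YourScript.sh "$HIGH_VERSION"

# inside YourScript.sh
myParam=$1
echo "building version $myParam"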
Have you tried this?
echo "function hello() { " > gg.sh
echo "echo \$1">> gg.sh
echo "}" >> gg.sh
echo "hello \$1" >> gg.sh
chmod +x gg.sh
./gg.sh "$hello_version"
Be careful with the variable name; dots are not well supported. For details, see:
https://issues.jenkins-ci.org/browse/JENKINS-7180
It is not good practice to have a dot (.) in your parameter names. Choose either highVersion or high_version as the parameter name.
From your question it seems that you're working with a Freestyle job, but many devs coming here will also be interested in the Pipeline syntax, so here is how to use params in the Jenkins Pipeline DSL.
There are two ways you can use Jenkins parameters in the Jenkins Pipeline shell script -
As a Shell parameter (single quotes, so the shell itself expands the environment variable)
stage('Test'){
    sh '/bin/bash /hai/mycode/scripts/run_script.sh "$highVersion"'
}
As a Groovy parameter (Groovy interpolates params.highVersion before the shell runs)
stage('Test'){
    sh "/bin/bash /hai/mycode/scripts/run_script.sh ${params.highVersion}"
}
I would recommend the second method, since Groovy is the Pipeline DSL.
Is it possible to create a local variable in a script, but without using a function?
I saw that we can't just use local variab=1.
Any solution?
All shell variables which are not marked as exported are local to the shell they are created in. Exported variables are copied into subshells; strictly speaking, they are not really shared with the subshells because changes are still local to the shell in which the change is made.
However, some shell constructs are not subshells. In particular, function execution does not cause a subshell to be created, and neither does the source/. built-in. In functions (but not in sourced files) you can make a variable local by declaring it to be local.
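For completeness, a minimal illustration of local inside a function (not specific to any of the scripts above):

a=outside
f() {
    local a=inside    # visible only while f runs
    echo $a           # prints: inside
}
f
echo $a               # prints: outside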
If you want to make a shell variable local to a block within a script, you can create a subshell using the (…) syntax:
a=outside
(
    # This is a subshell, so the following is local
    a=inside
    echo $a
)
# Back to the outer shell
echo $a

{
    # This is **not** a subshell, so the following affects the outer a
    a=braced
}
# Here, a has changed
echo $a
It depends what you mean by local.
If you mean local to a function, obviously that makes no sense without a function.
But if you mean local to the script, then
variable=value
creates a variable not visible in the environment of processes you start, unless you export it.
Furthermore, you can create subshells within one shell script (e.g. using ( )); a subshell is a new process, so variables set there will be local to that process.
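A small demonstration of that point (bash -c is used here only to start a child process):

variable=value
bash -c 'echo "child sees: $variable"'   # the child prints an empty value
export variable
bash -c 'echo "child sees: $variable"'   # now the child prints: child sees: value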
I've got a function that I want to reference and use across different scripts. Is there any way to do this? I don't want to be re-writing the same function for different scripts. Thanks.
Sure - in your script, where you want to use the function, you can write a command like
source function.sh
which is equivalent to including the contents of function.sh in the file at the point where the command is run. Note that function.sh needs to be in one of the directories in $PATH; if it's not, give a path instead (e.g. ./function.sh or an absolute path).
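A minimal sketch, with placeholder file and function names:

# function.sh
greet() {
    echo "hello, $1"
}

# main.sh
source ./function.sh
greet "world"       # prints: hello, world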
Yes, you can localize all your functions in a common file (or files). This is exactly what I do with all my utility functions. I have a single utility.shinc in my home directory that's used by all my programs with:
. "$HOME/utility.shinc"
which executes the script in the context of the current shell. This is important - if you simply run the include script, it will run in a subshell and any changes will not propagate to the current shell.
You can do the same thing for groups of scripts. If it's part of a "product", I'd tend to put all the scripts, and any included scripts, in a single shell directory to ensure everything is localized.
Yes, you can!
Add a source line for the file containing the function in your script.
I prefer to capture the result in a variable, e.g. VAR=$(function_name). If you simply call the function right after the source line under #!/bin/bash, your script executes the imported function's task first and only then its own work, so it is better to assign the function's output to a variable and use it wherever you need it in the script.
Thank you, and I hope this works for you :)
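A sketch of that pattern (functions.sh, get_version, and the version string are hypothetical, purely for illustration):

# functions.sh (hypothetical file)
get_version() {
    echo "1.2.3"
}

# main.sh (hypothetical file)
. ./functions.sh
VER=$(get_version)      # capture the function's output in a variable
echo "using version $VER"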