For some reason after running the main script:
sudo bash main.sh
-> execution stops at the first diff redirected to file.
However, when I comment out the function name and braces and call patching.sh directly, it works.
What is wrong with my script that it stops when called as a function from another file, but works when called directly?
main.sh:
set -e
source $(dirname $0)/Scripts/patching.sh
# Overwrite files
update_files
patching.sh:
#!/bin/bash
function update_files() {
declare -r SW_DIR='Source/packages/'
CMP_FILE='file1.c'
diff -u ./$SW_DIR/examples/$CMP_FILE ./Source/$CMP_FILE > file.diff
cp -v ./Source/$CMP_FILE ./$SW_DIR/examples/$CMP_FILE
}
During my debugging - I added the -x option to set. This is what I see now:
+ declare -r SW_DIR=Source/packages
+ CMP_FILE=file1.c
+ diff -u ./Source/packages/examples/file1.c ./Source/file1.c
And that's the last line. If I omit the redirection operator, the diff is simply shown in the console and that's it: it does not proceed further, and there is no error message.
See What does set -e mean in a bash script? and BashFAQ/105 (Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?). Execution stops after the diff when set -e is in effect because diff exits with a non-zero status when the files it compares differ. This behaviour is one of the downsides of using set -e. Follow the links for more useful information.
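The usual workaround is to neutralize diff's expected exit status of 1 ("files differ") while still letting real failures (status 2) surface. A minimal sketch, using throwaway files under /tmp:

```shell
#!/usr/bin/env bash
set -e

printf 'a\n' > /tmp/old.txt
printf 'b\n' > /tmp/new.txt

# diff exits 1 when the files differ, which set -e treats as a failure.
# "|| true" keeps the script alive regardless of the diff outcome.
diff -u /tmp/old.txt /tmp/new.txt > /tmp/file.diff || true
echo "still running"

# To distinguish "files differ" (1) from real trouble (2), test the status:
if diff -q /tmp/old.txt /tmp/new.txt > /dev/null; then
    echo "files identical"
else
    status=$?
    if [ "$status" -eq 1 ]; then
        echo "files differ"
    else
        echo "diff failed with status $status" >&2
    fi
fi
```

Commands used as an if condition (or followed by ||) do not trigger set -e, which is why both forms survive.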
Jenkins version: 2.164.3
I have my Jenkinsfile, and in one of the stages I'm calling the following code, but it gives me an error.
I don't see why it's giving me an error when, syntax-wise, it looks OK.
Jenkinsfile code snapshot:
stages {
stage("Checkout Dependent Code") {
steps {
sh '''
set +x
echo -e "\n-- Checking out dependent code:"
echo -e "\n-- Cloning Common Code.\n"
git clone -b ${COMMON_REPO_BRANCH} ${SSH_GIT_URL}/project/common_utilities.git
## Comment 1. Old repo
echo -e "\n-- Cloning Exporter Tool\n"
git clone -b ${TOOL_REPO_BRANCH} ${SSH_GIT_URL}/project/jira-exporter.git
## Comment 2. New - 3 new repos. Comment the code for now.
#echo -e "\n-- Cloning Some Exporter Tool Repos\n"
#for r in core client extractor;
#do
# echo -e "\n -- Cloning: ${r}"
# git clone -b ${TOOL_REPO_BRANCH} ${SSH_GIT_URL}/project/${r}.git
# echo
#done
echo -e "\n\n`echo -en "\n-- Current Directory: "; pwd; echo; ls -l`\n\n"
'''
}
}
}
The error message that I'm getting is:
10:38:38 -- Checking out dependent code:
10:38:38
10:38:38 -- Cloning Common Code.
10:38:38
10:38:38 Cloning into 'common_utilities'...
10:38:39
10:38:39 -- Cloning Exporter Tool
10:38:39
10:38:39 Cloning into 'jira-exporter'...
10:38:39 /jenkins_workspaces/workspace/development/project/my_jenkins_job/251#tmp/durable-f88d4c2a/script.sh: line 21: --: command not found
Question:
The first comment (the Comment 1 line) in the shell script is honored by the Jenkinsfile, with no syntax issue there, and we can see that the following echo and git commands work (I can see in my workspace that the git repository has been cloned successfully).
From the second comment (the Comment 2. line) onwards, the next few lines are all commented out in the shell logic, yet the script fails on a line (which is commented out) somewhere before the last echo line, where I'm printing the Current Directory: .. line. That last echo line is not printed at all, as the error happened before reaching it. If all lines before that last echo line were commented out, why did I get an error? Running the same shell code (from a file) works fine on the machine.
So after some digging, I found this:
The problem is \n and how it's treated in Groovy versus shell code.
When the Jenkinsfile (Groovy) code reads the above-mentioned stage, it says: oh, I got this SHELL sh ''' ... ''' code, I'll go parse it in my own fancy Groovy world and create a dynamic temporary shell script (aka ..#tmp/durable-f88d4c2a/script.sh).
First Groovy comes in (since a Jenkinsfile is written in Groovy) and expands every \n inside the sh ''' ... ''' block into a real line break; then Jenkins creates a dynamic .sh shell script (to do the work).
This works if a command using \n was NOT commented out, but fails if the command using \n was ACTUALLY COMMENTED out.
To investigate the issue, I added cat -n $0 inside the sh ''' ... ''' section, just after the set +x line, to print the whole dynamic temporary .sh script and see HOW the Jenkinsfile's Groovy logic actually parsed this visual-looking sh ''' ... ''' code in my Jenkinsfile into a true shell .sh script before executing it.
I found that it broke up the sh ''' ... ''' section. For example, a line like:
echo -e "\n-- Cloning Exporter Tool\n"
became the following lines in the dynamically created .sh script:
echo -e " on one line of that dynamic temporary .sh script, then
-- Cloning Exporter Tool on another line, then
" on another line.
This still works fine, as the double quotes are balanced and the -- belongs to a valid echo command, which simply prints those -- characters to the console output.
But when the line was:
#echo -e "\n-- Cloning Exporter Tool\n"
Groovy parsed it into the dynamic temp .sh file as:
#echo -e " on one line of that dynamic temporary .sh script, then
-- Cloning Exporter Tool on another line, then
" on another line.
And here it barfs, as expected (as per shell logic): there is no COMMAND (the echo wrapper is commented out) for the -- ... line to belong to. Thus, we got -- command not found:
10:38:38 19 ## Comment 2. New - 3 new repos. Comment the code for now.
10:38:38 20 #echo -e "
10:38:38 21 -- Cloning Some Exporter Tool Repos
10:38:38 22 "
Zeeesus!
Conclusion: You must use \n carefully in the sh ''' ... ''' section of a Jenkinsfile, in comments as well as in real code lines; otherwise, you'll see a <some character/word>: command not found error. Or, better, call the above code lines via a script kept in version control (git), rather than putting all those code lines in the Jenkinsfile itself.
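The mechanism can be reproduced outside Jenkins. The sketch below (file path is arbitrary) writes out the kind of script Groovy generates when the \n of a commented-out echo -e has been expanded into real line breaks: the # only comments the first physical line, so the continuation escapes the comment and is executed as a command.

```shell
# Simulate the dynamically generated script: a commented echo whose \n
# escapes were already expanded into real newlines by Groovy.
cat > /tmp/groovy_generated.sh <<'EOF'
#echo -e "
-- Cloning Some Exporter Tool Repos
"
EOF

# Only the first physical line is commented; line 2 runs "--" as a command.
bash /tmp/groovy_generated.sh 2>&1 | head -n 1
```

Writing \\n in the Jenkinsfile (so Groovy passes a literal backslash-n through to the shell) avoids the expansion in the first place.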
I'm not sure what the exact issue is, but try rewriting the script using better-defined commands like printf.
stages {
stage("Checkout Dependent Code") {
steps {
sh '''
set +x
printf '\n-- Checking out dependent code:\n'
printf '\n-- Cloning Common Code.\n\n'
git clone -b "${COMMON_REPO_BRANCH}" ${SSH_GIT_URL}/project/common_utilities.git
## Comment 1. Old repo
printf '\n-- Cloning Exporter Tool\n'
git clone -b "${TOOL_REPO_BRANCH}" "${SSH_GIT_URL}/project/jira-exporter.git"
## Comment 2. New - 3 new repos. Comment the code for now.
#echo -e "\n-- Cloning Some Exporter Tool Repos\n"
#for r in core client extractor;
#do
# echo -e "\n -- Cloning: ${r}"
# git clone -b ${TOOL_REPO_BRANCH} ${SSH_GIT_URL}/project/${r}.git
# echo
#done
printf '\n\n\n-- Current Directory '
pwd
printf '\n'
ls -l
printf '\n\n'
'''
}
}
}
Specifics:
I'm trying to build a bash script which needs to do a couple of things.
Firstly, it needs to run a third-party script that I cannot modify. This script builds a project and then starts a node server which continually outputs data to the terminal. This process needs to continue indefinitely, so the script can never exit.
Secondly, I need to wait for a specific line of output from the first script, namely 'Started your app.'.
Once that line has been output to the terminal, I need to launch a separate set of commands, either from another subscript or from an if or while block, which will change a few lines of code in the project that was built by the first script to resolve some dependencies for a later step.
So, how can I capture the output of the first subscript and use that to run another set of commands when a particular line is output to the terminal, all while allowing the first script to run in the terminal, and without using timers and without creating a huge file from the output of subscript1 as it will run indefinitely?
Pseudo-code:
#!/usr/bin/env bash
# This script needs to stay running & will output to the terminal (at some point)
# a string that we need to wait/watch for to launch subscript2
sh subscript1
# This can't run until subscript1 has output a particular string to the terminal
# This could be another script, or an if or while block
sh subscript2
I have been beating my head against my desk for hours trying to get this to work. Any help would be appreciated!
I think this is a bad idea (it would be much better to change subscript1 to be automation-friendly), but in theory you can write:
sh subscript1 \
| {
while IFS= read -r line ; do
printf '%s\n' "$line"
if [[ "$line" = 'Started your app.' ]] ; then
sh subscript2 &
break
fi
done
cat
}
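Here is a self-contained sketch of the same pattern, with a hypothetical fake_server function standing in for subscript1 and a plain echo standing in for subscript2:

```shell
#!/usr/bin/env bash
# fake_server stands in for subscript1: startup noise, the trigger line,
# then a continuing stream of output.
fake_server() {
    echo "compiling..."
    echo "Started your app."
    echo "request log 1"
    echo "request log 2"
}

fake_server | {
    while IFS= read -r line; do
        printf '%s\n' "$line"                    # keep echoing server output
        if [[ "$line" == "Started your app." ]]; then
            echo "(subscript2 would run here)"   # in real use: sh subscript2 &
            break
        fi
    done
    cat   # stream the rest of the server output untouched, indefinitely
}
```

Because read consumes the pipe line by line, no log file accumulates, and cat takes over once the trigger has fired so the per-line loop cost disappears.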
I'm new to Unix... I have a shell script that calls sqlplus. Some variables are defined within the code. However, I do not feel comfortable having the password displayed within the script. I would appreciate it if someone could show me ways to hide my password.
One approach I know of is to omit the password, and sqlplus will prompt for it.
An approach I would be very interested in is a Linux command whose output can be passed into the password variable. That way, I can easily replace "test" with some parameter.
Any other approach is welcome.
Thanks
#!/bin/sh
# This is test.sh. It executes sqlplus.
export user=TestUser
export password=test
# Other variables have been omitted
echo ----------------------------------------
echo Starting ...
echo ----------------------------------------
echo
sqlplus $user/$password
echo
echo ----------------------------------------
echo finish ...
echo ----------------------------------------
You can pipe the password to the sqlplus command:
echo ${password} | sqlplus ${user}
tl;dr: Passwords on the command line are prone to exposure to hostile code and users. Don't do it; you have better options.
The command line is accessible using $0 (the command itself) through ${!#} ($# is the number of arguments, and ${!name} dereferences the value of $name, in this case $#).
You could simply provide the password as a positional argument (say the first, $1), or use getopts(1), but the thing is, passwords in the argument array are a bad idea. Consider the case of ps auxww, which displays the full command lines of all processes, including those of other users.
Prefer getting the password interactively (stdin) or from a configuration file. These solutions have different strengths and weaknesses, so choose according to the constraints of your situation. If you go the config-file route, make sure the file is not readable by unauthorized users; note that making the file hard to find is not enough.
The interactive approach can be done with the shell builtin command read.
Its description in the Shell Builtin Commands section of bash(1) includes:
-s Silent mode. If input is coming from a terminal, characters are not echoed.
#!/usr/bin/env bash
INTERACTIVE=$([[ -t 0 ]] && echo yes)
if ! IFS= read -rs ${INTERACTIVE+-p 'Enter password: '} password; then
echo 'received ^D, quitting.'
exit 1
fi
echo password="'$password'"
Read the bash manual for explanations of the other constructs used in the snippet.
Configuration files for shell scripts are extremely easy: just source ~/.mystuffrc in your script. The configuration file is a normal shell script, and if you limit yourself to setting variables there, it will be very simple.
For the description of source, again see Shell Builtin Commands.
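Putting the pieces together, here is a minimal sketch. The read_password helper name is made up for the demo, and the sqlplus line is left commented out since it depends on your environment:

```shell
#!/usr/bin/env bash
# Read a password without echoing when stdin is a terminal, or from plain
# stdin otherwise, so the script also works when fed through a pipe.
read_password() {
    if [[ -t 0 ]]; then
        IFS= read -rs -p 'Enter password: ' password
        echo >&2   # move past the line the -s flag suppressed
    else
        IFS= read -r password
    fi
}

# Demo: feed a password on stdin instead of typing it. Passing credentials
# on stdin keeps them out of the argument list that `ps` can display.
printf 'hunter2\n' | {
    read_password
    # printf 'CONNECT %s/%s\n' "$user" "$password" | sqlplus -s /nolog
    printf 'got a %d-character password\n' "${#password}"
}
```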
I have two shell scripts, a.sh and b.sh.
How can I call b.sh from within the shell script a.sh?
There are a couple of different ways you can do this:
Make the other script executable with chmod a+x /path/to/file (Nathan Lilienthal's comment), add the #!/bin/bash line (called a shebang) at the top, and add the directory containing the file to the $PATH environment variable. Then you can call it as a normal command;
Or call it with the source command (which is an alias for .), like this:
source /path/to/script
Or use the bash command to execute it, like:
/bin/bash /path/to/script
The first and third approaches execute the script as another process, so variables and functions in the other script will not be accessible.
The second approach executes the script in the first script's process, and pulls in variables and functions from the other script (so they are usable from the calling script).
In the second method, if you use exit in the second script, it will exit the first script as well; this does not happen with the first and third methods.
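The variable-visibility difference is easy to demonstrate. A minimal sketch, using a throwaway child script under /tmp:

```shell
#!/usr/bin/env bash
# child.sh only sets a variable; whether we see it depends on how we run it.
cat > /tmp/child.sh <<'EOF'
CHILD_VAR="set by child"
EOF

bash /tmp/child.sh                            # runs in a separate process
echo "after bash:   ${CHILD_VAR:-<unset>}"    # variable did not survive

source /tmp/child.sh                          # runs in this very shell
echo "after source: ${CHILD_VAR:-<unset>}"    # variable is visible now
```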
Check this out.
#!/bin/bash
echo "This script is about to run another script."
sh ./script.sh
echo "This script has just run another script."
There are a couple of ways you can do this; any of the following will execute the script:
#!/bin/bash
SCRIPT_PATH="/path/to/script.sh"
# Here you execute your script
"$SCRIPT_PATH"
# or
. "$SCRIPT_PATH"
# or
source "$SCRIPT_PATH"
# or
bash "$SCRIPT_PATH"
# or
eval '"$SCRIPT_PATH"'
# or
OUTPUT=$("$SCRIPT_PATH")
echo $OUTPUT
# or
OUTPUT=`"$SCRIPT_PATH"`
echo $OUTPUT
# or
("$SCRIPT_PATH")
# or
(exec "$SCRIPT_PATH")
All of these work correctly even when the path contains spaces!
The answer which I was looking for:
( exec "path/to/script" )
As mentioned, exec replaces the shell without creating a new process. However, we can run it in a subshell, which is what the parentheses do.
EDIT:
Actually ( "path/to/script" ) is enough.
If you have another file in same directory, you can either do:
bash another_script.sh
or
source another_script.sh
or
. another_script.sh
When you use bash instead of source, the script cannot alter the environment of the parent script. The . command is the POSIX standard, while source is a more readable bash synonym for . (I prefer source over .). If your script resides elsewhere, just provide the path to that script; both relative and absolute paths work.
It depends.
Briefly...
If you want to load the variables into the current session and execute, you may use source myshellfile.sh in your code. Example:
#!/bin/bash
set -x
echo "This is an example of run another INTO this session."
source my_lib_of_variables_and_functions.sh
echo "The function internal_function() is defined into my lib."
returned_value=$(internal_function)
echo $this_is_an_internal_variable
set +x
If you just want to execute a file, and the only thing interesting for you is the result, you can do:
#!/bin/bash
set -x
./executing_only.sh
bash i_can_execute_this_way_too.sh
bash or_this_way.sh
set +x
You can use /bin/sh to call or execute another script (from your actual script):
# cat showdate.sh
#!/bin/bash
echo "Date is: `date`"
# cat mainscript.sh
#!/bin/bash
echo "You are login as: `whoami`"
echo "`/bin/sh ./showdate.sh`" # exact path for the script file
The output would be:
# ./mainscript.sh
You are login as: root
Date is: Thu Oct 17 02:56:36 EDT 2013
First you have to include the file you want to call:
#!/bin/bash
. includes/included_file.sh
Then you call your function like this:
#!/bin/bash
my_called_function
A simple source will help you.
For example:
#!/bin/bash
echo "My shell_1"
source my_script1.sh
echo "Back in shell_1"
Just add, on its own line, whatever you would have typed in a terminal to execute the script!
e.g.:
#!/bin/bash
./myscript.sh &
If the script to be executed is not in the same directory, just use the complete path of the script.
e.g.: /home/user/script-directory/myscript.sh &
This was what worked for me, this is the content of the main sh script that executes the other one.
#!/bin/bash
source /path/to/other.sh
The top answer suggests adding a #!/bin/bash line as the first line of the sub-script being called. But even if you add the shebang, it is much faster* to run the script in a sub-shell and capture the output:
$(source SCRIPT_NAME)
This works when you want to keep running the same interpreter (e.g. from bash to another bash script) and ensures that the shebang line of the sub-script is not executed.
For example:
#!/bin/bash
SUB_SCRIPT=$(mktemp)
echo "#!/bin/bash" > $SUB_SCRIPT
echo 'echo $1' >> $SUB_SCRIPT
chmod +x $SUB_SCRIPT
if [[ $1 == "--source" ]]; then
for X in $(seq 100); do
MODE=$(source $SUB_SCRIPT "source on")
done
else
for X in $(seq 100); do
MODE=$($SUB_SCRIPT "source off")
done
fi
echo $MODE
rm $SUB_SCRIPT
Output:
~ ❯❯❯ time ./test.sh
source off
./test.sh 0.15s user 0.16s system 87% cpu 0.360 total
~ ❯❯❯ time ./test.sh --source
source on
./test.sh --source 0.05s user 0.06s system 95% cpu 0.114 total
* For example, when antivirus or security tools are running on a device, it might take an extra 100 ms to exec a new process.
pathToShell="/home/praveen/"
chmod a+x "${pathToShell}myShell.sh"
sh "${pathToShell}myShell.sh"
#!/bin/bash
# Here you define the absolute path of your script
scriptPath="/home/user/pathScript/"
# Name of your script
scriptName="myscript.sh"
# Here you execute your script
"$scriptPath/$scriptName"
# Result of script execution
result=$?
chmod a+x /path/to/file-to-be-executed
That was the only thing I needed. Once the script to be executed is made executable like this, you (at least in my case) don't need any other extra operation like sh or ./ when calling the script.
Thanks to the comment of @Nathan Lilienthal.
Assume the new file is /home/satya/app/app_specific_env and its contents are as follows:
#!/bin/bash
export FAV_NUMBER="2211"
Append a reference to this file to your ~/.bashrc file:
source /home/satya/app/app_specific_env
Whenever you restart the machine or log in again, try echo $FAV_NUMBER in the terminal; it will output the value.
If you want to see the effect right away, run source ~/.bashrc in the command line.
There are some problems with importing functions from another file.
First: you do not need to make that file executable. Better not to! Just add
. file
to import all of its functions, and all of them will behave as if they were defined in your file.
Second: the imported file may define a function with the same name as one of yours, and yours will be overwritten. That's bad. You can keep a copy of your function under a new name first, for example:
eval "new_function_name() $(declare -f old_function_name | tail -n +2)"
and only after that do the import. You can then call the old function by its new name.
Third: you can only import the full list of functions defined in the file. If some are not needed, you can unset them. But if you unset a function you have since redefined, it is lost; keeping a copy under a new name, as described above, lets you restore it after the unset.
Finally, this whole import procedure is dangerous and not so simple. Be careful! If you use only some of the functions (not all of them), it is better to split them into different files. Unfortunately this technique is not well supported in bash. In Python, for example, and some other scripting languages, it is easy and safe to import only the functions you need, under names of your choosing. We can hope future bash versions gain the same functionality, but for now we must write a fair amount of additional code to achieve it.
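Keeping a function under a new name, as discussed above, can be sketched like this (the function names are made up for the demo): `declare -f` prints the current definition, and re-declaring its body under a fresh name preserves it before an import clobbers the original.

```shell
#!/usr/bin/env bash
greet() { echo "hello from the original"; }

# declare -f prints "greet () {...}"; drop the first line (the old name)
# and re-declare the same body under a new name.
eval "greet_original() $(declare -f greet | tail -n +2)"

# Simulate an import that overwrites greet.
greet() { echo "hello from the imported file"; }

greet             # the new, imported definition
greet_original    # the original definition, preserved
```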
Use backticks:
$ ./script-that-consumes-argument.sh `sh script-that-produces-argument.sh`
This fetches the output of the producer script and passes it as an argument to the consumer script.
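A runnable sketch with two throwaway scripts (the /tmp paths are arbitrary); note that the $(...) form does the same job as backticks and nests more cleanly:

```shell
# producer writes a value; consumer expects it as its first argument
cat > /tmp/producer.sh <<'EOF'
echo "world"
EOF
cat > /tmp/consumer.sh <<'EOF'
echo "hello $1"
EOF

sh /tmp/consumer.sh `sh /tmp/producer.sh`       # backticks, as above
sh /tmp/consumer.sh "$(sh /tmp/producer.sh)"    # equivalent modern form
```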