I have a bash script which works like this:
File structure:
get.sh
loop.sh
config/param1.conf
config/param2.conf
Usage of the main script, get.sh:
./get.sh <param>, e.g. ./get.sh param1 or ./get.sh param2
So when you run the script with a specific param, it fetches the config file from config/<param>.conf.
What I'm trying to do is run a second script, ./loop.sh, so that it runs ./get.sh <param> for you in a loop, using the params inside the config folder without their .conf extensions.
Here's my loop.sh:
#!/bin/bash
# run the script with the first param you found inside ~/config/
# folder without including it's .conf extension,
# wait for 5 seconds and then do the same with the 2nd param you found
for i in $(find ~/config -name '*.conf'); do
./get.sh $(basename $i) | cut -d'.' -f 1
sleep 5
done
But this one just displays the params inside the config folder and does nothing else.
Inside of the config/param1.conf;
var=Hello1
Inside of the config/param2.conf;
var=Hello2
Inside of the get.sh;
#!/bin/bash
function testFunction {
echo "$var"
}
cfg_file=$1
if [ -f "$cfg_file" ]; then
. "$cfg_file"
testFunction $1
exit 1
else
echo "$1.conf doesn't exist"
exit 1
fi
So, in the end, when you run loop.sh, the expected behavior is to print the Hello1 and Hello2 strings to the shell.
How can I fix this?
In loop (using .sh extensions for bash scripts is not great practice):
#!/bin/bash
for i in ~/config/*.conf; do
i_basename=${i##*/} # change ~/config/foo.conf to just foo.conf
i_basename=${i_basename%.conf} # change foo.conf to just foo
./get "$i_basename"
sleep 5
done
The ${var##prefix} and ${var%suffix} syntax is parameter expansion; with ## it removes the longest match from the beginning (so for */, everything up to the last /); with %, it removes the shortest successful match starting from the end.
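For example, here's a quick illustration in an interactive shell (the path is made up):
$ i=/home/user/config/param1.conf
$ echo "${i##*/}"
param1.conf
$ echo "${i%.conf}"
/home/user/config/param1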
In get:
#!/bin/bash
# Using POSIX function syntax; see http://wiki.bash-hackers.org/scripting/obsolete
testFunction() {
echo "$var"
}
cfg_file="config/$1.conf"
if [ -f "$cfg_file" ]; then
. "$cfg_file"
testFunction
else
echo "$cfg_file doesn't exist"
exit 1
fi
With respect to lack of .sh extensions -- UNIX commands don't conventionally have extensions (you run ls, not ls.elf); similarly, when you install a Python module built with setuptools, it doesn't put .py extensions on shims it creates in /usr/local/bin, even if the libraries those executable shims invoke do have such extensions. Moreover, bash and POSIX sh are two different languages: Bash scripts often don't work correctly when started with sh some-bash-only-script.sh (as unlike bash, sh isn't guaranteed to support language features like arrays), but the extension implies that they will.
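A quick illustration of that last point (array-demo.sh is just a throwaway example; the exact failure message depends on which shell provides sh):
$ cat array-demo.sh
#!/bin/bash
arr=(one two three)
echo "${arr[1]}"
$ bash array-demo.sh
two
$ sh array-demo.sh    # fails with a syntax error on the array assignment when sh is dash or another strict POSIX shell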
-name expects the parameter to be just a filename, not a whole pathname. You should use the directory as a regular argument to find, not as part of -name.
for i in $(find ~/config -name '*.conf'); do
Don't use basename; you should pass the entire pathname to script.sh.
Then in script.sh you should use $1 as the whole path to the config file, rather than concatenating it with a directory prefix.
cfg_file=$1
I don't see any point in this:
case $1 in
$1) cfg=$1 ;;
esac
The case will always be true; how could $1 not match $1? Remove that code.
I have a bash script that takes a file and performs an operation on it. During the operation an out_file is produced. When that's done, I start another script (script_2) from within my script to perform a further operation on the out_file. The problem I have is passing parameters to script_2, which are different for each initial file:
#!/bin/bash
for i in $(ls folder); do
./operation.sh folder/$i # this step produces out_file.$i
./script_2 out_file.$i parameter_1 parameter_2
done
Thus, parameter_1 and parameter_2 should be different for each out_file. Is it possible to pass different parameters each time inside the loop, without launching script_2 separately for each file?
Without any more information it's hard to know what your purpose is:
$ ls
script1.sh script2.sh script3.sh testfiles
$ ls ./testfiles/
file1.txt file2.txt
$ cat script1.sh
#!/bin/bash
for i in $(ls ./testfiles/); do
./script2.sh $i
./script3.sh ./testfiles/out_file.$i parameter_1 parameter_2
done
$ cat script2.sh
#!/bin/bash
touch ./testfiles/out_$1.txt
exit
$ cat script3.sh
#!/bin/bash
echo "dollar1: $1
dollar2 $2
dollar3 $3 "
$ ./script1.sh
dollar1: ./testfiles/out_file.file1.txt
dollar2 parameter_1
dollar3 parameter_2
dollar1: ./testfiles/out_file.file2.txt
dollar2 parameter_1
dollar3 parameter_2
$ ls ./testfiles/
file1.txt file2.txt out_file1.txt.txt out_file2.txt.txt
As you can see, it loops through all the files in the folder, creates the out file, and then passes it into script 3.
I wouldn't advise running the script again in its current form (it would then loop through the out files too), but you get the idea.
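If the parameters are determined by the input file name, one possible approach (just a sketch; the file names and parameter values here are invented) is to pick them inside the loop with a case statement:
#!/bin/bash
for f in folder/*; do
  name=$(basename "$f")
  ./operation.sh "$f"             # produces out_file.$name
  case "$name" in
    file1.txt) p1=alpha; p2=10 ;;
    file2.txt) p1=beta;  p2=20 ;;
    *)         p1=default_1; p2=default_2 ;;
  esac
  ./script_2 "out_file.$name" "$p1" "$p2"
done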
We know how to find the directory of a shell script using dirname and $0, but this doesn't work when the script is included in another script.
Suppose two files first.sh and second.sh:
/tmp/first.sh :
#!/bin/sh
. "/tmp/test/second.sh"
/tmp/test/second.sh :
#!/bin/sh
echo $0
By running first.sh, the second script also prints first.sh. How can the code in second.sh find its own directory? (I'm looking for a solution that works in bash/csh/zsh.)
There is no solution that works equally well in all flavours of shells.
In bash you can use BASH_SOURCE:
$(dirname "$BASH_SOURCE")
Example:
$ cat /tmp/1.sh
. /tmp/sub/2.sh
$ cat /tmp/sub/2.sh
echo $BASH_SOURCE
$ bash /tmp/1.sh
/tmp/sub/2.sh
As you can see, the script prints the name of 2.sh,
even though you start /tmp/1.sh, which includes 2.sh with the source command.
I must note that this solution will work only in bash. In Bourne shell (/bin/sh) it is impossible.
In csh/tcsh/zsh you can use $_ instead of BASH_SOURCE.
Is it possible to save the last value of a variable entered by the user in the bash script itself, so that I can reuse that value the next time the script is executed?
Eg:
#!/bin/bash
if [ -d "/opt/test" ]; then
echo "Enter path:"
read path
p=$path
else
.....
........
fi
The above script is just a sample example (which may be wrong). Is it possible to save the value of p permanently in the script itself, so that I can use it later in the script even when the script is re-executed?
EDIT:
I am already using sed to overwrite lines in the script while it executes; this method works, but as said it is not good practice at all. Replacing the lines in the same file, as described in the answer below, is much better than what I am using, which looks like this:
...
....
PATH=""; #This is line no 7
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )";
name="$(basename "$(test -L "$0" && readlink "$0" || echo "$0")")";
...
if [ condition ]
fi
path=$path
sed -i '7s|.*|PATH='$path';|' $DIR/$name;
Something like this should do what you ask:
#!/bin/bash
ENTERED_PATH=""
if [ "$ENTERED_PATH" = "" ]; then
echo "Enter path"
read path
ENTERED_PATH=$path
sed -i 's/ENTERED_PATH=""/ENTERED_PATH='$path'/g' $0
fi
This script will ask the user for a path only if ENTERED_PATH has not previously been defined, and it stores the value directly in the current file with the sed line.
Maybe a safer way to do this would be to write a config file somewhere with the data you want to save and source it (. data.saved) at the beginning of your script.
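A bare-bones sketch of that config-file idea (the file name data.saved is just an example):
#!/bin/bash
saved=./data.saved
# load previously saved values, if the file exists
[ -f "$saved" ] && . "$saved"
if [ -z "$ENTERED_PATH" ]; then
  read -r -p "Enter path: " ENTERED_PATH
  # persist the value for the next run
  printf 'ENTERED_PATH=%q\n' "$ENTERED_PATH" > "$saved"
fi
echo "Using path: $ENTERED_PATH"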
In the script itself? Yes, with sed, but it's not advisable.
#!/bin/bash
test='0'
echo "test currently is: $test";
test=`expr $test + 1`
echo "changing test to: $test"
sed -i "s/test='[0-9]*'/test='$test'/" $0
Preferable method:
Try saving the value in a separate file; then you can easily do
myvar=`cat varfile.txt`
and whatever was in the file is now in your variable.
I would suggest using the /tmp/ dir to store the file in.
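For example, a rough sketch (the /tmp/myscript.varfile location is only an assumption):
varfile=/tmp/myscript.varfile
if [ -f "$varfile" ]; then
  myvar=$(cat "$varfile")
else
  read -r -p "Enter a value: " myvar
  echo "$myvar" > "$varfile"   # saved for the next run
fi
echo "myvar is: $myvar"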
Another option would be to save the value as an extended attribute attached to the script file. This has many of the same problems as editing the script's contents (permissions issues, weird for multiple users, etc) plus a few of its own (not supported on all filesystems...), but IMHO it's not quite as ugly as rewriting the script itself (a config file really is a better option).
I don't use Linux, but I think the relevant commands would be something like this:
path="$(getfattr --only-values -n "user.saved_path" "${BASH_SOURCE[0]}")"
if [[ -z "$path" ]]; then
read -p "Enter path:" path
setfattr -n "user.saved_path" -v "$path" "${BASH_SOURCE[0]}"
fi
The way you would normally include a script is with "source"
eg:
main.sh:
#!/bin/bash
source incl.sh
echo "The main script"
incl.sh:
echo "The included script"
The output of executing "./main.sh" is:
The included script
The main script
... Now, if you attempt to execute that shell script from another location, it can't find the include unless it's in your path.
What's a good way to ensure that your script can find the include script, especially if for instance, the script needs to be portable?
I tend to make my scripts all be relative to one another.
That way I can use dirname:
#!/bin/sh
my_dir="$(dirname "$0")"
"$my_dir/other_script.sh"
I know I am late to the party, but this should work no matter how you start the script and uses builtins exclusively:
DIR="${BASH_SOURCE%/*}"
if [[ ! -d "$DIR" ]]; then DIR="$PWD"; fi
. "$DIR/incl.sh"
. "$DIR/main.sh"
The . (dot) command is a synonym for source; $PWD is the path of the working directory; BASH_SOURCE is an array variable whose members are the source filenames; ${string%substring} strips the shortest match of $substring from the back of $string.
An alternative to:
scriptPath=$(dirname $0)
is:
scriptPath=${0%/*}
... the advantage being that there is no dependence on dirname, which is not a built-in command (and not always available in emulators).
If it is in the same directory you can use dirname $0:
#!/bin/bash
source "$(dirname "$0")/incl.sh"
echo "The main script"
I think the best way to do this is Chris Boran's way, BUT you should compute MY_DIR this way:
#!/bin/sh
MY_DIR=$(dirname "$(readlink -f "$0")")
"$MY_DIR/other_script.sh"
To quote the man pages for readlink:
readlink - display value of a symbolic link
...
-f, --canonicalize
canonicalize by following every symlink in every component of the given
name recursively; all but the last component must exist
I've never encountered a use case where MY_DIR is not correctly computed. If you access your script through a symlink in your $PATH it works.
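For instance, with a symlinked install (the paths below are purely illustrative):
$ ln -s /opt/tools/my_script.sh /usr/local/bin/my_script
$ my_script
# inside the running script:
#   dirname "$0"                   -> /usr/local/bin  (the symlink's directory)
#   dirname "$(readlink -f "$0")"  -> /opt/tools      (where the real script lives)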
A combination of the answers to this question provides the most robust solution.
It worked for us in production-grade scripts with great support of dependencies and directory structure:
#!/bin/bash
# Full path of the current script
THIS=`readlink -f "${BASH_SOURCE[0]}" 2>/dev/null||echo $0`
# The directory where current script resides
DIR=`dirname "${THIS}"`
# 'Dot' means 'source', i.e. 'include':
. "$DIR/compile.sh"
The method supports all of these:
Spaces in path
Links (via readlink)
${BASH_SOURCE[0]} is more robust than $0
SRC=$(cd "$(dirname "$0")" && pwd)
source "${SRC}/incl.sh"
1. Neatest
I explored almost every suggestion and here is the neatest one that worked for me:
script_root=$(dirname $(readlink -f $0))
It works even when the script is symlinked to a $PATH directory.
See it in action here: https://github.com/pendashteh/hcagent/blob/master/bin/hcagent
2. The coolest
# Copyright https://stackoverflow.com/a/13222994/257479
script_root=$(ls -l /proc/$$/fd | grep "255 ->" | sed -e 's/^.\+-> //')
This is actually from another answer on this very page, but I'm adding it to my answer too!
3. The most reliable
Alternatively, in the rare case that those didn't work, here is the bulletproof approach:
# Copyright http://stackoverflow.com/a/7400673/257479
myreadlink() { [ ! -h "$1" ] && echo "$1" || (local link="$(expr "$(command ls -ld -- "$1")" : '.*-> \(.*\)$')"; cd $(dirname $1); myreadlink "$link" | sed "s|^\([^/].*\)\$|$(dirname $1)/\1|"); }
whereis() { echo $1 | sed "s|^\([^/].*/.*\)|$(pwd)/\1|;s|^\([^/]*\)$|$(which -- $1)|;s|^$|$1|"; }
whereis_realpath() { local SCRIPT_PATH=$(whereis $1); myreadlink ${SCRIPT_PATH} | sed "s|^\([^/].*\)\$|$(dirname ${SCRIPT_PATH})/\1|"; }
script_root=$(dirname $(whereis_realpath "$0"))
You can see it in action in taskrunner source: https://github.com/pendashteh/taskrunner/blob/master/bin/taskrunner
Hope this helps someone out there :)
Also, if one of these did not work for you, please leave a comment and mention your operating system and emulator. Thanks!
This works even if the script is sourced:
source "$( dirname "${BASH_SOURCE[0]}" )/incl.sh"
You need to specify the location of the other scripts; there is no way around it. I'd recommend a configurable variable at the top of your script:
#!/bin/bash
installpath=/where/your/scripts/are
. $installpath/incl.sh
echo "The main script"
Alternatively, you can insist that the user maintain an environment variable indicating where your program home is at, like PROG_HOME or somesuch. This can be supplied for the user automatically by creating a script with that information in /etc/profile.d/, which will be sourced every time a user logs in.
I'd suggest that you create a setenv script whose sole purpose is to provide locations for various components across your system.
All other scripts would then source this script so that all locations are common across all scripts using the setenv script.
This is very useful when running cronjobs. You get a minimal environment when running cron, but if you make all cron scripts first include the setenv script then you are able to control and synchronise the environment that you want the cronjobs to execute in.
We used such a technique on our build monkey that was used for continuous integration across a project of about 2,000 kSLOC.
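A minimal sketch of that pattern (all names and paths here are invented for illustration):
# File /opt/myproject/setenv.sh -- the single place where locations are declared:
export PROJECT_HOME=/opt/myproject
export SCRIPT_DIR="$PROJECT_HOME/scripts"
export LOG_DIR="$PROJECT_HOME/logs"

# Every other script (including the cron ones) then starts with:
#!/bin/bash
. /opt/myproject/setenv.sh
. "$SCRIPT_DIR/incl.sh"
echo "logging to $LOG_DIR"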
Shell Script Loader is my solution for this.
It provides a function named include() that can be called many times in many scripts to refer to a single script, but will only load that script once. The function can accept complete paths or partial paths (the script is searched for in a search path). A similar function named load() is also provided that will load scripts unconditionally.
It works for bash, ksh, pdksh and zsh, with optimized scripts for each of them, and for other shells that are generically compatible with the original sh (like ash, dash, heirloom sh, etc.) through a universal script that automatically optimizes its functions depending on the features the shell can provide.
[Forwarded example]
start.sh
This is an optional starter script. Placing the startup commands here is just a convenience; they can be placed in the main script instead. This script is also not needed if the scripts are to be compiled.
#!/bin/sh
# load loader.sh
. loader.sh
# include directories to search path
loader_addpath /usr/lib/sh deps source
# load main script
load main.sh
main.sh
include a.sh
include b.sh
echo '---- main.sh ----'
# remove loader from shellspace since
# we no longer need it
loader_finish
# main procedures go from here
# ...
a.sh
include main.sh
include a.sh
include b.sh
echo '---- a.sh ----'
b.sh
include main.sh
include a.sh
include b.sh
echo '---- b.sh ----'
output:
---- b.sh ----
---- a.sh ----
---- main.sh ----
Best of all, scripts based on it may also be compiled to form a single script with the available compiler.
Here's a project that uses it: http://sourceforge.net/p/playshell/code/ci/master/tree/. It can run portably with or without compiling the scripts. Compiling to produce a single script is also possible, and is helpful during installation.
I also created a simpler prototype for anyone who wants a brief idea of how an implementation script works: https://sourceforge.net/p/loader/code/ci/base/tree/loader-include-prototype.bash. It's small, and anyone can just include the code in their main script if their code is intended to run with Bash 4.0 or newer; it also doesn't use eval.
Steve's reply is definitely the correct technique but it should be refactored so that your installpath variable is in a separate environment script where all such declarations are made.
Then all scripts source that script and should installpath change, you only need to change it in one location. Makes things more, er, futureproof. God I hate that word! (-:
BTW You should really refer to the variable using ${installpath} when using it in the way shown in your example:
. ${installpath}/incl.sh
If the braces are left out, some shells will try and expand the variable "installpath/incl.sh"!
I put all my startup scripts in a .bashrc.d directory.
This is a common technique in such places as /etc/profile.d, etc.
while read file; do source "${file}"; done <<HERE
$(find ${HOME}/.bashrc.d -type f)
HERE
The problem with the solution using globbing...
for file in ${HOME}/.bashrc.d/*.sh; do source ${file};done
...is that you might end up with a file list which is "too long".
An approach like...
find ${HOME}/.bashrc.d -type f | while read file; do source ${file}; done
...runs but doesn't change the environment as desired, because the pipe runs the while loop in a subshell, so anything sourced there is lost when the loop finishes.
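In bash, process substitution would give the same effect without the here-document, since the while loop then runs in the current shell (a small sketch):
while read -r file; do
  source "$file"
done < <(find "${HOME}/.bashrc.d" -type f)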
This should work reliably:
source_relative() {
  # directory component of the script this function is defined in
  local dir="${BASH_SOURCE%/*}"
  # fall back to $PWD if the script name had no path component
  [[ ! -d "$dir" ]] && dir="$PWD"
  source "$dir/$1"
}
source_relative incl.sh
Using source or $0 will not give you the real path of your script. You could use the process ID of the script to retrieve its real path:
ls -l /proc/$$/fd |
grep "255 ->" |
sed -e 's/^.\+-> //'
I am using this script and it has always served me well :)
Of course, to each their own, but I think the block below is pretty solid. I believe this involves the "best" way to find a directory, and the "best" way to call another bash script:
scriptdir=`dirname "$BASH_SOURCE"`
source $scriptdir/incl.sh
echo "The main script"
So this may be the "best" way to include other scripts. It is based on another "best" answer that tells a bash script where it is stored.
Personally, I put all libraries in a lib folder and use an import function to load them.
Folder structure:
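Something along these lines, judging from the ./lib/$1.sh paths used below:
script.sh
lib/
  utils.sh
  requirements.sh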
script.sh contents:
# Imports '.sh' files from 'lib' directory
function import()
{
local file="./lib/$1.sh"
local error="\e[31mError: \e[0mCannot find \e[1m$1\e[0m library at: \e[2m$file\e[0m"
if [ -f "$file" ]; then
source "$file"
if [ -z $IMPORTED ]; then
echo -e $error
exit 1
fi
else
echo -e $error
exit 1
fi
}
Note that this import function should be at the beginning of your script and then you can easily import your libraries like this:
import "utils"
import "requirements"
Add a single line at the top of each library (e.g. utils.sh):
IMPORTED="$BASH_SOURCE"
Now you have access to the functions inside utils.sh and requirements.sh from script.sh.
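For example, a minimal utils.sh might look like this (the log function is only an illustration):
#!/bin/bash
IMPORTED="$BASH_SOURCE"

log() {
  echo "[$(date +%T)] $*"
}

# in script.sh, after: import "utils"
#   log "something happened"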
TODO: Write a linker to build a single sh file
We just need to find the folder where our incl.sh and main.sh are stored; just change your main.sh to this:
main.sh
#!/bin/bash
SCRIPT_NAME=$(basename $0)
SCRIPT_DIR="$(echo $0| sed "s/$SCRIPT_NAME//g")"
source $SCRIPT_DIR/incl.sh
echo "The main script"
According to man hier, a suitable place for script includes is /usr/local/lib/:
/usr/local/lib
Files associated with locally installed programs.
Personally I prefer /usr/local/lib/bash/includes for includes.
There is a bash-helpers library for including libs in that way:
#!/bin/bash
. /usr/local/lib/bash/includes/bash-helpers.sh
include api-client || exit 1 # include shared functions
include mysql-status/query-builder || exit 1 # include script functions
# include script functions with status message
include mysql-status/process-checker; status 'process-checker' $? || exit 1
include mysql-status/nonexists; status 'nonexists' $? || exit 1
Most of the answers I saw here seem to overcomplicate things. This method has always worked reliably for me:
FULLPATH=$(readlink -f "$0")
INCPATH=${FULLPATH%/*}
INCPATH will hold the complete path of the script excluding the script filename, regardless of how the script is called (by $PATH, relative or absolute).
After that, one only needs to do this to include files in the same directory:
. $INCPATH/file_to_include.sh
Reference: TecPorto / Location independent includes
Here is a nice function you can use; it builds on what #sacii made, thank you.
It will let you list any number of space-separated script names to source (relative to the script calling source_files).
Optionally, you can pass an absolute or relative path as the first argument and it will source from there instead.
You can call it multiple times (see the example below) to source scripts from different dirs.
#!/usr/bin/env bash
function source_files() {
local scripts_dir
scripts_dir="$1"
if [ -d "$scripts_dir" ]; then
shift
else
scripts_dir="${BASH_SOURCE%/*}"
if [[ ! -d "$scripts_dir" ]]; then scripts_dir="$PWD"; fi
fi
for script_name in "$@"; do
# shellcheck disable=SC1091 disable=SC1090
. "$scripts_dir/$script_name.sh"
done
}
Here is an example you can run to show how it's used:
#!/usr/bin/env bash
function source_files() {
local scripts_dir
scripts_dir="$1"
if [ -d "$scripts_dir" ]; then
shift
else
scripts_dir="${BASH_SOURCE%/*}"
if [[ ! -d "$scripts_dir" ]]; then scripts_dir="$PWD"; fi
fi
for script_name in "$@"; do
# shellcheck disable=SC1091 disable=SC1090
. "$scripts_dir/$script_name.sh"
done
}
## -- EXAMPLE -- ##
# assumes dir structure:
# /
# source_files.sh
# sibling.sh
# scripts/
# child.sh
# nested/
# scripts/
# grandchild.sh
cd /tmp || exit 1
# sibling.sh
tee sibling.sh <<- EOF > /dev/null
#!/usr/bin/env bash
export SIBLING_VAR='sibling var value'
EOF
# scripts/child.sh
mkdir -p scripts
tee scripts/child.sh <<- EOF > /dev/null
#!/usr/bin/env bash
export CHILD_VAR='child var value'
EOF
# nested/scripts/grandchild.sh
mkdir -p nested/scripts
tee nested/scripts/grandchild.sh <<- EOF > /dev/null
#!/usr/bin/env bash
export GRANDCHILD_VAR='grandchild var value'
EOF
source_files 'sibling'
source_files 'scripts' 'child'
source_files 'nested/scripts' 'grandchild'
echo "$SIBLING_VAR"
echo "$CHILD_VAR"
echo "$GRANDCHILD_VAR"
rm sibling.sh
rm -rf scripts nested
cd - || exit 1
prints:
sibling var value
child var value
grandchild var value
You can also use:
PWD=$(pwd)
source "$PWD/inc.sh"