How can I prevent directory traversal attacks in a bash script, where arguments contain directory names?
Example:
STAGE=$1
APP=$2
deploy.sh dist/ /opt/apps/"$STAGE"/"$APP"
The $STAGE and $APP variables are set from outside. An attacker could change this to an arbitrary path with "..".
I know the usual solution is to compare the directory string with the result of a function that returns the absolute path. But I couldn't find a ready solution and don't want to come up with my own.
Something like this?
#!/bin/bash
STAGE=$1
APP=$2
expectedParentDir="/opt/apps/"

testDir() {
    local arg=$1
    if [[ ! -f $arg ]]
    then
        echo "File $arg does not exist."
        exit 1
    fi
    # Resolve ".." components and symlinks to a canonical absolute path;
    # quote $arg so names containing spaces survive word splitting
    rpath=$(realpath "$arg")
    if [[ $rpath != ${expectedParentDir}* ]]
    then
        echo "Please only reference files under $expectedParentDir directory."
        exit 2
    fi
}
testDir /opt/apps/"$STAGE"/"$APP"
... deploy ...
Example Call
test.sh "../../etc/" "passwd"
Please only reference files under /opt/apps/ directory.
------------
test.sh "../../etc/" "secret"
File /opt/apps/../../etc//secret does not exist.
Test existence of the file with -f, or use -d if the target must be a directory
Use realpath to resolve the path (it canonicalizes ".." components and symlinks)
Use [[ $rpath == ${expectedParentDir}* ]] to check whether the resolved path starts with the expected prefix
The script should be run as a user that only has permissions to access the necessary directories.
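As a variant, here is a minimal sketch assuming GNU coreutils realpath; its -e flag makes realpath itself fail when the path does not exist, which collapses the two tests into one:
#!/bin/bash
expectedParentDir="/opt/apps/"
# realpath -e prints the canonical path only if it exists, so a failed
# lookup and a traversal attempt are both caught by one if statement
if rpath=$(realpath -e "/opt/apps/$1/$2") && [[ $rpath == ${expectedParentDir}* ]]
then
    echo "OK: $rpath"
else
    echo "Please only reference files under $expectedParentDir" >&2
    exit 1
fi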
Related
I have a bash script containing a function which is sourced by a number of different bash scripts. This function may fail based on its input, and I'd like to create logging within the function to identify what script(s) are causing failures.
E.g.,
source /path/to/function.sh
The closest I've come is this:
ps --no-heading -ocmd -p $$
This works well enough if the full file path is used to run the parent script, returning:
/bin/bash /path/to/parent.sh
But it fails to provide the full path if the parent script is run from a relative path, returning:
/bin/bash ./parent.sh
Ideally, I'd like a way to reliably return the parent script file path for both cases.
I suppose I could have each parent script pass its file path to the function (via $0 or similar), but that seems hard to enforce and not terribly elegant.
Any ideas, or alternative approaches? Should I not worry about the relative path case, and just use full/absolute file paths for everything?
Thanks!
I'm using CentOS 5.9.
Bash version -
GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
You can use readlink to follow all symbolic links and get an absolute path:
readlink -f "$0"
As soon as the parent script starts, export "$(pwd)/$0" (or similar) into an environment variable, say ORIG_SCRIPT; then in the function just use ORIG_SCRIPT.
You need to do this as soon as the script starts, because $0 may be relative to the PWD; if you later change PWD before you need the value of ORIG_SCRIPT, it gets unnecessarily complicated.
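A minimal sketch of that suggestion (the variable name is just the one proposed above):
# at the very top of parent.sh, before any cd
export ORIG_SCRIPT="$(pwd)/$0"
# later, inside the sourced function:
echo "called from: $ORIG_SCRIPT"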
Update:
Since you know the PID via $$, you may be able to get something from /proc/<PID>/cmdline, but I don't know exactly how that works offhand.
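For what it's worth, a sketch of that /proc idea (Linux-specific; the file is NUL-delimited, so tr is used to make it readable):
tr '\0' ' ' < "/proc/$$/cmdline"; echo
# prints e.g.: /bin/bash ./parent.sh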
You could use ${BASH_SOURCE[1]} to get the script that calls the function, but that is not always in absolute-path form. You could get its absolute path with readlink -m, realpath, or other shell-script based solutions, but if your script changes directory from time to time, converting relative paths to absolute paths would no longer be accurate, since those tools resolve relative paths against the current directory.
There is a workaround, but it requires that you don't change directories in your scripts before sourcing the script that contains the function. You have to save the current directory in that script itself, then base the forming of absolute paths on that directory. You are free to change directories after the script has been included. As an example:
ORIGINAL_PWD=$PWD
function x {
local CALLING_SCRIPT="${BASH_SOURCE[1]}"
if [[ -n $CALLING_SCRIPT ]]; then
if [[ $CALLING_SCRIPT == /* ]]; then
CALLING_SCRIPT=$(readlink -m "$CALLING_SCRIPT")
else
CALLING_SCRIPT=$(readlink -m "$ORIGINAL_PWD/$CALLING_SCRIPT")
fi
echo "Calling script: $CALLING_SCRIPT"
else
echo "Caller is not a script."
fi
}
Or
ORIGINAL_PWD=$PWD
function getabspath {
local -a T1 T2
local -i I=0
local IFS=/ A
case "$1" in
/*)
read -r -a T1 <<< "$1"
;;
*)
read -r -a T1 <<< "/$PWD/$1"
;;
esac
T2=()
for A in "${T1[@]}"; do
case "$A" in
..)
[[ I -ne 0 ]] && unset T2\[--I\]
continue
;;
.|'')
continue
;;
esac
T2[I++]=$A
done
case "$1" in
*/)
[[ I -ne 0 ]] && __="/${T2[*]}/" || __=/
;;
*)
[[ I -ne 0 ]] && __="/${T2[*]}" || __=/.
;;
esac
}
function x {
local CALLING_SCRIPT="${BASH_SOURCE[1]}"
if [[ -n $CALLING_SCRIPT ]]; then
if [[ $CALLING_SCRIPT == /* ]]; then
getabspath "$CALLING_SCRIPT"
else
getabspath "$ORIGINAL_PWD/$CALLING_SCRIPT"
fi
echo "Calling script: $__"
else
echo "Caller is not a script."
fi
}
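Either way, hypothetical usage looks the same (file names are illustrative):
# caller.sh (hypothetical)
source /path/to/function.sh
x   # prints something like: Calling script: /absolute/path/to/caller.sh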
You could also play around with FUNCNAME and BASH_LINENO to be more specific with the errors. I'm just not sure if they're already supported in Bash 3.2.
If you actually had Bash 4.0+ you could use an associative array to map scripts to their absolute paths, but if two scripts have the same name, or are called by almost identical names, one value could override the other. There's no fix for that, since we can't choose our keys from BASH_SOURCE.
Added note: you could also prevent your script from being unnecessarily sourced multiple times, since it only needs to be sourced once, through a solution like Shell Script Loader. You might find it convenient as well.
I got a recursive script which iterates a list of names, some of which are files and some are directories.
If it's a (non-empty) directory, I should call the script again with all of the files in the directory and check if they are legal.
The part of the code making the recursive call:
if [[ -d $var ]] ; then
if [ "$(ls -A $var)" ]; then
./validate `ls $var`
fi
fi
The part of code checking if the files are legal:
if [[ -f $var ]]; then
some code
fi
But, after making the recursive calls, I can no longer check any of the files inside that directory, because they are not in the same directory as the main script, the -f $var if cannot see them.
Any suggestion how can I still see them and use them?
Why not use find? Simple and easy solution to the problem.
Always quote variables; you never know when you will find a file or directory name with spaces
shopt -s nullglob
if [[ -d "$path" ]] ; then
contents=( "$path"/* )
if (( ${#contents[@]} > 0 )); then
"$0" "${contents[@]}"
fi
fi
you're re-inventing find
of course, var is a lousy variable name
if you're recursively calling the script, you don't need to hard-code the script name.
you should consider putting the logic into a function in the script, so the function can recursively call itself instead of spawning a new process to invoke the shell script each time. If you do this, use $FUNCNAME instead of "$0"; see the sketch below.
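A sketch of that last suggestion (the legality check itself is left as a placeholder):
#!/bin/bash
shopt -s nullglob    # empty directories expand to nothing instead of a literal *

validate() {
    local var
    for var in "$@"; do
        if [[ -d $var ]]; then
            "$FUNCNAME" "$var"/*    # recurse; $FUNCNAME expands to "validate"
        elif [[ -f $var ]]; then
            echo "checking $var"    # placeholder for the real legality check
        fi
    done
}

validate "$@"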
A few people have mentioned how find might solve this problem, I just wanted to show how that might be done:
find /yourdirectory -type f -exec ./validate {} +
This will find all regular files in yourdirectory and recursively in all its sub-directories, and pass their paths as arguments to ./validate. The {} is expanded to the paths of the files that find locates within yourdirectory. The + at the end means that each call to validate receives a large batch of files, instead of validate being invoked once per file (which is what happens when the + is replaced with \;); this sometimes provides a huge speedup.
One option is to change directory (carefully) into the sub-directory:
if [[ -d "$var" ]] ; then
if [ "$(ls -A $var)" ]; then
(cd "$var"; exec ./validate $(ls))
fi
fi
The outer parentheses start a new shell so the cd command does not affect the main shell. The exec replaces the original shell with (a new copy of) the validate script. Using $(...) instead of back-ticks is sensible. In general, it is sensible to enclose variable names in double quotes when they refer to file names that might contain spaces (but see below). The $(ls) will list the files in the directory.
Heaven help you with the ls commands if any file names or directory names contain spaces; you should probably be using * glob expansion instead. Note that a directory containing a single file with a name such as -n would trigger a syntax error in your script.
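For reference, a glob-based sketch of the same subshell idea (this assumes validate is reachable via PATH, as discussed in the corrigendum below; the ./ prefix keeps names like -n from being taken as options):
if [[ -d "$var" ]]; then
    ( cd "$var" || exit
      shopt -s nullglob
      set -- ./*
      [ "$#" -gt 0 ] && exec validate "$@" )
fi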
Corrigendum
As Jens noted in a comment, the location of the shell script (validate) has to be adjusted as you descend the directory hierarchy. The simplest mechanism is to have the script on your PATH, so you can write exec validate or even exec $0 instead of exec ./validate. Failing that, you need to adjust the value of $0 — assuming your shell leaves $0 as a relative path and doesn't mess around with converting it to an absolute path. So, a revised version of the code fragment might be:
# For validate on PATH or absolute name in $0
if [[ -d "$var" ]] ; then
if [ "$(ls -A $var)" ]; then
(cd "$var"; exec $0 $(ls))
fi
fi
or:
# For validate not on PATH and relative name in $0
if [[ -d "$var" ]] ; then
if [ "$(ls -A $var)" ]; then
(cd "$var"; exec ../$0 $(ls))
fi
fi
The following shell script changes the current directory to the desktop.
v=~/Desktop/
cd $v
pwd # desktop
The following script changes the current directory to home directory instead of generating error.
cd $undefined_variable
pwd # home directory
echo $? # 0
I'm afraid that the script will remove important files if I misspelled a variable for new current directory.
Generally, how do you safely change current directory with variable in shell script?
Use:
cd "${variable:?}"
If $variable is not defined or is empty, bash will throw an error and exit. It's like the set -u option, but local rather than global to the whole file.
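For example (the variable name is illustrative; the text after :? is an optional custom message):
cd "${deploy_dir:?deploy_dir is unset or empty}"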
You can set -u to make bash exit with an error each time you expand an undefined variable.
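A minimal sketch:
set -u
cd "$undefined_variable"   # fatal: bash: undefined_variable: unbound variable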
You could use the test -d condition (checks whether the specified variable is a directory), i.e.
if [[ -d $undefined_variable ]]
then
cd $undefined_variable
echo "This will not be printed if $undefined_variable is not defined"
fi
See your shell's manual for further test options...
The Bourne shells have a construct to substitute a value for undefined variables, ${varname-substitution}. You can use this to have a safe fallback directory in case the variable is undefined:
cd "${undefined-/tmp/backupdir}"
If there is a variable named undefined, its value is substituted, otherwise /tmp/backupdir is substituted.
Note that I also put the variable expansion in double quotes. This is used to prevent word splitting on strings containing spaces (very common for Windows directories). This way it works even for directories with spaces.
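A quick illustration of the word-splitting problem:
dir="My Documents"
cd $dir     # broken: the value is split into "My" and "Documents"
cd "$dir"   # works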
For the gory details on all the shell substitution constructs (there are seven more for POSIX shells), read your shell manual's Parameter Substitution section.
You have to write a wrapper (this works in bash):
cd() {
if [ $# -ne 1 ] ;then
echo "cd need exactly 1 argument" >&2
return 2
fi
builtin cd "$1"
}
yes, that's shell
if you type cd without a parameter it will jump to the home dir.
You can check whether the variable is null or empty before the cd command.
check like (cd only be called if targetDir is not empty):
test -z "$targetDir" || cd $targetDir
check like (cd only be called if targetDir really exist):
test -d "$targetDir" && cd $targetDir
So, what I'm trying to do is: if the user didn't pass a path as an argument to the script, the script shall use the current directory; if a path is passed, use it instead.
instdir="$(pwd)/"
if [ -n "$1" ] ; then
instdir="$1"
fi
cd $instdir
Errors
./script.sh /path/to/a\ folder/
outputs: cd: /path/to/a: File or folder not found
./script.sh "/path/to/a\ folder/"
outputs: cd: /path/to/a\: File or folder not found
What am I doing wrong here?
Changing cd $instdir to cd "$instdir" should fix that particular problem. Without the quotes, the "a" and "folder" parts of "a folder" are treated as separate parameters.
Note, instead of the three-line if statement to set instdir, write:
[ "$1" ] && instdir="$1"
If you pass a path with spaces in it as an argument, it will cause problems. If not now, then in the future. I'd suggest you do the following (provided the path is the only argument):
instdir="$(pwd)"
if [[ -d "$#" ]]; then
instdir="$#"
fi
cd "$instdir"
I am attempting to write a bash script that changes directory and then runs an existing script in the new working directory.
This is what I have so far:
#!/bin/bash
cd /path/to/a/folder
./scriptname
scriptname is an executable file that exists in /path/to/a/folder - and (needless to say), I do have permission to run that script.
However, when I run this mind numbingly simple script (above), I get the response:
scriptname: No such file or directory
What am I missing?! the commands work as expected when entered at the CLI, so I am at a loss to explain the error message. How do I fix this?
Looking at your script makes me think that the script you want to launch is located in the initial directory. Since you change the directory before executing it, it won't work.
I suggest the following modified script:
#!/bin/bash
SCRIPT_DIR=$PWD
cd /path/to/a/folder
"$SCRIPT_DIR/scriptname"
To debug, try:
cd /path/to/a/folder
pwd
ls
./scriptname
which'll show you what it thinks it's doing.
I usually have something like this in my useful script directory:
#!/bin/bash
# Provide usage information if not arguments were supplied
if [[ "$#" -le 0 ]]; then
echo "Usage: $0 <executable> [<argument>...]" >&2
exit 1
fi
# Get the executable by removing the last slash and anything before it
X="${1##*/}"
# Get the directory by removing the executable name
D="${1%$X}"
# Check if the directory exists
if [[ -d "$D" ]]; then
# If it does, cd into it
cd "$D"
else
if [[ "$D" ]]; then
# Complain if a directory was specified, but does not exist
echo "Directory '$D' does not exist" >&2
exit 1
fi
fi
# Check if the executable is, well, executable
if [[ -x "$X" ]]; then
# Run the executable in its directory with the supplied arguments
exec ./"$X" "${#:2}"
else
# Complain if the executable is not valid
echo "Executable '$X' does not exist in '$D'" >&2
exit 1
fi
Usage:
$ cdexec
Usage: /home/archon/bin/cdexec <executable> [<argument>...]
$ cdexec /bin/ls ls
ls
$ cdexec /bin/xxx/ls ls
Directory '/bin/xxx/' does not exist
$ cdexec /ls ls
Executable 'ls' does not exist in '/'
One source of such error messages under those conditions is a broken symlink.
However, you say the script works when run from the command line. I would also check to see whether the directory is a symlink that's doing something other than what you expect.
Does it work if you call it in your script with the full path instead of using cd?
#!/bin/bash
/path/to/a/folder/scriptname
What about when called that way from the command line?