Bash: How to test for failure of mkdir command?

I'm writing a bash script and want to do robust error checking in it.
With mv it is easy to simulate a failure and check the exit status: just try to move a file that doesn't exist, and it fails.
However, I also want to simulate mkdir failing. mkdir could fail for any number of reasons, such as problems with the disk or lack of permissions, but I'm not sure how to simulate a failure.

Just use
mkdir your_directory/
if [ $? -ne 0 ]; then
    echo "fatal"
else
    echo "success"
fi
where $? holds the exit code of the last command executed.
To create parent directories when they don't exist, run mkdir -p parent_directory/your_directory/
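A minimal sketch that combines both points, testing the exit status of mkdir -p directly rather than inspecting $? afterwards (the directory names are just placeholders from the lines above):
if ! mkdir -p parent_directory/your_directory/; then
    echo "fatal"
    exit 1
fi
echo "success"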

if ! mkdir your_directory 2>/dev/null; then
print_error
exit
fi
or
mkdir your_directory 2>/dev/null || { print_error; exit; }

mkdir will fail if the directory already exists (unless you are using -p) and return an error code of 1 (on my system), so create the directory first to test this on your own system. (Although I would assume that behaviour is consistent across mkdir implementations.)
Alternatively, make the parent directory read-only.
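To see the failure in action, a quick sketch (using a hypothetical /tmp/testdir path; the exit code of 1 is what the answer above reports on its system):
mkdir /tmp/testdir          # first attempt succeeds
mkdir /tmp/testdir          # second attempt fails because the directory already exists
echo $?                     # prints 1 here
chmod a-w /tmp/testdir      # alternatively, make the parent read-only
mkdir /tmp/testdir/child    # fails with "Permission denied" (unless you are root)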

In your script, you could also put a check for the new directory after creating it:
mkdir -p new_dir
if [ -d new_dir ]; then
    cd new_dir && ...    # anything else you want
else
    echo "error in directory creation"
    exit 2
fi

If you are lazy, a simple set -e at the beginning of your script is enough. Often you just want to print an error and then terminate if something goes wrong.
Not exactly what you asked for, but perhaps what you want.
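For example, a minimal sketch of that approach:
#!/bin/bash
set -e                        # abort the script as soon as any command fails
mkdir your_directory          # if this fails, the script stops right here
echo "directory created"      # only reached when mkdir succeeded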

Related

Chaining OR and AND in Bash

I would like to chain the OR and AND commands so that I print the contents of a directory to stdout if the directory exists, or in the case that it does not, print a message to stdout saying that the directory "$MY_DIR" is being created and then create it.
I have the following code.
ls "$MY_DIR" || echo "Creating $MY_DIR" && mkdir -p "$MY_DIR"
Is this the correct and canonical way to do this? Will mkdir always run, since echo will return a 0 status, even in the case that ls succeeds?
The most relevant question I have located so far is this one which does not eliminate my doubts.
I would not do it that way in any case. You should group your commands, for clarity if for no other reason.
ls "$MY_DIR" || {
echo "Creating $MY_DIR"
mkdir -p "$MY_DIR"
}
This has several advantages: your intent is more clearly expressed, there is less ambiguity between what the human thinks will happen and what the computer will do, and it stops relying on the exit code from echo that you were not really interested in to begin with. Even if your original version worked entirely correctly it was more vulnerable to later, naïve modification.
A oneliner form is of course possible, if less readable:
ls "$MY_DIR" || { echo "Creating $MY_DIR"; mkdir -p "$MY_DIR"; }
As for your original method, consider this:
If the ls command fails:
false || true && echo mkdir # prints mkdir
But if the ls command succeeds:
true || true && echo mkdir # also prints mkdir
Whereas
true || { true; echo mkdir; } # does not print
false || { true; echo mkdir; } # prints mkdir
It gets worse: I am not entirely clear whether ls will set an unsuccessful return code if the file/directory does not exist. It's certainly true that GNU ls does this, and it may be common, but the standard doesn't seem to say what constitutes success or failure, so implementations may well disagree.
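If in doubt, it is easy to check what your own ls does (the exit value shown is what GNU ls reports for a nonexistent operand; other implementations may differ):
ls /no/such/directory
echo $?    # 2 with GNU ls; the point is only that it is non-zero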
Why does it have to be chaining, with all the potential problems that might bring? Why not express your intentions clearly?
Also, UPPERCASE is usually reserved for environment variables.
# -d checks for directory specifically, use -e for existence.
# Thanks to @Sorpigal for pointing it out.
if [[ -d $my_dir ]]
then
    ls "$my_dir"
else
    echo "Creating $my_dir"
    mkdir -p "$my_dir"
fi
Or, if it has to be one line...
if [[ -d $my_dir ]]; then ls "$my_dir"; else echo "Creating $my_dir"; mkdir -p "$my_dir"; fi

Understanding Bash if statement that invokes a command

Does anyone know what this is doing:
if ! /fgallery/fgallery -v -j3 /images /usr/share/nginx/html/ "${GALLERY_TITLE:-Gallery}"; then
mkdir -p /usr/share/nginx/html
I understand the first part is saying if the /fgallery/fgallery directory doesn't exist, but after that it is not clear to me.
In Bash, we can build an if based on the exit status of a command this way:
if command; then
    echo "Command succeeded"
else
    echo "Command failed"
fi
The then part is executed when the command exits with 0, and the else part otherwise.
Your code is doing exactly that.
It can be rewritten as:
/fgallery/fgallery -v -j3 /images /usr/share/nginx/html/ "${GALLERY_TITLE:-Gallery}"; fgallery_status=$?
if [ "$fgallery_status" -ne 0 ]; then
mkdir -p /usr/share/nginx/html
fi
But the former construct is more elegant and less error prone.
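For completeness, the ! in the original snippet simply negates the exit status, so the then branch runs when the command fails; a generic sketch (some_command is a placeholder):
if ! some_command; then
    echo "some_command failed" >&2
fi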
See these posts:
How to conditionally do something if a command succeeded or failed
Why is testing "$?" to see if a command succeeded or not, an antipattern?

Check for existence of directory always fails in Bash Script

I have a problem that has been bugging me for a few hours now. I have created a parameter --file-dir using getopt, which assigns a directory for the program to use. Following the parameter, the user has the choice to choose whatever directory they please. To keep the program stable, I check to see whether that directory even exists. The following code is what I have currently and it always returns "Directory does not exist. Terminating." even when I search for my /home directory.
-a|--file-dir) FILE_DIR=$2 ;
    if [ ! -d "$FILE_DIR" ]; then
        echo "Directory does not exist. Terminating." ;
        exit 1;
    else
        echo "Directory exists." ;
    fi ;
    shift;;
Any input is much appreciated. The getopt parsing works fine with echo tests and such, but fails when checking for directories.
It would be a good idea to check whether you are really getting the right argument for it:
-a|--file-dir) FILE_DIR=$2 ;
    if [ ! -d "$FILE_DIR" ]; then
        echo "Directory \"$FILE_DIR\" does not exist. Terminating." ;
        exit 1;
    else
        echo "Directory exists." ;
    fi ;
    shift;;
If it is not, then certainly the problem is not in the check itself but somewhere in your argument-parsing loop.
I had an issue with the same behavior: checking for a directory in the command line worked as expected, but always failed when done in a script.
I was running this script under git bash for Windows:
while read -r i; do
    [ ! -d "$i" ] && echo "No $i"
done < "$1"
Windows' line endings (\r\n) can cause issues when splitting lines. Each test actually checks for directory\r instead of directory. Therefore, I needed to run the read command with the correct delimiter:
while IFS=$'\r\n' read -r i; do
It is possible that OP also had a similar issue, where non-printable characters got in the way.
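An alternative sketch that strips a trailing carriage return from each line explicitly, instead of changing IFS (assuming the same input file of directory names):
while read -r i; do
    i="${i%$'\r'}"                  # drop a trailing CR left over from CRLF line endings
    [ ! -d "$i" ] && echo "No $i"
done < "$1"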

Prompt for `sudo` only if Bash script runs into "Permission denied"

Let's say I have a very simple script which creates a link in a certain directory, and kills the script if it fails.
ln -s "/opt/myapp" "${1}/link" || exit 1;
Right now it just quits if it runs into errors. I want to change it so only if it runs into permission errors when creating the link, it will execute the following lines instead of exiting:
echo "The target directory requires root privileges to access."
sudo ln -s "/opt/myapp" "${1}/myapp" || exit 1;
I don't want to prompt the users to run as root unless they absolutely have to.
ln seems to return exit code 1 on failure regardless of whether it was a problem with permissions or another error such as a directory not existing, so I can't use that to detect which problem it ran into.
And if I instead search through the output of ln for the string "Permission denied", I'm assuming it will fail on non-English operating systems.
I don't know of any ways to categorize ln exit reasons, or at least any documentation about specific exit codes you could test with $?, but you can test for relevant permissions with the standard test or [ command:
SOURCEFILE="/opt/myapp"
DESTDIR="${1}"
DESTTARGET="${DESTDIR}/myapp"
if [ ! -d "$DESTDIR" -o ! -e "$SOURCEFILE" ]; then
    echo "Source file does not exist or destination directory does not exist." >&2
elif [ ! -r "$SOURCEFILE" -o ! -w "$DESTDIR" ]; then
    echo "Source file is not readable or destination directory is not writable." >&2
    # Run sudo command here
else
    # Should work, run command here
fi
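Note that a branch containing only a comment is a syntax error in bash, so the placeholders above need real commands. A completed sketch using the ln invocations from the question (paths taken from there):
SOURCEFILE="/opt/myapp"
DESTDIR="${1}"
DESTTARGET="${DESTDIR}/myapp"
if [ ! -d "$DESTDIR" -o ! -e "$SOURCEFILE" ]; then
    echo "Source file does not exist or destination directory does not exist." >&2
    exit 1
elif [ ! -r "$SOURCEFILE" -o ! -w "$DESTDIR" ]; then
    echo "The target directory requires root privileges to access."
    sudo ln -s "$SOURCEFILE" "$DESTTARGET" || exit 1
else
    ln -s "$SOURCEFILE" "$DESTTARGET" || exit 1
fi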

Quick bash script to run a script in a specified folder?

I am attempting to write a bash script that changes directory and then runs an existing script in the new working directory.
This is what I have so far:
#!/bin/bash
cd /path/to/a/folder
./scriptname
scriptname is an executable file that exists in /path/to/a/folder - and (needless to say), I do have permission to run that script.
However, when I run this mind numbingly simple script (above), I get the response:
scriptname: No such file or directory
What am I missing?! the commands work as expected when entered at the CLI, so I am at a loss to explain the error message. How do I fix this?
Looking at your script makes me think that the script you want to launch is located in the initial directory. Since you change the directory before executing it, it won't work.
I suggest the following modified script:
#!/bin/bash
SCRIPT_DIR=$PWD
cd /path/to/a/folder
"$SCRIPT_DIR"/scriptname
Alternatively, put some diagnostics into the script:
cd /path/to/a/folder
pwd
ls
./scriptname
which'll show you what it thinks it's doing.
I usually have something like this in my useful script directory:
#!/bin/bash
# Provide usage information if no arguments were supplied
if [[ "$#" -le 0 ]]; then
    echo "Usage: $0 <executable> [<argument>...]" >&2
    exit 1
fi
# Get the executable name by removing the last slash and anything before it
X="${1##*/}"
# Get the directory by removing the executable name
D="${1%$X}"
# Check if the directory exists
if [[ -d "$D" ]]; then
    # If it does, cd into it
    cd "$D"
else
    if [[ "$D" ]]; then
        # Complain if a directory was specified but does not exist
        echo "Directory '$D' does not exist" >&2
        exit 1
    fi
fi
# Check if the executable is, well, executable
if [[ -x "$X" ]]; then
    # Run the executable in its directory with the supplied arguments
    exec ./"$X" "${@:2}"
else
    # Complain if the executable does not exist or is not executable
    echo "Executable '$X' does not exist in '$D'" >&2
    exit 1
fi
Usage:
$ cdexec
Usage: /home/archon/bin/cdexec <executable> [<argument>...]
$ cdexec /bin/ls ls
ls
$ cdexec /bin/xxx/ls ls
Directory '/bin/xxx/' does not exist
$ cdexec /ls ls
Executable 'ls' does not exist in '/'
One source of such error messages under those conditions is a broken symlink.
However, you say the script works when run from the command line. I would also check to see whether the directory is a symlink that's doing something other than what you expect.
Does it work if you call it in your script with the full path instead of using cd?
#!/bin/bash
/path/to/a/folder/scriptname
What about when called that way from the command line?
