Deleting a directory's contents using a shell script - bash

I am a newbie to shell scripting. I want to delete all the contents of a directory that lives in the user's HOME directory, and also delete some files matching my conditions. After googling for some time, I came up with the following script.
#!/bin/bash
#!/sbin/fuser
PATH="$HOME/di"
echo "$PATH";
if [ -d $PATH ]
then
rm -r $PATH/*
fuser -kavf $PATH/.n*
rm -rf $PATH/.store
echo 'File deleted successfully :)'
fi
If I run the script, I get the following errors:
/users/dinesh/di
dinesh: line 11: rm: command not found
dinesh: line 12: fuser: command not found
dinesh: line 13: rm: command not found
File deleted successfully :)
Can anybody help me with this?
Thanks in advance.

You are modifying the PATH variable, which the OS uses to locate utilities (so that you can invoke them without typing the full path to the binary). The system cannot find rm and fuser in the directories currently listed in PATH (since you have overwritten it with the directory to be deleted), so it prints those errors.
tl;dr DO NOT use PATH as your own variable name.

PATH is a special variable that controls where the system looks for command executables (like rm, fuser, etc). When you set it to /users/dinesh/di, it then looks there for all subsequent commands, and (of course) can't find them. Solution: use a different variable name. Actually, I'd recommend using lowercase variables in shell scripts -- there are a number of uppercase reserved variable names, and if you try to use any of them you're going to have trouble. Sticking to lowercase is an easy way to avoid this.
BTW, in general it's best to enclose variables in double quotes whenever you use them, to avoid the word splitting and glob expansion the shell performs after substituting them. For example, use [ -d "$path" ] instead of [ -d $path ]. $path/* is a bit more complicated, since the * won't work inside quotes. Solution: rm -r "$path"/*.
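For instance, with a hypothetical directory name containing a space (not from the original question), the unquoted test falls apart:
dir="my docs"        # hypothetical name containing a space
[ -d $dir ]          # word-splits into extra arguments and the test errors out
[ -d "$dir" ]        # tests the single path "my docs", as intended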
Random other notes: the #!/sbin/fuser line isn't doing anything. Only the first line of the script can act as a shebang. Also, don't bother putting ; at the end of lines in shell scripts.
#!/bin/bash
path="$HOME/di"
echo "$path"
if [ -d "$path" ]
then
    rm -r "$path"/*
    fuser -kavf "$path"/.n*
    rm -rf "$path/.store"
    echo 'File deleted successfully :)'
fi

This line:
PATH="$HOME/di"
removes all the standard directories from your PATH (so commands such as rm that are normally found in /bin or /usr/bin are 'missing'). You should write:
PATH="$HOME/di:$PATH"
This keeps what was already in $PATH, but puts $HOME/di ahead of that. It means that if you have a custom command in that directory, it will be invoked instead of the standard one in /usr/bin or wherever.
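For example (a quick check you can run, assuming rm lives in one of the standard system directories):
PATH="$HOME/di:$PATH"
command -v rm       # still prints /bin/rm (or /usr/bin/rm), because the standard directories remain in PATH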
If your intention is to remove the directory $HOME/di, then you should not be using $PATH as your variable. You could use $path; variable names are case-sensitive. Or you could use $dir or any of a myriad other names. You do need to be aware of the key environment variables and avoid clobbering or misusing them. Of those, $PATH is one of the most important ($HOME is another; after those two, most of the rest matter relatively less). Conventionally, upper-case names are reserved for environment variables; use lower-case names for local variables in a script.

Related

Trying to run a function in the Bash shell gives unexpected results

I have been trying to batch convert a bunch of really old MS office files to odf formats for archival purposes, using libreoffice from the command line. For this purpose I first gather all the files in a single directory and then invoke the following command (for doc files) within said directory:
/path/to/soffice --headless --convert-to odt *doc
This works well, and the command results in all doc files within the directory being converted in one go. However, I want to avoid having to always type out the path to soffice with the necessary parameters, so I added the following to my Bash profile:
alias libreconv='function _libreconv(){ /path/to/soffice --headless --convert-to "$1" "$2"; }; _libreconv'
However, when I now try to invoke the following:
libreconv odt *doc
this results in only the first doc file in the directory being converted, after which the function exits and returns me to the prompt... Maybe I am missing something obvious (I am a CLI newb after all), but I do not understand why invoking the function converts only the first file, whereas running the soffice command directly converts them all.
Thanks in advance for any aid helping me understand what is going wrong here. :)
Because your function only passes on two parameters: the shell expands *doc to all the matching file names, but the function forwards only "$1" (the format) and "$2" (the first file).
Probably don't hardcode the path to soffice; instead, make sure your PATH includes the directory where it's installed.
The alias is completely useless here anyway; see also Why would I create an alias which creates a function?
If you wanted to create a function, try something like
libreconv () { soffice --headless --convert-to "$@"; }
The arguments "$1" and "$2" literally expand to the first two arguments. The argument "$@" expands to all the arguments, with quoting preserved (this is important if you want to handle file names with spaces in them etc; you see many scripts which incorrectly use "$*" or $@ without the quotes).
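A quick way to see the difference (a throwaway demo function, not part of the original answer):
demo() { for arg in "$@"; do printf '<%s>\n' "$arg"; done; }
demo odt "two words.doc" other.doc
# prints <odt>, <two words.doc> and <other.doc> on separate lines;
# with "$*" everything would arrive as one argument, and with unquoted
# $@ the name "two words.doc" would split into two arguments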
Tangentially, if soffice is in a weird place which you don't want in your PATH, add a symlink to it in a directory which is in your PATH. A common arrangement is to have ~/bin and populate it with symlinks to odd binaries, including perhaps scripts of your own which are installed for development in a Git working directory somewhere.
A common incantation to have in your .bash_profile or similar is
if [[ -d ~/bin ]]; then
case :$PATH: in
*:~/bin:* | *:$HOME/bin:* ) ;;
*) PATH=~/bin:$PATH;;
esac
fi
With that in place, create ~/bin if it doesn't exist (mkdir ~/bin) and then ln -s /path/to/soffice ~/bin to create a symlink to the real location.
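Concretely, the setup might look like this (with /path/to/soffice standing in for wherever the binary actually lives on your system):
mkdir -p ~/bin
ln -s /path/to/soffice ~/bin/soffice
libreconv odt *.doc     # the function above now finds soffice through PATH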

Execute program with relative path and 'PATH' environment in Bash shell

Bash Environment
Given a very simple disk structure as below
and the PATH environment variable set to dir1 and dir2 as below,
$ env|grep PATH
returns :-
PATH=/:/usr/bin:/e/path/to/directory/dir1:/e/path/to/directory/dir2
execution of the program fails as below
$ bin/prog1.exe
bash: bin/prog1.exe: No such file or directory
or also
$ /bin/prog1.exe
bash: /bin/prog1.exe: No such file or directory
however if we modify PATH to include the bin subdirectories
PATH=/:/usr/bin:/e/path/to/directory/dir1/bin:/e/path/to/directory/dir2/bin
it does of course work
$ prog1.exe
Hello from prog1 ...
My question is: how do I make paths relative to the 'environment' PATH work in bash?
In practice I am given files that contain dozens of relative paths, generated against many different virtual root locations, which I can't change.
It is also not possible to use a full path, or just the executable name (which we know works), in this scenario.
See man bash for the explanation:
If the name is neither a shell function nor a builtin, and contains no slashes, bash searches each element of the PATH for a directory containing an executable file by that name.
There's no such thing as relative path lookup. If your command name contains a / character, it is treated as an ordinary path (absolute, or relative to your current working directory); PATH is not consulted at all. If it has no / characters, the shell looks only in the exact directories listed in your PATH, not in any subdirectories under them.
Relative path lookups would raise a host of issues related to the order in which subdirectories should be searched.
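To illustrate with the paths from the question (assuming the binary really is at .../dir1/bin/prog1.exe):
PATH=/:/usr/bin:/e/path/to/directory/dir1:/e/path/to/directory/dir2
prog1.exe           # fails: PATH entries are not searched recursively
bin/prog1.exe       # fails unless ./bin/prog1.exe exists below the current directory
PATH=/e/path/to/directory/dir1/bin:$PATH
prog1.exe           # works: the directory that directly contains the file is in PATH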
As @choroba implies, you can't do what you're asking to do.
If you need to find a program in a subdirectory of one of the entries in your PATH, you'll have to iterate until you find it:
rel_path="bin/prog.exe"
IFS=: read -ra paths <<<"$PATH"
for path in "${paths[@]}"; do
    if [[ -x "$path/$rel_path" ]]; then
        exe="$path/$rel_path"
        break
    fi
done
if [[ -z "$exe" ]]; then
    echo "cannot find $rel_path"
else
    echo "found $rel_path as $exe"
fi
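If you then want to run the program that was found, a hypothetical follow-up might be:
if [[ -n "$exe" ]]; then
    "$exe" "$@"     # forward whatever arguments the script itself received
fi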

bash variable filename with underscore unrecognized?

I am writing a script that will symlink all of my bash-related dotfiles into the home directory. I want the script to check whether the filenames already exist, so that I can rename/move them first.
For some reason I can't get my if test command to recognize filenames that have an underscore in them.
When testing for files that already exist, this script:
#!/bin/bash
for name in bashrc bash_profile bash_aliases
do
filename=$HOME"/."$name
if [ -e "$filename" ]; then
echo "${filename} exists"
else
echo "${filename} doesn't exist"
fi
done
Outputs:
/home/xavier/.bashrc exists
/home/xavier/.bash_profile doesn't exist
/home/xavier/.bash_aliases doesn't exist
What is it about the underscore that is causing this behavior, and how do I fix it?
The code is correct as posted, and the underscore is not a problematic character in general.
You mention that you're symlinking the files -- if you're sure the files are there, verify that they are not broken symlinks. -e file will be false if the final target of the link doesn't exist.
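A quick way to spot a broken link (a small sketch using the .bash_profile path from the question):
f="$HOME/.bash_profile"
if [ -L "$f" ] && [ ! -e "$f" ]; then
    echo "$f is a symlink to $(readlink "$f"), which does not exist"
fi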
Other things that can cause this are:
lacking permissions
invisible unicode characters like a zero-width space
similar-looking unicode characters like bаsh_profile which has a fullwidth low line instead of an underscore.
running the script in a chroot or sandbox
checking that the file exists in a different terminal than the one used for running the script -- it could be chrooted, SSH'd to another machine or started before a directory was mounted over the dir, and therefore have a different view of the fs

Parent directory of a script

So I am a rookie in Linux and I need some help. I have to write a bash script that uses the parent directory of the script to create a file there, wherever the script happens to be. It should look like this:
If my script is in "/home/student/", I need to create, using an in-script command, another file called txt in /home/. Any ideas please? Thank you.
There's a subtlety if you want to be able to run your script from anywhere.
eg: if your script is in /home/myHome/someDir/someOther, and you want to create a file in /home/myHome/someDir wherever you are when you run your script.
To solve it, you just need to first derive the directory where your script is.
It can be done using:
SCRIPT_DIRECTORY="$(dirname "$0")"
touch "$SCRIPT_DIRECTORY/../myFile.txt"
Edit: Actually it can be even more subtle, if you want to handle symlinks. I.e.: if the symlink /home/myHome/mySymlink points at your script and is the one actually being called, then the previous snippet will use /home/myHome/ instead of /home/myHome/someDir/someOther.
To handle this case you can do
if [ -L "$0" ] && [ -x "$(which readlink)" ]; then
    ACTUAL_SCRIPT_FILE="$(readlink -mn "$0")"
else
    ACTUAL_SCRIPT_FILE="$0"
fi
SCRIPT_DIRECTORY="$(dirname "$ACTUAL_SCRIPT_FILE")"
touch "$SCRIPT_DIRECTORY/../myFile.txt"
Use .. to point to the parent directory. So you could create a file with something like
MY_SCRIPTDIR="$(dirname "$0")"
touch "${MY_SCRIPTDIR}/../abc.txt"
from your command prompt or within a shell script.
Unfortunately, the other answers either give you the current working directory instead of the directory the script is in, or they will not work if either the script or one of the directories along the way is a symbolic link rather than a real directory.
What will work is:
dirname "$(readlink -f "$0")"
Explanation:
"$0" is the name of the script as you type it in your command line. Quoting is important for the case it contains whitespace.
readlink will resolve any symbolic links along the way
dirname takes just the directory name from script's full path - it's better readable and safer for corner cases than manually looking for slashes etc.
Now, you will get the correct result even in a complex case: if your script is in /tmp, you create a symbolic link to it in /tmp/abc/, your current directory is /home, and you run /tmp/abc/your-script, it will correctly output /tmp, not /home nor /tmp/abc.
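Putting it together for the original question (a sketch that assumes the script lives at /home/student/myscript.sh and the goal is a file named txt in /home):
#!/bin/bash
script_dir="$(dirname "$(readlink -f "$0")")"   # /home/student, even if the script is reached via a symlink
touch "$script_dir/../txt"                      # creates /home/txt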

bash script doesn't find mkdir [duplicate]

I've created a simple script to check whether a folder exists and, if not, to create it. The following script
#!/bin/bash
PATH=~/Dropbox/Web_Development/
FOLDER=Test
if [ ! -d $PATH$FOLDER ]
then
echo $PATH$FOLDER 'not exists'
/bin/mkdir $PATH$FOLDER
echo $PATH$FOLDER 'has been created'
fi
works only if the mkdir command is preceded by /bin/. Failing that, bash outputs a "command not found" error.
I thought this could be related to the system $PATH variable, but that looks normal (to me); its output is as follows:
/Library/Frameworks/Python.framework/Versions/2.7/bin:/bin:/usr/local/bin:/usr/bin:/sbin:/usr/local/sbin:/usr/sbin
I'm not sure whether the order in which the different bin folders are listed makes any difference, but the /bin directory (where mkdir seems to reside on my OS X Mavericks) is there, hence I would expect bash to be able to execute it.
In fact, if I type just mkdir in a terminal, bash outputs the usage string suggesting how the mkdir command should be used. This suggests that, at least interactively, bash can resolve commands through the $PATH variable.
So what could be the cause? Is there any relation between the opening statement at the top of my .sh file - #!/bin/bash - and the "default" folder?
Thanks
Yeah, sometimes it is a bad idea to use capital letters for constant variables, because there are some default ones using the same convention. You can see some of the default variables here (scroll to the Special Parameters and Variables section). So it is better to use longer, more specific names if you don't want any clashes.
Another thing to note is that you're trying to replicate mkdir -p functionality, which creates a folder if it does not exist (it also creates all of the parent directories, which is what you need in most cases).
One more thing: you should always quote variables; otherwise they undergo word splitting and glob expansion after substitution. This may lead to serious problems. Imagine:
fileToRemove='*'
rm $fileToRemove
This code will remove all files in the current folder, not a file named * as you might expect.
One more thing: you should separate the path from the folder with /, like this: "$MY_PATH/$MY_FOLDER". That way it still works if you forget to include a trailing / in your path variable. It does not hurt to have two slashes; /home/////////user/// is exactly the same folder as /home/user/.
Sometimes it is tricky to get ~ working, so using $HOME is a bit safer and more readable anyway.
So here is your modified script:
#!/bin/bash
MY_PATH="$HOME/Dropbox/Web_Development/"
MY_FOLDER='Test'
mkdir -p "$MY_PATH/$MY_FOLDER"
The problem is that your script sets PATH to a single directory, and that single directory does not contain a program called mkdir.
Do not use PATH as the name of a variable (use it to list the directories to be searched for commands).
Do learn the list of standard environment variable names and those specific to the shell you use (e.g. bash shell variables). Or use a simple heuristic: reserved names are in upper-case, so use lower-case names for variables local to a script. (Most environment variables are in upper-case — standard or not standard.)
And you can simply ensure that the directory exists by using:
mkdir -p ~/Dropbox/Web_Development
If it already exists, no harm is done. If it does not exist, it is created, and any other directories needed on the path to it (e.g. ~/Dropbox) are also created if they are missing.
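To see for yourself why the original script could not find mkdir, you can reproduce the situation in a throwaway interactive shell (restore PATH or just close the shell afterwards):
PATH=~/Dropbox/Web_Development/
type mkdir      # now reports that mkdir is not found
mkdir Test      # "command not found", just like in the script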
