Issue with setting $PATH directories - macOS

For some strange reason, I'm getting a "No such file or directory" error for my $PATH variable. I have tried to edit my path using export, changing it from what it was originally to everything from a single directory up to the original full list.
When there is one directory (e.g., export PATH=/bin), I get "/bin: is a directory". But once I add more than one directory (e.g., export PATH=/bin:/sbin), I get "No such file or directory".
I'm curious to see what the cause of this issue is!

Re: your comment:
The error /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/local/git/bin:/usr/local/mysql/bin: No such file or directory will be generated if you have a line which says:
$PATH
maybe on its own, or maybe you have $PATH=.... That is, the shell is trying to execute a program named:
/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/local/git/bin:/usr/local/mysql/bin
Lose the $ on the left-hand side.
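For instance, compare (these lines are illustrative, not taken from the asker's actual file):
$PATH                   # wrong: the shell expands this and tries to run the whole value as a command
$PATH=/usr/bin:/bin     # also wrong, for the same reason
PATH=/usr/bin:/bin      # right: a plain assignment, with no $ on the left-hand side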

I'm not sure you are using the export variant. You almost certainly have spaces in there and you shouldn't, as per the following transcript:
pax> PATH= /bin
bash: /bin: is a directory
pax> PATH= /bin:/sbin
bash: /bin:/sbin: No such file or directory
The first happens because you're temporarily setting the path to an empty string while attempting to run that directory as a command. That's because you can do things like:
pax> xyzzy=1
pax> echo $xyzzy
1
pax> xyzzy=2 bash -c 'echo $xyzzy'
2
pax> echo $xyzzy
1
In other words, it's a way of changing an environment variable for a single command, and having it automatically revert when the command is finished.
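Incidentally, that same mechanism is how you deliberately run a single command with a modified PATH; the directory below is made up purely for illustration:
PATH=/opt/extra/bin:$PATH ls    # ls runs with the extra directory prepended; PATH reverts as soon as ls finishes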
The second case is simply because nothing named /bin:/sbin exists at all, so that failure is reported before the shell ever gets as far as complaining that you're trying to run a directory.
Setting a variable in bash is a no-space thing (unless you have spaces in your directory names, in which case they should be quoted). In addition, the directories need to be colon-separated. Hence you're looking for things like:
PATH=/bin
PATH=/bin:/sbin
PATH="/bin:/sbin:/directory with spaces in it:$HOME/bin"

Setting the variable with export at the prompt only changes it for the current terminal session.
Write your PATH inside ~/.bash_profile if you want to change it permanently.
For this modification to work you have to close your current terminal and reopen it.
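For example, a line like the following in ~/.bash_profile keeps everything already on the PATH and prepends one extra directory (here $HOME/bin, borrowed from the earlier example) each time a new login shell starts:
export PATH="$HOME/bin:$PATH"
After saving the file, open a new terminal, or run source ~/.bash_profile to pick the change up in the current one.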

Related

scp error when defining a "PATH" variable in a bash script

So this is my script
#!/bin/bash
PATH=/SomeFolder/file2.txt;
scp -3 user@server1:/SomeFolder/file.txt user@server2:$PATH;
I get this error
main.sh: line 3: scp: command not found
If I put /SomeFolder/file2.txt in place of "$PATH" it still doesn't work - same error. Only after I remove the entire second line (the PATH definition) does it work.
I simplified my script; in reality the path is produced by executing a script on another server, but that doesn't matter. I tested it exactly as you see it and concluded that the error is due to PATH being defined in the first place.
It is happening because PATH is a system variable that lists the directories where programs and scripts are looked for. You can view its value by executing echo $PATH. In your script you are setting PATH to /SomeFolder/file2.txt, so the scp program, which usually lives in /usr/bin/, can't be found. Just change the name of the PATH variable in your script to something else.
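A minimal corrected version of the script, with the variable simply renamed (dest_path is an arbitrary name chosen for illustration):
#!/bin/bash
dest_path=/SomeFolder/file2.txt
scp -3 user@server1:/SomeFolder/file.txt user@server2:"$dest_path"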

bash script doesn't find mkdir [duplicate]

I've created a simple script to check if a folder exists and, if it doesn't, to create it. The script that follows
#!/bin/bash
PATH=~/Dropbox/Web_Development/
FOLDER=Test
if [ ! -d $PATH$FOLDER ]
then
    echo $PATH$FOLDER 'not exists'
    /bin/mkdir $PATH$FOLDER
    echo $PATH$FOLDER 'has been created'
fi
works only if the mkdir command is preceded by /bin/. Failing that, bash outputs a "command not found" error.
I thought this could be related to the system $PATH variable, but it looks normal (to me); its value is the following:
/Library/Frameworks/Python.framework/Versions/2.7/bin:/bin:/usr/local/bin:/usr/bin:/sbin:/usr/local/sbin:/usr/sbin
I'm not sure whether the order in which the different bin folders are listed makes any difference, but the /bin directory (where mkdir seems to reside on my OS X Mavericks system) is there, so I would expect bash to be able to execute it.
In fact, if I type just mkdir in the terminal, I get the usage string showing how the mkdir command should be used. This suggests that, interactively at least, bash is able to find mkdir through the $PATH variable.
So what could be the cause? Is there any relation between the shebang at the top of my .sh file (#!/bin/bash) and the "default" folder?
Thanks
Yeah, sometimes it is a bad idea to use capital letters for constant variables, because there are some default ones using the same convention. You can see some of the default variables here (Scroll to Special Parameters and Variables section). So it is better to use long names if you don't want to get any clashes.
Another thing to note is that you're trying to replicate mkdir -p functionality, which creates a folder only if it does not exist (it also creates all of the parent directories, which is what you need in most cases).
One more thing: you always have to quote variables, otherwise they get expanded by the shell. This may lead to some serious problems. Imagine this:
fileToRemove='*'
rm $fileToRemove
This code will remove all files in the current folder, not a file named * as you might expect.
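With quotes, the * is not treated as a wildcard, so rm sees the literal name:
rm "$fileToRemove"    # tries to remove a single file literally named *, which is what was intended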
One more thing: you should separate the path from the folder with /, like this: "$MY_PATH/$MY_FOLDER". That way it still works if you forget to include the trailing / in your path variable. It does not hurt to have two slashes; /home/////////user/// is exactly the same folder as /home/user/.
Sometimes it is tricky to get ~ working, so using $HOME is a bit safer and more readable anyway.
So here is your modified script:
#!/bin/bash
MY_PATH="$HOME/Dropbox/Web_Development/"
MY_FOLDER='Test'
mkdir -p "$MY_PATH/$MY_FOLDER"
The problem is that your script sets PATH to a single directory, and that single directory does not contain a program called mkdir.
Do not use PATH as the name of a variable (use it to list the directories to be searched for commands).
Do learn the list of standard environment variable names and those specific to the shell you use (e.g. bash shell variables). Or use a simple heuristic: reserved names are in upper-case, so use lower-case names for variables local to a script. (Most environment variables are in upper-case — standard or not standard.)
And you can simply ensure that the directory exists by using:
mkdir -p ~/Dropbox/Web_Development
If it already exists, no harm is done. If it does not exist, it is created, and any other directories needed on the path to it (e.g. ~/Dropbox) are also created if they are missing.
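Putting that together with the quoting and $HOME advice above, the whole check-and-create block can shrink to one line:
mkdir -p "$HOME/Dropbox/Web_Development/Test"   # creates any missing directories along the path; does nothing if they already exist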

Why does this script work in the current directory but fail when placed in the path?

I wish to replace my failing memory with a very small shell script.
#!/bin/sh
if ! [ -a $1.sav ]; then
    mv $1 $1.sav
    cp $1.sav $1
fi
nano $1
is intended to save the original version of a script. If the original has been preserved before, it skips the move-and-copy-back (and I use move-and-copy-back to preserve the original timestamp).
This works as intended if, after I make it executable with chmod, I launch it from within the directory where I am editing, e.g. with
./safe.sh filename
However, when I move it into /usr/bin and then I try to run it in a different directory (without the leading ./) it fails with:
-bash: /usr/bin/safe.sh: /bin/sh: bad interpreter: Text file busy
My question is, when I move this script into the path (verified by echo $PATH) why does it then fail?
D'oh? Inquiring minds want to know how to make this work.
The . command is not normally used to run standalone scripts, and that seems to be what is confusing you. . is more typically used interactively to add new bindings to your environment (e.g. defining shell functions). It is also used to similar effect within scripts (e.g. to load a script "library").
Once you mark the script executable (per the comments on your question), you should be able to run it equally well from the current directory (e.g. ./safe.sh filename) or from wherever it is in the path (e.g. safe.sh filename).
You may want to remove .sh from the name, to fit with the usual conventions of command names.
BTW: watch the capitalization of if in the script; a capitalized If would not be recognized by the shell.
The error bad interpreter: Text file busy occurs if the script is open for write (see this SE question and this SF question). Make sure you don't have it open (e.g. in an editor) when attempting to run it.
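One quick way to confirm that, assuming lsof is available, is to ask what still has the script open:
lsof /usr/bin/safe.sh    # lists any process (an editor, for example) that still has the file open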

Deleting a directory contents using shell scripts

I am a newbie to shell scripting. I want to delete all the contents of a directory that is in the user's HOME directory, and also delete some files matching my conditions. After googling for some time, I have created the following script.
#!/bin/bash
#!/sbin/fuser
PATH="$HOME/di"
echo "$PATH";
if [ -d $PATH ]
then
    rm -r $PATH/*
    fuser -kavf $PATH/.n*
    rm -rf $PATH/.store
    echo 'File deleted successfully :)'
fi
If I run the script, I get the following errors:
/users/dinesh/di
dinesh: line 11: rm: command not found
dinesh: line 12: fuser: command not found
dinesh: line 13: rm: command not found
File deleted successfully :)
Can anybody help me with this?
Thanks in advance.
You are modifying the PATH variable, which the OS uses to define where utilities are found (so that you can invoke them without having to type the full path to the binary). The system cannot find rm and fuser in the folders currently specified by PATH (since you overwrote it with the directory to be deleted), so it prints those errors.
tl;dr DO NOT use PATH as your own variable name.
PATH is a special variable that controls where the system looks for command executables (like rm, fuser, etc). When you set it to /users/dinesh/di, it then looks there for all subsequent commands, and (of course) can't find them. Solution: use a different variable name. Actually, I'd recommend using lowercase variables in shell scripts -- there are a number of uppercase reserved variable names, and if you try to use any of them you're going to have trouble. Sticking to lowercase is an easy way to avoid this.
BTW, in general it's best to enclose variables in double-quotes whenever you use them, to avoid trouble with some parsing the shell does after replacing them. For example, use [ -d "$path" ] instead of [ -d $path ]. $path/* is a bit more complicated, since the * won't work inside quotes. Solution: rm -r "$path"/*.
Random other notes: the #!/sbin/fuser line isn't doing anything. Only the first line of the script can act as a shebang. Also, don't bother putting ; at the end of lines in shell scripts.
#!/bin/bash
path="$HOME/di"
echo "$path"
if [ -d "$path" ]
then
rm -r "$path"/*
fuser -kavf "$path"/.n*
rm -rf "$path/.store"
echo 'File deleted successfully :)'
fi
This line:
PATH="$HOME/di"
removes all the standard directories from your PATH (so commands such as rm that are normally found in /bin or /usr/bin are 'missing'). You should write:
PATH="$HOME/di:$PATH"
This keeps what was already in $PATH, but puts $HOME/di ahead of that. It means that if you have a custom command in that directory, it will be invoked instead of the standard one in /usr/bin or wherever.
If your intention is to remove the directory $HOME/di, then you should not be using $PATH as your variable. You could use $path; variable names are case sensitive. Or you could use $dir or any of a myriad other names. You do need to be aware of the key environment variables and avoid clobbering or misusing them. Of the key environment variables, $PATH is one of the most key ($HOME is another; actually, after those two, most of the rest are relatively less important). Conventionally, upper case names are reserved for environment variables; use lower case names for local variables in a script.

Does Bash have version issues that would prevent me from executing files?

I've created a bash shell script file that I can run on my local bash (version 4.2.10) but not on a remote computer (version 3.2). Here's what I'm doing
A script file (some_script.sh) exists in a local folder
I've done $ chmod 755 some_script.sh to make it an executable
Now, I try $ ./some_script.sh
On my computer, this runs fine. On the remote computer, this returns a Command not found error:
./some_script.sh: Command not found.
Also, on the remote computer, executable files have stars (*) following their names. I don't know if this makes any difference, but I still get the same error when I include the star.
Is this because of the bash shell version? Any ideas to make it work?
Thanks!
The command not found message can be a bit misleading. The "command" in question can be either the script you're trying to execute or the shell specified on the shebang line.
For example, on my system:
% cat foo.sh
#!/no/such/dir/sh
echo hello
% ./foo.sh
./foo.sh: Command not found.
./foo.sh clearly exists; it's the interpreter /no/such/dir/sh that doesn't exist. (I find that the error message varies depending on the shell from which you invoke foo.sh.)
So the problem is almost certainly that you've specified an incorrect interpreter name on line one of some_script.sh. Perhaps bash is installed in a different location (it's usually /bin/bash, but not always.)
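A quick way to check both halves of that on the remote machine (the output shown here is only an example):
% head -n 1 some_script.sh
#!/bin/bash
% which bash
/usr/local/bin/bash
If the interpreter named on line one doesn't exist at that path on the remote system, either correct the shebang or run the script as bash ./some_script.sh, which bypasses the shebang entirely.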
As for the * characters in the names of executable files, those aren't actually part of the file names. The -F option to the ls command causes it to show a special character after certain kinds of files: * for executables, / for directories, @ for symlinks, and so forth. Probably on the remote system you have ls aliased to ls -F or something similar. If you type /bin/ls, bypassing the alias, you should see the file names without the appended * characters; if you type /bin/ls -F, you should see the *s again.
Adding a * character in a command name doesn't do what you think it's doing, but it probably won't make any difference. For example, if you type
./some_script.sh*
the * is a wild card, and the command name expands to a list of all files in the current directory whose names match the pattern (this is completely different from the meaning of * as an executable file in ls -F output). Chances are there's only one such file, so
./some_script.sh* is probably equivalent to ./some_script.sh. But don't type the *; it's unnecessary and can cause unexpected results.

Resources