Run scripts without typing the extension - bash

I use Git Bash on Windows.
Is there a way to make it run Windows batch files (.cmd) without typing the file extension?
The same question applies to *.sh files. I prefer giving bash scripts a .sh extension so they are easier to distinguish in Explorer.
PS. Aliases or extensionless proxy scripts are not welcome.
Hopefully there is some geeky way to do this. So, still looking for an answer...

You could use any flavour of bash (Cmder, Git Bash, MS bash, Cygwin, GNU utilities…), which will let you use any Unix command without an extension, even if it is stored as a .exe.
Pay attention to CR/LF line endings, and run dos2unix on each newly created shell script to prevent misinterpretation of line endings.
You won't be able to run a .sh file without its extension… but you can create an alias for each script:
$ alias myscript="/usr/local/bin/myscript.sh"
Put them in your ~/.bashrc
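If you have many scripts, here is a sketch of how such aliases could be generated automatically in ~/.bashrc (assuming the scripts live in /usr/local/bin; adjust to wherever yours are):
# Sketch: auto-alias every .sh script in /usr/local/bin
for script in /usr/local/bin/*.sh; do
    name=$(basename "$script" .sh)    # strip the directory and the .sh suffix
    alias "$name=$script"             # e.g. myscript -> /usr/local/bin/myscript.sh
done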

First of all, a bit of context about file extensions:
A filename extension is an identifier specified as a suffix to the
name of a computer file. The extension indicates a characteristic of
the file contents or its intended use. A file extension is typically
delimited from the filename with a full stop (period), but in some
systems it is separated with spaces.
Some file systems implement filename extensions as a feature of the
file system itself and may limit the length and format of the
extension, while others treat filename extensions as part of the
filename without special distinction.
Filesystems for UNIX-like operating systems (opposed to DOS/Windows) do not separate the
extension metadata from the rest of the file name. The dot character
is just another character in the main filename, and filenames can have
multiple extensions, usually representing nested transformations, ...
Regarding your shell scripts:
So basically, on Unix file extensions are not important and not mandatory, so you can simply omit them. If for any reason you want to keep them (and I believe you should), then you can define an alias for each script (refer to https://askubuntu.com/questions/17536/how-do-i-create-a-permanent-bash-alias).
You must also keep in mind the EOL character ('\n' vs '\r\n'), which differs between Unix and Windows.
Regarding your Windows batch files: you cannot run them directly in a Unix-like environment, so you will not be able to run them from your Git Bash unless you use a tool like GH4W (github generate ssh key on windows) or git-cmd.bat (What is the exact meaning of Git Bash?).
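As a sketch of both points (the script names here are hypothetical), you can normalize line endings with dos2unix and hand a batch file to cmd.exe explicitly:
dos2unix myscript.sh    # normalize CR/LF to LF so bash reads the script correctly
cmd //c mybatch.cmd     # run a batch file via cmd.exe (the double slash avoids MSYS path mangling)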

Maybe I've found a solution, though it requires some additional coding. Instead of insulting the user when a non-existent command is typed, you could rewrite the code so that it executes the command after appending '.sh' or '.cmd'.
GitHub: insult-linux-unix-bash-user-when-typing-wrong-command
Also do a search for command_not_found_handle. This is available on most Linux systems and may or may not be available in Git Bash.

This is something I was looking for myself and, following on from FithAxiom's answer, I have found a solution. It feels ugly, because it basically means overriding the "command not found" handling and searching for the file ourselves. But it does achieve the desired effect.
This is a modification of an answer given in another thread. Add it to your .bashrc file. You can modify it to suit your needs (improvements are also welcome).
command_not_found_handle()
{
    cmd=$1
    shift
    args=( "$@" )                      # capture the remaining arguments
    local IFS=:                        # split PATH on colons, without leaking IFS
    for dir in $PATH; do
        if [ -f "$dir/$cmd.cmd" ]; then { "$dir/$cmd.cmd" "${args[@]}"; return; }
        elif [ -f "$dir/$cmd.ps1" ]; then { powershell.exe -file "$dir/$cmd.ps1" "${args[@]}"; return; }
        elif [ -f "$dir/$cmd.sh" ]; then { "$dir/$cmd.sh" "${args[@]}"; return; }
        fi
    done
    # Replicate the standard "command not found" error
    echo "bash: $cmd: command not found" >&2
    return 127
}
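With this in your .bashrc, a bare command name falls through to the handler whenever bash fails to find the command; for example, assuming a hypothetical build.cmd somewhere on your PATH:
$ build release    # no "build" is found, so the handler locates and runs build.cmd release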

You can create a ~/.bashrc file and override the command-not-found handler; bash lets you hook into this case. See the bash manual's description of command_not_found_handle for more info.
In the .bashrc file you can define the command_not_found_handle function that bash invokes.
command_not_found_handle() {
    unset -f command_not_found_handle   # avoid recursing if the .sh file is missing too
    command="$1".sh
    shift
    exec "$command" "$@"
}
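Note that exec performs a normal PATH lookup, so this variant only works if the .sh file is itself on your PATH; for example, with a hypothetical deploy.sh:
$ deploy --dry-run    # "deploy" is not found, so the handler execs deploy.sh --dry-run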

Related

Trying to run a function in the Bash shell gives unexpected results

I have been trying to batch convert a bunch of really old MS office files to odf formats for archival purposes, using libreoffice from the command line. For this purpose I first gather all the files in a single directory and then invoke the following command (for doc files) within said directory:
/path/to/soffice --headless --convert-to odt *doc
This works well, and the command results in all doc files within the directory being converted in one go. I want to however avoid having to always type out the path to soffice with the necessary parameters, so I added the following to my Bash profile:
alias libreconv='function _libreconv(){ /path/to/soffice --headless --convert-to "$1" "$2"; }; _libreconv'
However, when I now try to invoke the following:
libreconv odt *doc
this results in only the first doc file in the directory being converted, after which the function exits and returns me to the prompt... Maybe I am missing something obvious (I am a CLI newb after all), but I do not understand why invoking the function converts only the first file, whereas running the soffice command directly converts all of them.
Thanks in advance for any aid helping me understand what is going wrong here. :)
Because your function only passes along its first two parameters.
Probably don't hardcode the path to soffice; instead, make sure your PATH includes the directory where it's installed.
The alias is completely useless here anyway; see also Why would I create an alias which creates a function?
If you wanted to create a function, try something like
libreconv () { soffice --headless --convert-to "$@"; }
The arguments "$1" and "$2" expand to exactly the first two arguments. "$@" expands to all the arguments, with quoting preserved (this is important if you want to handle file names with spaces in them, etc.; you see many scripts which incorrectly use "$*" or $@ without the quotes).
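A quick way to see the difference (printargs is just an illustrative helper):
printargs () { for a in "$@"; do printf '<%s>\n' "$a"; done; }
printargs "one two" three    # prints <one two> then <three>; "$*" would yield a single <one two three>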
Tangentially, if soffice is in a weird place which you don't want in your PATH, add a symlink to it in a directory which is in your PATH. A common arrangement is to have ~/bin and populate it with symlinks to odd binaries, including perhaps scripts of your own which are installed for development in a Git working directory somewhere.
A common incantation to have in your .bash_profile or similar is
if [[ -d ~/bin ]]; then
    case :$PATH: in
        *:~/bin:* | *:$HOME/bin:* ) ;;
        *) PATH=~/bin:$PATH;;
    esac
fi
With that, you can create ~/bin if it doesn't exist (mkdir ~/bin) and then ln -s /path/to/soffice ~/bin to create a symlink to the real location.
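Spelled out as commands (reusing the placeholder path from above):
mkdir -p ~/bin                   # create the directory if it doesn't already exist
ln -s /path/to/soffice ~/bin/    # symlink the real binary into a directory on PATH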

How to make a bash command executable from any directory in Windows?

I have made a few bash scripts that I have saved to individual folders. I have Windows 10. The scripts have functions that executes commands in bash. I am now able to execute these .sh scripts from any directory, since I have added the folders they are saved in to the path variable. But I want to make it even easier for me, and be able to only have to type the function in the bash console to execute the command.
This is an example of one of the scripts. It is saved as file_lister.sh. I am able to run this by typing "file_lister.sh" but I want to run it by only typing the function name in the script, which is "list_files". How do I do this? Thanks in advance.
#!/bin/bash
function list_files(){
    cp C:/Users/jmell/OneDrive/projects/file_lister/file_lister.py file_lister.py
    python file_lister.py
    cwd=$(pwd)
    if [ "$cwd" != "/c/Users/jmell/OneDrive/projects/file_lister" ]
    then
        rm file_lister.py
    fi
}
list_files
Unless you source all of your scripts (e.g. in your .bashrc file), the functions won't be defined. However, your function is doing a lot of extra work that it really shouldn't be. The example script can be reduced to
#!/bin/bash
python C:/Users/jmell/OneDrive/projects/file_lister/file_lister.py
Better yet, keep in mind that the shebang line tells the system which interpreter to use to run the file. Instead of creating a wrapper script, add the following line to file_lister.py:
#!/path/to/python
At that point, I'd also recommend renaming file_lister.py to just file_lister.
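For completeness, a sketch of the sourcing approach mentioned above, assuming a hypothetical script location:
# in ~/.bashrc — source the script so list_files is defined in interactive shells;
# remove the trailing "list_files" call from the script first, or it will run on every new shell
source ~/scripts/file_lister.sh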

Why does this script work in the current directory but fail when placed in the path?

I wish to replace my failing memory with a very small shell script.
#!/bin/sh
if ! [ -a "$1.sav" ]; then
    mv "$1" "$1.sav"
    cp "$1.sav" "$1"
fi
nano "$1"
is intended to save the original version of a script. If the original has been preserved before, it skips the move-and-copy-back (and I use move-and-copy-back to preserve the original timestamp).
This works as intended if, after I make it executable with chmod I launch it from within the directory where I am editing, e.g. with
./safe.sh filename
However, when I move it into /usr/bin and then I try to run it in a different directory (without the leading ./) it fails with:
-bash: /usr/bin/safe.sh: /bin/sh: bad interpreter: Text file busy
My question is, when I move this script into the path (verified by echo $PATH) why does it then fail?
D'oh? Inquiring minds want to know how to make this work.
The . command is not normally used to run standalone scripts, and that seems to be what is confusing you. . is more typically used interactively to add new bindings to your environment (e.g. defining shell functions). It is also used to similar effect within scripts (e.g. to load a script "library").
Once you mark the script executable (per the comments on your question), you should be able to run it equally well from the current directory (e.g. ./safe.sh filename) or from wherever it is in the path (e.g. safe.sh filename).
You may want to remove .sh from the name, to fit with the usual conventions of command names.
BTW: I note that you mistakenly capitalize If in the script.
The error bad interpreter: Text file busy occurs if the script is open for write (see this SE question and this SF question). Make sure you don't have it open (e.g. in a editor) when attempting to run it.
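For example, one way to check whether some process still has the script open, assuming lsof is installed:
lsof /usr/bin/safe.sh    # lists any process that still has the script file open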

Using the bash eval builtin to allow ini files to have command strings in them

I've been putting together a bash script that takes an ini file (with a format that I've been developing alongside the script) and reads through the file, performing the actions specified.
One of the functions in the ini format allows for a shell command to be passed in and run using eval. I'm running into a problem when the commands contain a variable name.
eval (or the shell in general) doesn't seem to be substituting the values correctly and most of the time it seems to replace all the variable names with blanks, breaking the command. Subshells to create a string output seem to have the same problem.
The strange part is that this worked on my development machine (running Linux Mint 13), but when I moved the script to the target machine running CentOS 5.8, these issues showed up.
Some examples of code I read in from the ini file:
shellcmd $toolspath/program > /path/file
shellcmd parsedata=$( cat /path/file )
These go through a script function that strips off the leading shellcmd and then evals the string using
eval ${scmd}
Any ideas on what might be causing the weird behavior and anything I can try to resolve the problem? My ultimate goal here is to have the ability to read in a line from a file and have my script execute it and be able to correctly handle script variables from the read in command.
Using Bash 3.2.25 (CentOS 5) I tried this, and it works fine:
toolspath='/bin'
while read prefix scmd
do
    if [[ $prefix == 'shellcmd' ]]
    then
        echo "Evaluating: <$scmd>"
        eval ${scmd}
    else
        echo "$prefix ignored"
    fi
done < ini
with:
shellcmd $toolspath/ls > /home/user1/file
shellcmd parsedata=$( cat /home/user1/file )
shellcmd echo $parsedata
I obviously had to set paths. Most likely you had to change the paths when you switched machines. Do your paths have embedded spaces?
How did you transfer the files? Did you perchance go via Windows? On a whim I did a unix2dos on the ini file and I got similar symptoms to those you describe. That's my best guess.
I found a suitable alternative for the one command that was causing this issue, so I'll mark this question as solved.
From all my investigation, it appears that I've discovered an obscure bug in the bash shell. The particular command I was trying to eval returned a terminal code in its output, and because the shell was in a read loop with input redirected from a file, this resulted in some strange behavior. My solution was to move the call to this particular command outside of the read loop. It still doesn't solve the root problem, which I believe to be a bug in the bash shell. Hope this will help someone else who has run into this same (obscure) issue.
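For reference (this is not the author's actual fix), a common way to keep commands run inside such a loop from interfering with the loop's own input is to read the file on a separate file descriptor:
while read -u 3 prefix scmd
do
    eval ${scmd}    # the command's stdin is still the terminal, not the ini file on fd 3
done 3< ini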

In bash2, how do I find the name of a script being sourced?

Here's my situation:
I have multiple versions of a script in source code control where the name differs by a path name component (ex: scc/project/1.0/script and scc/project/1.1/script). In another directory, there is a symlink to one of these scripts. The symlink name is not related to the script name, and in some cases may change periodically. That symlink, in turn, is sourced by bash using the '.' command.
What I need to know: how do I determine the directory of the referenced script, on a 10 year-old system with Bash 2 and Perl 5.5? For various reasons, the system must be used, and it cannot be upgraded.
In Bash 3 or above, I use this:
dir=`perl -MCwd=realpath -MFile::Basename -e 'print dirname(realpath($ARGV[0]))' ${BASH_SOURCE[0]} $0`
Apologies for the Perl one-liner - this was originally a pure-Perl project with a very small amount of shell script glue.
I've been able to work around the fact that the ancient Perl I am using doesn't export "realpath" from Cwd, but unfortunately, Bash 2.03.01 doesn't provide BASH_SOURCE, or anything like it that I've been able to find. As a workaround, I'm providing the path information in a text file that I change manually when I switch branches, but I'd really like to make this figure out which branch I'm using on its own.
Update:
I apologize - apparently, the question as asked is not clear. I don't know in every case what the name of the symlink will be - that's what I'm trying to find out at run time. The script is occasionally executed via the symlink directly, but most often the link is the argument to a "." command running in another script.
Also, $0 is not set appropriately when the script is sourced via ".", which is the entire problem I'm trying to solve. I apologize for bluntness, but no solution that depends entirely upon $0 being set is correct. In the Perl one-liner, I use both BASH_SOURCE and $0 (BASH_SOURCE is only set when the script is sourced via ".", so the one-liner only uses $0 when it's not sourced).
Try using $0 instead of ${BASH_SOURCE[0]}. (No promises; I don't have a bash 2 around.)
$0 has the name of the program/script you are executing.
Is stat OK? Something like:
stat -c %N $file
bash's cd and pwd builtins have a -P option to resolve symlinks, so:
dir=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
works with bash 2.03
I managed to get information about the process sourcing my script using this command:
ps -ef | grep $$
This is not perfect, but it tells you which process is invoking your script. It might be possible, with some formatting, to determine the exact source.
