Is it considered good practice to use binaries with their full pathname in shell scripts? - bash

I would like to write shell scripts in a way considered good practice.
An experienced programmer friend advised me to use the full pathname for each external command, to avoid problems with aliases, functions, and the like that happen to use the same name as an existing binary, possibly even for malicious reasons. I understand the argument, but short commands (found via $PATH) get long very quickly. For example:
sudo socketfilterfw --setloggingmode on
becomes
/usr/bin/sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setloggingmode on
This makes quickly grasping what a script does a little harder for me. But maybe I just need to get used to this.
Looking at examples of scripts on GitHub, I do find some people doing this, but most do not.
Is using the full path to a binary considered "good practice"?

No. The generally recommended practice is to rely on the PATH being correct; or sometimes, if you know the expected location of a program that is not typically on the PATH already, to augment the PATH:
PATH="$PATH:/usr/libexec/ApplicationFirewall"
sudo socketfilterfw --setloggingmode on
Hardcoding the path to a binary means you cannot easily replace it with a customized wrapper for local administrative purposes or debugging; it simply makes everyone's lives harder.
As an aside, a common (but harmless) error is to needlessly export the PATH. Unless you need child processes of the script to inherit the variable, there is no need to export it. (And in practice, you can often be fairly sure the user will already have done that in their login shell; though for system processes which are not always run from an interactive shell, this is not necessarily a given.)
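A minimal sketch of that distinction, reusing the example above:
#!/bin/sh
# PATH is normally already exported by the login shell, so assigning to it
# here both affects this script's own command lookup and is inherited by
# child processes such as sudo:
PATH="$PATH:/usr/libexec/ApplicationFirewall"
sudo socketfilterfw --setloggingmode on

# export PATH   # would only matter if PATH were somehow not in the
#               # environment yet; usually it is redundant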

Related

Why is eval evil in makefiles

I have had several people tell me at this point that eval is evil in makefiles. I originally took their word for it, but now I'm starting to question it. Take the following makefile:
%.o:
	$(eval targ=$*)
	echo making $(targ)

%.p:
	echo making $*
I understand that if you then ran make "a;blah;".o, it would run blah (which could be an rm -rf, or worse). However, if you ran make "a;blah;".p you would get the same result without the eval. Furthermore, if you have permission to run make, you also have permission to run blah directly, and wouldn't need to run make at all. So now I'm wondering: is eval really an added security risk in makefiles, and if so, what should be avoided?
Why is eval evil?
Because it grants the whole power of the language to things you actually don't want to have that power.
Often it is used as "poor man's metaprogramming": you construct some piece of code and then run it. It often looks like eval("do stuff with " + thing), where thing is only known at runtime because it is supplied from outside.
However, if you don't make sure that thing belongs to some tiny subset of the language you need in that particular case (e.g., it is a string representation of one valid name), your code grants permissions to stuff you didn't intend. For example, if thing is "apples; steal all oranges", then oranges will be stolen.
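The shell version of the same trap (a contrived sketch; the variable and commands are made up for illustration):
#!/bin/sh
# "thing" arrives from outside, e.g. from user input:
thing='apples; echo all oranges stolen'
# The intent was a single ls invocation, but eval parses and runs both commands:
eval "ls $thing"    # executes: ls apples; echo all oranges stolen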
If you do make sure that thing belongs to the subset of the language you actually need, then two problems arise:
You are reimplementing language features (parsing source), which is not DRY and is often a sign of misusing the language.
If you resort to this, it means simpler means are not suitable and your use case is somewhat complicated, which makes validating your input harder.
Thus, it's really easy to break security with eval, and taking enough precautions to make it safe is hard. That's why, if you see an eval, you should suspect a possible security flaw. That's just a heuristic, not a law.
eval is a very powerful tool, as powerful as the whole language, and it's too easy to shoot yourself in the foot with it.
Why is this particular use of eval not good?
Imagine a task that requires making some steps that depend on a file, and that can be done with various files (say, a user supplies a VirtualBox image of a machine that is to be brought up and integrated into the existing network infrastructure).
Imagine, say, a lazy administrator who automated this task: all the commands are written in a makefile, because it fits better than an sh script (some steps depend on others and sometimes don't need to be re-done).
The administrator made sure that all the commands are OK and correct, and granted sudoers permission to run make with that specific makefile. Now, if the makefile contains a string like yours, then by using a properly crafted name for your VirtualBox image you could pwn the system, or something like that.
Of course, I had to stretch far to make this particular case a problem, but it's a potential problem anyway.
Makefiles usually offer simple contracts: you name the target, and some very specific stuff (written in the makefile) gets done. Using eval the way you've used it offers a different contract: the same stuff as above, but you can also supply commands in some complicated way, and they will get executed too.
You could try patching the contract by making sure that $* cannot cause any trouble (one possible sketch follows below). Describing exactly what that means could be an interesting exercise in language if you want to keep as much flexibility in target names as possible.
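For instance, here is a hedged sketch of such a patch: reject stems containing anything outside a conservative character set before using them (illustrative only; recipe lines must be indented with tabs, and quoting edge cases such as stems containing single quotes remain, which rather proves how hard this is):
%.o:
	@case '$*' in (*[!A-Za-z0-9._-]*) echo "unsafe target name" >&2; exit 1;; esac
	$(eval targ=$*)
	echo making $(targ)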
Otherwise, you should be aware of the extended contract and not use solutions like this where the extension would cause problems. If you intend your solution to be reusable by as many people as possible, you should make its contract cause as few problems as possible, too.

Shell script unit testing: How to mock up a complex utility program

I'm into unit testing some legacy shell scripts.
In the real world, scripts often call utility programs like find, tar, cpio, grep, sed, rsync, and date with rather complex command lines containing a lot of options. Sometimes regular expressions or wildcard patterns are constructed and used.
An example: a shell script, usually invoked by cron at regular intervals, has the task of mirroring some huge directory trees from one computer to another using the rsync utility. Several types of files and directories should be excluded from the mirroring process:
#!/usr/bin/env bash
...
function mirror() {
    ...
    COMMAND="rsync -aH$VERBOSE$DRY $PROGRESS $DELETE $OTHER_OPTIONS \
             $EXCLUDE_OPTIONS $SOURCE_HOST:$DIRECTORY $TARGET"
    ...
    if eval $COMMAND
    then ...
    else ...
    fi
    ...
}
...
As Michael Feathers wrote in his famous book Working Effectively with Legacy Code, a good unit test runs very fast and does not touch the network or the file system, and does not open any database.
Following Michael Feathers' advice, the technique to use here is dependency injection. The object to replace is the rsync utility program.
My first idea: in my shell-script testing framework (I use bats), I manipulate $PATH so that a mock rsync is found instead of the real rsync utility. This mock could check the supplied command-line parameters and options. The same goes for the other utilities used in this part of the script under test.
In my past experience, real problems in this area of scripting were often bugs caused by special characters in file or directory names, problems with quoting or encodings, missing ssh keys, wrong permissions, and so on. Those kinds of bugs would escape this technique of unit testing. (I know: for some of these problems, unit testing is simply not the cure.)
Another disadvantage is that writing a mock for a complex utility like rsync or find is error-prone and a tedious engineering task of its own.
I believe the situation described above is general enough that other people have encountered similar problems. Who has some clever ideas and would care to share them here?
You can mock any command using a function, like this:
function rsync() {
    : # mock things here if necessary (the body needs at least one command)
}
Then export the function and run the unit test:
export -f rsync
unittest
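A hedged sketch of how this can look with bats (the script name mirror.sh and the --delete assertion are made up for illustration; BATS_TEST_TMPDIR is provided by recent bats-core):
#!/usr/bin/env bats

@test "mirror passes --delete to rsync" {
    # Mock rsync: record the arguments instead of touching the network.
    rsync() { echo "rsync $*" >> "$BATS_TEST_TMPDIR/calls.log"; return 0; }
    export -f rsync

    run bash ./mirror.sh                # hypothetical script under test
    [ "$status" -eq 0 ]
    grep -q -- '--delete' "$BATS_TEST_TMPDIR/calls.log"
}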
Cargill's quandary:
"Any design problem can be solved by adding an additional level of indirection, except for too many levels of indirection."
Why mock system commands? After all, if you are programming in Bash, the system is your target, and you should evaluate your script against the system.
A unit test, as the name suggests, gives you confidence in a unitary part of the system you are designing. So you will have to define what your unit is in the case of a bash script. A function? A script file? A command?
Given that you want to define the unit as a function, I would suggest writing out the list of well-known errors you gave above and writing a test case for each (see the sketch after this list):
Special characters in file or directory names
Problems with quoting or encodings
Missing ssh keys
Wrong permissions and so on.
Also, try not to deviate from the system commands, since they are an integral part of the system you are delivering.
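For example, a sketch of a bats test for the first item, exercising the real file system rather than a mock (the mirror function, its two-argument interface, and the file mirror.sh are assumptions about the script under test):
@test "mirror copes with spaces and quotes in file names" {
    source ./mirror.sh                  # hypothetical: make the function visible
    src="$BATS_TEST_TMPDIR/src"
    mkdir -p "$src"
    touch "$src/weird 'name' (1).txt"

    run mirror "$src" "$BATS_TEST_TMPDIR/target"    # assumed interface
    [ "$status" -eq 0 ]
    [ -e "$BATS_TEST_TMPDIR/target/weird 'name' (1).txt" ]
}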

Determine if bash/zsh/etc. is running under Midnight Commander

Simple question: I'd like to know how to tell whether the current shell is running as an mc subshell or not. If it is, I'd like to enter a degraded mode without some features mc can't handle.
In particular, I'd like this to
Be as portable as possible
Not rely on anything outside the shell and basic universal external commands.
Though it's not documented in the man page, a quick experiment shows that mc sets two environment variables: $MC_TMPDIR and $MC_SID. (It also sets $HISTCONTROL, but that's not specific to mc; it affects the behavior of bash, and could have been set by something other than mc.)
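Based on that observation, a minimal check relying on the undocumented variable might look like this:
# e.g. in ~/.bashrc: detect the Midnight Commander subshell
if [ -n "$MC_SID" ]; then
    : # running under mc; arrange for the degraded mode here
fi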
If you don't want to depend on undocumented features, you can always set an environment variable yourself. For example, in bash:
mc() { MC_IS_RUNNING=1 command mc "$@" ; }
Entering a "degraded mode" is another matter; I'm not sure how you'd do that. I don't know of any way in bash to disable specified features, but you could disable selected built-in commands by defining functions that override them, as sketched below. What features do you have in mind?
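For instance (an illustrative sketch; which built-ins to override depends on what actually misbehaves under mc):
if [ -n "$MC_IS_RUNNING" ]; then
    # shadow the built-in with a function so it refuses to run in degraded mode
    jobs() { echo "jobs is disabled inside mc" >&2; return 1; }
fi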

syntax-check a VimL script

I have a sizable Vim script (a .vim file, in VimL syntax). I'd like to check (but not execute!) the file for simple syntax errors.
How do I accomplish this?
I just want a very rough syntax check. Something along the lines of perl -c or pyflakes.
Here is a syntax checker for VimL: https://github.com/syngan/vim-vimlint/
I don't think one exists (I'm relatively sure, as much as one can be). VimL is an internal language of Vim (and only Vim), and there aren't many tools developed for it.
I tried searching on vim.org and several other places, with no luck. Not surprising, because I've never heard of one either.
So you're either stuck with running the script, or switching to an outside language like Python, Perl or Ruby.
https://github.com/osyo-manga/vim-watchdogs
vim-watchdogs is apparently a syntax checker for Vim; it says that it supports many languages, including VimL.
If you use Vundle, you can just drop this into your vimrc:
Plugin 'git://github.com/osyo-manga/vim-watchdogs.git'
..and then run:
:PluginInstall
..to set it up (Vundle is a very nifty plugin manager). If you have Syntastic, you might want to be careful and disable it first, and then see whether vim-watchdogs is an adequate replacement (since it says it supports all those languages anyway).
It is a safe bet that when you have multiple syntax checkers running, you will need to put your "dogs on a leash", so to speak, by configuring one to check the languages that the other does not, and vice versa. If you do not, there will be, at best, collisions, duplications, or misdirections. At worst, you will have all of the above and more.
Make sure that you always back up your ~/.vim directory (or your VIMRUNTIME directory if you install things at the global level); you will be glad you did. Hope that helped you or someone else out, good luck! Sorry you had to wait 7.5 months for a response, heh :)
There's now a second option: vim-lint (as opposed to vimlint)

Compilers for shell scripts

Do you know if there's any tool for compiling bash scripts?
It doesn't matter if that tool is just a translator (for example, something that converts a bash script to a C program), as long as the translated result can be compiled.
I'm looking for something like shc (it's just an example; I know that shc doesn't work as a compiler). Are there any other similar tools?
A Google search brings up CCsh, but it will set you back $50 per machine for a license.
The documentation says that CCsh compiles Bourne shell (not bash ...) scripts to C code, and that it understands how to replicate the functionality of 50-odd standard commands, avoiding the need to fork them.
But CCsh is not open source, so if it doesn't do what you need (or expect) you won't be able to look at the source code to figure out why.
I don't think you're going to find anything, because you can't really "compile" a shell script. You could write a simple script that converts all lines to calls to system(3) and then "compile" that as a C program, but this wouldn't give a major performance boost over whatever you're currently using, and it might not handle variables correctly. Don't do this.
The problem with "compiling" a shell script is that shell scripts just call external programs.
In theory you could actually get a good performance boost.
Think of all the

if [ x"$MYVAR" == x"TheResult" ]; then
    echo "TheResult Happened"
fi

(note the invocation of test, then echo, as well as the parsing and interpreting that has to be done on every run)
which could be replaced by
if ( !strcmp(myvar, "TheResult") ) printf("TheResult Happened");
In C: no process launching, no having to do path searching. Lots of goodness.
