Shell script unit testing: How to mock up a complex utility program - bash

I'm working on unit tests for some legacy shell scripts.
In the real world, scripts often call utility programs
like find, tar, cpio, grep, sed, rsync, date and so on with rather complex command lines containing many options. Sometimes regular expressions or wildcard patterns are constructed on the fly and used as well.
An example: a shell script, usually invoked by cron at regular intervals, has the task of mirroring some huge directory trees from one computer to another using the utility rsync.
Several types of files and directories should be excluded from the
mirroring process:
#!/usr/bin/env bash
...
function mirror() {
    ...
    COMMAND="rsync -aH$VERBOSE$DRY $PROGRESS $DELETE $OTHER_OPTIONS \
        $EXCLUDE_OPTIONS $SOURCE_HOST:$DIRECTORY $TARGET"
    ...
    if eval $COMMAND
    then ...
    else ...
    fi
    ...
}
...
As Michael Feathers wrote in his famous book Working Effectively with Legacy Code, a good unit test runs very fast and does not touch the network, the file system, or any database.
Following Michael Feathers' advice, the technique to use here is dependency injection. The object to replace here is the utility program rsync.
My first idea: in my shell script testing framework (I use bats) I manipulate $PATH in such a way that a mock rsync is found instead of
the real rsync utility. This mock could check the supplied command line parameters and options. The same would go for other utilities used in this part of the script under test.
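A minimal sketch of that idea, assuming a bats setup function; the directory layout and log file name are made up, and the stub simply records the arguments it was called with and pretends to succeed:

setup() {
    MOCK_BIN="$(mktemp -d)"
    export MOCK_LOG="$MOCK_BIN/rsync_args"
    cat > "$MOCK_BIN/rsync" <<'EOF'
#!/usr/bin/env bash
# fake rsync: log the arguments and report success
printf '%s\n' "$@" >> "$MOCK_LOG"
exit 0
EOF
    chmod +x "$MOCK_BIN/rsync"
    export PATH="$MOCK_BIN:$PATH"
}

Each test can then run the script under test and assert on the contents of "$MOCK_LOG" instead of letting the real rsync touch the network.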
In my past experience, real problems in this area of scripting were often bugs caused by special characters in file or directory names, problems with quoting or encodings, missing ssh keys, wrong permissions, and so on. These kinds of bugs would escape this technique of unit testing. (I know: for some of these problems unit testing is simply not the cure.)
Another disadvantage is that writing a mock for a complex utility like rsync or find is error-prone and a tedious engineering task in its own right.
I believe the situation described above is general enough that other people might have encountered similar problems. Who has got some clever ideas and would care to share them here with me?

You can mock any command using a function, like this:
function rsync() {
    # mock things here if necessary
}
Then export the function and run the unit test:
export -f rsync
unittest
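A slightly fuller sketch of this approach in a bats test; mirror.sh and the mirror function are taken from the question, but the exact file name and the assertion are assumptions:

@test "mirror passes the expected options to rsync" {
    # mock rsync with a function that just records its arguments
    rsync() { printf '%s\n' "$@" > "$BATS_TMPDIR/rsync_args"; }
    export -f rsync            # so that child bash processes see the mock as well
    source ./mirror.sh         # hypothetical file containing the mirror() function
    mirror
    grep -q -- '-aH' "$BATS_TMPDIR/rsync_args"
}

Because shell functions take precedence over executables during command lookup, the eval $COMMAND inside mirror() calls the mock instead of the real rsync.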

Cargill's quandary: "Any design problem can be solved by adding an additional level of indirection, except for too many levels of indirection."
Why mock system commands? After all, if you are programming in Bash, the system is your target, and you should evaluate your script against the system.
A unit test, as the name suggests, gives you confidence in a unitary part of the system you are designing. So you will have to define what your unit is in the case of a bash script. A function? A script file? A command?
Given that you want to define the unit as a function, I would suggest writing down a list of well-known errors, like the ones you listed above:
Special characters in file or directory names
Problems with quoting or encodings
Missing ssh keys
Wrong permissions and so on.
Then write a test case for each of them, and try not to deviate from the system commands, since they are an integral part of the system you are delivering.
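For the first item on that list, a test along these lines exercises the real rsync against throw-away directories instead of a mock (bats syntax again; the option set is only an example):

@test "rsync copes with spaces and newlines in file names" {
    SRC="$(mktemp -d)"; DST="$(mktemp -d)"
    touch "$SRC/file with spaces" "$SRC/"$'file\nwith\nnewline'
    run rsync -aH "$SRC/" "$DST/"
    [ "$status" -eq 0 ]
    [ -e "$DST/file with spaces" ]
    [ -e "$DST/"$'file\nwith\nnewline' ]
}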

How can I generate a list of every valid syntactic operator in Bash including input and output?

According to the Bash Reference Manual, the Bash scripting language consists of four distinct subclasses of syntactic elements:
built-in commands (alias, cd)
reserved words (if, function)
parameters and variables ($, IFS)
functions (abort, end-of-file - activated with keybindings such as Ctrl-d)
Apart from reading the manual, I became curious whether there is a programmatic way to list or generate all such keywords, at least for one of the above categories. I think this could be useful in some contexts. Sometimes I wish I could see all the options available to me for what I can write at any given moment, and having that information as data, instead of a formatted manual, is convenient, focused, and can be edited, in case you want to strike out commands you know well, or that are too obscure for now.
My understanding is that Bash takes input on stdin and passes it to the running shell process. When code is distributed in production-ready form, it is compiled, so it runs faster. Unlike a Python REPL, you don't have access to the Bash source code from within Bash, so writing a program that searches through source files to find the various defined commands is not a very direct route. I mean that if you wanted to list all functions, Python has the dir() function which programmatically looks for function names in the namespace, but I don't think Bash can do that. I think it doesn't have a special syntax in its source files which makes it easy to find and identify all the keywords. Instead, they will be found if you simply enter them - like cd will "find" the program cd because $PATH returns the path to that command - but there's no special way to discover them.
Or am I wrong? Technically, you could run a "brute force" search by generating every combination of symbols of every length and recording when you did not get an "unknown command" error as a response.
Is there any other clever programmatic way to do this?
I mean I want to see a list of every symbol or string that the bash
compiler
Bash is not a compiler. It and every other shell I know are interpreters of various languages.
recognises and knows what to do with, including commands like
“ls” or just a symbol like “*”. I also want to see the inputs and
outputs for each symbol, i.e., some commands are executed in the shell
prompt by themselves, but what data type do they return?
All commands executed by the shell have an exit status, which is a number between 0 and 255. This is as close to a "return type" as you get. Many of them also produce idiosyncratic output to one or two streams (a standard output stream and a standard error stream) under some conditions, and many have other effects on the shell environment or operating environment.
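For example, the special parameter $? holds the exit status of the last command, which is the closest thing to a return value:

grep -q root /etc/passwd
echo "exit status: $?"    # 0 = match found, 1 = no match, >1 = an error occurred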
And some
require a certain data type to standard input.
I can't think of a built-in utility whose expected input is well characterized as having a particular data type. That's not really a stream-oriented concept.
I want to do this just as a rigorous way to study the language.
If you want to rigorously study the language, then you should study its manual, where everything you describe has already been compiled. You might also want to study the POSIX shell command language manual for a slightly different perspective, which is more thorough in some areas, though what it documents differs in a few details from Bash's default behavior.
If you want to compile your own summary of Bash syntax and behavior, then those are the best source materials for such an effort.
You can get a list of all reserved words and syntactic elements of bash using this trick:
help -s '*' | cut -d: -f1
Or more accurately:
help -s \* | awk -F ': ' 'NR>2&&!/variables/{print $1}'
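As an alternative not mentioned in the original answer, bash's compgen builtin can enumerate these categories directly:

compgen -k    # reserved words: if, then, while, ...
compgen -b    # builtin commands: cd, alias, ...
compgen -c    # every command name currently resolvable, including programs on $PATH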

Is it considered good practice to use binaries with their full pathname in shell scripts?

I would like to write shell scripts in a way considered good practice.
An experienced programmer friend advised using the full pathname for each external command, to avoid problems with aliases, functions, and the like happening to use the same name as an existing binary, perhaps even for malicious reasons. I understand the argument, but short commands (found via $PATH) get long very quickly, for example:
sudo socketfilterfw --setloggingmode on
becomes
/usr/bin/sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setloggingmode on
This makes quickly grasping what a script does a little harder for me. But maybe I just need to get used to this.
Looking at example scripts on GitHub, I do find some people doing the same, but most do not.
Is using the full path to a binary considered "good practice"?
No, the generally recommended practice is to rely on the PATH being correct; or sometimes, if you know the expected location of a program which is not typically already on the PATH, to augment the PATH:
PATH="$PATH:/usr/libexec/ApplicationFirewall"
sudo socketfilterfw --setloggingmode on
Hardcoding the path to a binary means you cannot easily replace it with a customized wrapper for local administrative purposes or debugging; it simply makes everyone's lives harder.
As an aside, a common (but harmless) error is to needlessly export the PATH. Unless you need child processes of the script to inherit the variable, there is no need to export it. (And in practice, you can often be fairly sure the user will already have done that in their login shell; though for system processes which are not always run from an interactive shell, this is not necessarily a given.)
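To illustrate the wrapper point above: if the script calls the command by its bare name, an administrator can shadow it for debugging with a small wrapper placed earlier on the PATH, for example a hypothetical ~/bin/socketfilterfw:

#!/bin/sh
# trace every invocation, then hand off to the real binary
echo "socketfilterfw called with: $*" >&2
exec /usr/libexec/ApplicationFirewall/socketfilterfw "$@"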

Why does Scala use a reversed shebang (!#) instead of just setting interpreter to scala

The scala documentation shows that the way to create a scala script is like this:
#!/bin/sh
exec scala "$0" "$@"
!#
/* Script here */
I know that this executes scala with the name of the script file and the arguments passed to it, and that the scala command apparently knows to read a file that starts like this and ignore everything up to the reversed shebang !#
My question is: is there any reason why I should use this (rather verbose) format for a scala script, rather than just:
#!/bin/env scala
/* Script here */
This, as far as I can tell from a quick test, does exactly the same thing, but is less verbose.
How old is the documentation? Usually, this sort of thing (often referred to as 'the exec hack') was recommended before /bin/env was common, and this was the best way to get the functionality. Note that /usr/bin/env is more common than /bin/env, and ought to be used instead.
Note that it's /usr/bin/env, not /bin/env.
There are no benefits to using an intermediate shell instead of /usr/bin/env, except running in some rare antique Unix variants where env isn't in /usr/bin. Well, technically SCO still exists, but does Scala even run there?
However the advantage of the shell variant is that it gives an opportunity to tune what is executed, for example to add elements to PATH or CLASSPATH, or to add options such as -savecompiled to the interpreter (as shown in the manual). This may be why the documentation suggests the shell form.
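For instance, a header along these lines keeps those tuning opportunities; the extra PATH entry is hypothetical, and -savecompiled is the option the manual mentions for caching the compiled script:

#!/bin/sh
PATH="$PATH:/opt/scala/bin"            # hypothetical: make sure scala is found
exec scala -savecompiled "$0" "$@"
!#
/* Script here */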
I am not on the Scala development team and I don't know what the historical motivation for the Scala documentation was.
Scala did not always support /usr/bin/env. There was no particular reason for it; I imagine the person who wrote the shell scripting support back in the mid-2000s was simply not familiar with that syntax. The documentation followed what was supported, and I added /usr/bin/env support at some point (IIRC), but never bothered changing the documentation, it would seem.

Is there any shell script and/or Makefile static code analyser?

Or: how can I ensure the reliability of my Makefiles/scripts?
Update: by shell scripts I mean the sh dialects (bash, zsh, whatever); by Makefiles I mean GNU make. I know they are different beasts, but they have much in common.
P. S. Yeah, I know, static code analysis can't verify all possible cases, and I need to write my Makefiles and shell scripts in a way that is reliable. I just need a tool that will tell me when I use bad practices, when I forget about them or don't notice them in a big script. Not to fix the errors for me, but just to take a second look.
For sh scripts, ShellCheck will do some static analysis checks, like detecting when variable modifications are hidden by subshells, when you accidentally use [ $foo=bar ] or when you neglect to quote variables that could contain spaces. It also comments on some stylistic issues like useless use of cat or using sed when you could use parameter expansion.
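As a quick illustration, a file containing exactly those pitfalls gives shellcheck something to complain about:

#!/bin/bash
# bad.sh -- run: shellcheck bad.sh
if [ $foo=bar ]; then    # always true: "$foo=bar" is one non-empty word, not a comparison
    cp $file /backup/    # unquoted $file breaks on spaces and glob characters
fi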

Compilers for shell scripts

Do you know if there's any tool for compiling bash scripts?
It doesn't matter if that tool is just a translator (for example, something that converts a bash script to a C program), as long as the translated result can be compiled.
I'm looking for something like shc (it's just an example -- I know that shc doesn't work as a compiler). Are there any other similar tools?
A Google search brings up CCsh, but it will set you back $50 per machine for a license.
The documentation says that CCsh compiles Bourne Shell (not bash ...) scripts to C code and that it understands how to replicate the functionality of 50 odd standard commands avoiding the need to fork them.
But CCsh is not open source, so if it doesn't do what you need (or expect) you won't be able to look at the source code to figure out why.
I don't think you're going to find anything, because you can't really "compile" a shell script. You could write a simple script that converts all lines to calls to system(3), then "compile" that as a C program, but this wouldn't have a major performance boost over anything you're currently using, and might not handle variables correctly. Don't do this.
The problem with "compiling" a shell script is that shell scripts just call external programs.
In theory you could actually get a good performance boost.
Think of all the
if [ x"$MYVAR" == x"TheResult" ]; then echo "TheResult Happened"; fi
(note the invocation of test, then echo, as well as the interpreting that needs to be done.)
which could be replaced by
if (!strcmp(myvar, "TheResult")) printf("TheResult Happened");
In C: no process launching, no having to do path searching. Lots of goodness.
