Perl6 REPL usage - read-eval-print-loop

Is it possible to have (Rakudo) Perl6 execute some code before dropping you into the REPL, like Python does with "python -i"?
For instance, I want to load up some modules and maybe read a side file and build some data structures from that side file before dropping into the REPL and letting the user do the things they need to do on the data structure, using the REPL as a user interface.
This is similar to, but different from, Start REPL with definitions loaded from file, though answers to this question might satisfy that one. The basic case is that, at the end of execution of any program, instead of exiting, the interpreter leaves the user at the REPL. Aside from providing a nifty, built-in, Perl6-based user interface for interactive programs, it also provides a good tool from which to debug code that otherwise exits with an error.
edit:
Selecting Zoffix's solution as the correct (so far) one as it is the only one that satisfies all requirements as stated. Here's hoping this capability gets added to the compiler or language spec.

You can load modules with the -M switch.
$ perl6 -MJSON::Tiny
To exit type 'exit' or '^D'
> to-json Array.new: 1,2,3.Str
[ 1, 2, "3" ]
>
If you want to run other code, currently you have to put it into a module first.
$ mkdir lib
$ echo 'our $bar = 42' > lib/foo.pm6
$ perl6 -Ilib -Mfoo
To exit type 'exit' or '^D'
> $bar
42
>

I'd like to provide an answer that Zoffix gave on IRC. It satisfies the basic requirement, but it is far from pretty, and it uses NQP, for which there is no user support; the NQP API ("nqp::*" calls) is not guaranteed to stay stable and can change without warning.
replify 「
    say 'Hello to your custom REPL! Type `say $a` to print the secret variable';
    my $a = "The value is {rand}";
」;

sub replify (Str:D \pre-code = '') {
    use nqp;
    my %adverbs; # command line args like --MFoo
    my \r := REPL.new: nqp::getcomp('perl6'), %adverbs;
    my \enc := %adverbs<encoding>:v.Str;
    enc && enc ne 'fixed_8' && $*IN.set-encoding: enc;

    my $*CTXSAVE := r;
    my $*MAIN_CTX;
    pre-code and r.repl-eval: pre-code, $, :outer_ctx(nqp::getattr(r, REPL, '$!save_ctx')),
        |%adverbs;
    $*MAIN_CTX and nqp::bindattr(r, REPL, '$!save_ctx', $*MAIN_CTX);
    r.repl-loop: :interactive, |%adverbs;
}

Related

Bash completion scripting - getting a "transparent proxy"-like behaviour

I am trying to write a simple Bash completion script for a program that runs its arguments as a command. A good example of this kind of program is the prime-run script provided by the nvidia-prime package:
#!/bin/bash
__NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only __GLX_VENDOR_LIBRARY_NAME=nvidia "$@"
This script sets a few environment variables, which instructs the prime driver to use the Nvidia dGPU on a hybrid system. The first argument is treated as the command, and all trailing arguments are passed through. So for example you can run prime-run code . and VSCode will start in the current directory using the dGPU.
Therefore from a completion-script POV, what we want is to basically try to complete as if the prime-run token isn't there (hence "transparent proxy"-like behaviour). To give a rather contrived example:
> prime-run journalc<TAB>
(completes journalctl)
> prime-run journalctl --us<TAB>
(completes --user)
However I am finding this surprisingly difficult in Bash (not that I know how in other shells). So the question is simple: is it possible and if so how?
Ideas I've (hopelessly) had
The simple complete -A command prime-run: the first argument gets completed as a command as expected (let's call it foo), but the following arguments are also completed as commands rather than as arguments to foo
Use some combination of compgen and complete -p to invoke the completion function of foo, but AFAIK the completion function for all foo is locally defined and thus uncallable
TL;DR
bash-completion provides a function named _command_offset (permalink), which is exactly what I need.
# A meta-command completion function for commands like sudo(8), which need to
# first complete on a command, then complete according to that command's own
# completion definition.
Keep reading if you are interested in how I got here.
So I was daydreaming the other day, when it hit me - doesn't sudo basically have the exact same behaviour I want? So the task became simple - reverse engineer the completion script for sudo. Source available here: permalink.
Turns out, most of the code has to do with completing the various options, so it's safe to simply throw most of it out:
L 8-11, 50-52: Related to sudo's edit mode. Safe to ditch.
L 19-24, 27-39, 43-49: These complete sudo's options. Safe to ditch.
So we're left with this:
_sudo()
{
    local cur prev words cword split
    _init_completion -s || return

    for ((i = 1; i <= cword; i++)); do
        if [[ ${words[i]} != -* ]]; then
            local PATH=$PATH:/sbin:/usr/sbin:/usr/local/sbin
            local root_command=${words[i]}
            _command_offset $i
            return
        fi
    done

    $split && return
} &&
    complete -F _sudo sudo sudoedit
The for and if blocks are there to deal with sudo's options that precede the "guest command". Safe to ditch (after replacing all $i with 1).
The variable $split is only referenced in _init_completion (permalink), and it seems to be used for handling different argument styles (--foo=bar v.s. --foo bar). Same with the -s flag. Irrelevant.
Appending to $PATH and setting $root_command have to do with privilege escalation. Only relevant to sudo.
So after the dust has cleared, by process of elimination, I ended up with this simple chunk of code:
_my-script()
{
    local cur prev words cword
    _init_completion || return
    _command_offset 1
} && complete -F _my-script my-script
Declaring these four local variables and calling _init_completion is standard for all completion scripts, so really it's as simple as one command. Of course someone had to write the massively-complex _command_offset function, so lucky me I guess?
Anyways, thank you for reading the story of me messing around and hopefully this will be helpful to some other person in the future.

When to use set -e

I came across set -e some time ago and I admit I love it.
Now, after some time, I'm back to writing some bash scripts.
My question is whether there are some best practices for when to use set -e and when not to use it (e.g. in small/big scripts etc.), or whether I should rather use a pattern like cmd || exit 1 to track errors?
Yes, you should always use it. People make fun of Visual Basic all the time, saying it's not a real programming language, partly because of its “On Error Resume Next” statement. Yet that is the default in shell! set -e should have been the default. The potential for disaster is just too high.
In places where it's ok for a command to fail, you can use || true or its shortened form ||:, e.g.
grep Warning build.log ||:
In fact you should go a step further, and have
set -eu
set -o pipefail
at the top of every bash script.
-u makes it an error to reference a non-existent environment variable such as ${HSOTNAME}, at the cost of requiring some gymnastics with checking ${#} before you reference ${1}, ${2}, and so on.
pipefail makes things like misspeled-command | sed -e 's/^WARNING: //' raise errors.
If your script code checks for errors carefully and properly where necessary, and handles them in an appropriate manner, then you probably don't ever need or want to use set -e.
On the other hand if your script is a simple sequential list of commands to be run one after another, and if you want the script to terminate if any one of those fail, then sticking set -e at the top would be exactly what you would want to do to keep your script simple and uncluttered. A perfect example of this would be if you're creating a script to compile a set of sources and you want the compile to stop after the first file with errors is encountered.
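A minimal sketch of that kind of sequential script (the file names here are only illustrative):
#!/bin/bash
# Stop at the first compilation failure thanks to set -e.
set -e

for src in main.c util.c parser.c; do
    echo "compiling $src"
    gcc -c "$src" -o "${src%.c}.o"   # any non-zero exit status aborts the script here
done

echo "all files compiled"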
More complex scripts can combine these methods since you can use set +e to turn its effect back off again and go back to explicit error checking.
Note that although set -e is supposed to cause the shell to exit IFF any untested command fails, it is wise to turn it off again when your code is doing its own error handling, as there can easily be weird cases where a command returns a non-zero exit status that you're not expecting (possibly even cases you won't catch in testing), and where sudden fatal termination of your script would leave something in a bad state. So, don't use set -e (or leave it turned on after using it briefly) unless you really know that you want it.
Note also that you can still define an error handler with trap ERR to do something on an error condition when set -e is in effect, as that will still be run before the shell exits.
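For illustration, a small sketch combining the two (the message text is arbitrary):
#!/bin/bash
set -e

# The ERR trap still fires before the shell exits under set -e.
trap 'echo "error: command failed with status $? near line $LINENO" >&2' ERR

echo "before"
false                # triggers the ERR trap, then the script terminates
echo "never reached"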
You love it!?
For myself, I generally prefer having a line like this in my .bashrc:
trap '/usr/games/fortune /usr/share/games/fortunes/bofh-excuses' ERR
(on Debian: apt-get install fortunes-bofh-excuses :-)
But it's only my preference ;-)
More seriously
lastErr() {
    local RC=$?
    history 1 |
    sed '
s/^ *[0-9]\+ *\(\(["'\'']\)\([^\2]*\)\2\|\([^"'\'' ]*\)\) */cmd: \"\3\4\", args: \"/;
s/$/", rc: '"$RC/"
}
trap "lastErr" ERR
Gna
bash: Gna : command not found
cmd: "Gna", args: "", rc: 127
Gna gna
cmd: "Gna", args: "gna", rc: 127
"Gna gna" foo
cmd: "Gna gna", args: "foo", rc: 127
Well, from there, you could:
trap "lastErr >>/tmp/myerrors" ERR
"Gna gna" foo
cat /tmp/myerrors
cmd: "Gna gna", args: "foo", rc: 1
Or better:
lastErr() {
    local RC=$?
    history 1 |
    sed '
s/^ *[0-9]\+ *\(\(["'\'']\)\([^\2]*\)\2\|\([^"'\'' ]*\)\) */cmd: \"\3\4\", args: \"/;
s/$/", rc: '"$RC/
s/^/$(date +"%a %d %b %T ")/"
}
"Gna gna" foo
cat /tmp/myerrors
cmd: "Gna gna", args: "foo", rc: 1
Tue 20 Nov 18:29:18 cmd: "Gna gna", args: "foo", rc: 127
... You could even add other information like $$, $PPID, $PWD or maybe your..
When this option is on, if a simple command fails for any of the reasons listed in Consequences of Shell Errors or returns an exit status value >0, and is not part of the compound list following a while, until, or if keyword, and is not a part of an AND or OR list, and is not a pipeline preceded by the ! reserved word, then the shell shall immediately exit.
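As a small illustration of those exceptions (a hedged sketch; the missing file name is made up):
#!/bin/bash
set -e

# Failures that are "tested" do not terminate the script:
if grep -q pattern no-such-file; then echo found; fi   # condition of an if: continues
grep -q pattern no-such-file || echo "no match"        # part of an OR list: continues
! true                                                 # fails, but is preceded by !: continues

# An untested failure terminates the script immediately:
grep -q pattern no-such-file
echo "never reached"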

How to find or make a Bash utility script library? [closed]

Is there any commonly used (or unjustly uncommonly used) utility "library" of bash functions? Something like Apache commons-lang for Java. Bash is so ubiquitous that it seems oddly neglected in the area of extension libraries.
If not, how would I make one?
Libraries for bash are out there, but not common. One of the reasons that bash libraries are scarce is due to the limitation of functions. I believe these limitations are best explained on "Greg's Bash Wiki":
Functions. Bash's "functions" have several issues:
Code reusability: Bash functions don't return anything; they only produce output streams. Every reasonable method of capturing that stream and either assigning it to a variable or passing it as an argument requires a SubShell, which breaks all assignments to outer scopes. (See also BashFAQ/084 for tricks to retrieve results from a function.) Thus, libraries of reusable functions are not feasible, as you can't ask a function to store its results in a variable whose name is passed as an argument (except by performing eval backflips).
Scope: Bash has a simple system of local scope which roughly resembles "dynamic scope" (e.g. Javascript, elisp). Functions see the locals of their callers (like Python's "nonlocal" keyword), but can't access a caller's positional parameters (except through BASH_ARGV if extdebug is enabled). Reusable functions can't be guaranteed free of namespace collisions unless you resort to weird naming rules to make conflicts sufficiently unlikely. This is particularly a problem if implementing functions that expect to be acting upon variable names from frame n-3 which may have been overwritten by your reusable function at n-2. Ksh93 can use the more common lexical scope rules by declaring functions with the "function name { ... }" syntax (Bash can't, but supports this syntax anyway).
Closures: In Bash, functions themselves are always global (have "file scope"), so no closures. Function definitions may be nested, but these are not closures, though they look very much the same. Functions are not "passable" (first-class), and there are no anonymous functions (lambdas). In fact, nothing is "passable", especially not arrays. Bash uses strictly call-by-value semantics (magic alias hack excepted).
There are many more complications involving: subshells; exported functions; "function collapsing" (functions that define or redefine other functions or themselves); traps (and their inheritance); and the way functions interact with stdio. Don't bite the newbie for not understanding all this. Shell functions are totally f***ed.
Source: http://mywiki.wooledge.org/BashWeaknesses
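The usual workaround for the first point is to have the function print its result and capture it with command substitution, which runs in a subshell; a minimal sketch:
# The function "returns" a value by printing it; the caller captures stdout.
# Because $( ) runs in a subshell, the function cannot set the caller's variables directly.
add() {
    echo "$(( $1 + $2 ))"
}

sum=$(add 2 3)
echo "sum=$sum"    # prints: sum=5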
One example of a shell "library" is /etc/rc.d/functions on Red Hat-based systems. This file contains functions commonly used in sysV init scripts.
I see some good info and bad info here. Let me share what I know since bash is the primary language I use at work (and we build libraries..).
Google has a decent write up on bash scripts in general that I thought was a good read: https://google.github.io/styleguide/shell.xml.
Let me start by saying you should not think of a bash library as you do libraries in other languages.
There are certain practices that must be enforced to keep a library in bash simple, organized, and most importantly, reusable.
There is no concept of returning anything from a bash function except for strings that it prints and the function's exit status (0-255).
There are expected limitations here and a learning curve especially if you're accustomed to functions of higher-level languages.
It can be weird at first, and if you find yourself in a situation where strings just aren't cutting it, you'll want to leverage an external tool such as jq.
If jq (or something like it) is available, you can start having your functions print formatted output to be parsed & utilized as you would an object, array, etc.
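As a rough sketch of that idea (assuming jq is installed; the function and field names are made up for the example):
# A function emits a small JSON document instead of a bare string,
# and the caller picks out the fields it needs with jq.
get_service_info() {
    # In a real library this would come from an actual probe.
    printf '{"name":"%s","port":%d,"running":%s}\n' "myservice" 8080 true
}

info=$(get_service_info)
name=$(jq -r '.name' <<<"$info")
port=$(jq -r '.port' <<<"$info")
echo "service $name listens on port $port"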
Function Declarations
There are two ways to declare a function in bash.
One operates within your current shell; we'll call it Fx0.
And one spawns a subshell to operate in, we'll call that Fx1.
Here are examples of how they're declared:
Fx0(){ echo "Hello from $FUNCNAME"; }
Fx1()( echo "Hello from $FUNCNAME" )
These two functions perform the same operation, indeed.
However, there is a key difference here.
Fx1 cannot perform any action that alters the current shell.
That means modifying variables, changing shell options and declaring other functions.
The latter is what can be exploited to prevent namespacing issues that can easily creep up on you.
# Fx1 cannot change the variable from a subshell
Fx0(){ Fx=0; }
Fx1()( Fx=1 )
Fx=foo; Fx0; echo $Fx
# 0
Fx=foo; Fx1; echo $Fx
# foo
That being said, the only time you should use an "Fx0" kind of function is when you want to redeclare something in the current shell.
Always use "Fx1" functions because they are safer and you don't have to worry about the naming of any functions declared within them.
As you can see below, the innocent function is overwritten inside of Fx1, however, it remains unscathed after the execution of Fx1.
innocent_function()(
    echo ":)"
)

Fx1()(
    innocent_function()( true )
    innocent_function
)
Fx1 #prints nothing, just returns true
innocent_function
# :)
This would have (likely) unintended consequences if you had used curly braces.
Examples of useful "Fx0" type functions would be specifically for changing the current shell, like so:
use_strict(){
    set -eEu -o pipefail
}
enable_debug(){
    set -Tx
}
disable_debug(){
    set +Tx
}
Regarding Declarations
The use of global variables, or at least those expected to have a value, is bad practice all the way around.
As you're building a library in bash, you don't ever want a function to rely on an external variable already being set.
Anything the function needs should be supplied to it via the positional parameters.
This is the main problem I see in libraries other folks try to build in bash.
Even if I find something cool, I can't use it because I don't know the names of the variables I need to have set ahead of time.
It leads to digging through all of the code and ultimately just picking out the useful pieces for myself.
By far, the best functions to create for a library are extremely small and don't utilize named variables at all, even locally.
Take the following for example:
serviceClient()(
    showUsage()(
        echo "This should be a help page"
    ) >&2
    isValidArg()(
        test "$(type -t "$1")" = "function"
    )
    isRunning()(
        nc -zw1 "$(getHostname)" "$(getPortNumber)"
    ) &>/dev/null
    getHostname()(
        echo localhost
    )
    getPortNumber()(
        echo 80
    )
    getStatus()(
        if isRunning
        then echo OK
        else echo DOWN
        fi
    )
    getErrorCount()(
        grep -c "ERROR" /var/log/apache2/error.log
    )
    printDetails()(
        echo "Service status: $(getStatus)"
        echo "Errors logged: $(getErrorCount)"
    )

    if isValidArg "$1"
    then "$1"
    else showUsage
    fi
)
Typically, what you would see near the top is local hostname=localhost and local port_number=80 which is fine, but it is not necessary.
It is my opinion that these things should be functional-ized as you're building to prevent future pain when all of a sudden some logic needs to be introduced for getting a value, like: if isHttps; then echo 443; else echo 80; fi.
You don't want that kind of logic placed in your main function or else you'll quickly make it ugly and unmanageable.
Now, serviceClient has internal functions that get declared upon invocation which adds an unnoticeable amount of overhead to each run.
The benefit is now you can have service2Client with functions (or external functions) that are named the same as what serviceClient has with absolutely no conflicts.
Another important thing to keep in mind is that redirections can be applied to an entire function upon declaring it; see isRunning or showUsage above.
This gets as close to object-oriented-ness as I think you should bother using bash.
. serviceClient.sh
serviceClient
# This should be a help page
if serviceClient isRunning
then serviceClient printDetails
fi
# Service status: OK
# Errors logged: 0
I hope this helps my fellow bash hackers out there.
Here's a list of "worthy of your time" bash libraries that I found after spending an hour or so googling.
https://github.com/mietek/bashmenot/
bashmenot is a library that is used by Halcyon and Haskell on Heroku. The above link points to a complete list of available functions with examples -- impressive quality, quantity and documentation.
http://marcomaggi.github.io/docs/mbfl.html
MBFL offers a set of modules implementing common operations and a script template. Pretty mature project and still active on GitHub.
https://github.com/javier-lopez/learn/blob/master/sh/lib
You need to look at the code for a brief description and examples. It has a few years of development behind it.
https://github.com/martinburger/bash-common-helpers
This has the fewest, most basic functions. For documentation, you also have to look at the code.
Variables declared inside a function but without the local keyword are global.
It's good practice to declare variables only needed inside a function with local to avoid conflicts with other functions and globally (see foo() below).
Bash function libraries always need to be 'sourced'. I prefer using the 'source' synonym instead of the more common dot (.) so I can see it better during debugging.
The following technique works in at least bash 3.00.16 and 4.1.5...
#!/bin/bash
#
# TECHNIQUES
#
source ./TECHNIQUES.source
echo
echo "Send user prompts inside a function to stderr..."
foo() {
    echo " Function foo()..." >&2            # send user prompts to stderr
    echo " Echoing 'this is my data'..." >&2 # send user prompts to stderr
    echo "this is my data"                   # this will not be displayed yet
}
#
fnRESULT=$(foo)                     # prints: Function foo()...
echo " foo() returned '$fnRESULT'"  # prints: foo() returned 'this is my data'
echo
echo "Passing global and local variables..."
#
GLOBALVAR="Reusing result of foo() which is '$fnRESULT'"
echo " Outside function: GLOBALVAR=$GLOBALVAR"
#
function fn()
{
    local LOCALVAR="declared inside fn() with 'local' keyword is only visible in fn()"
    GLOBALinFN="declared inside fn() without 'local' keyword is visible globally"
    echo
    echo " Inside function fn()..."
    echo " GLOBALVAR=$GLOBALVAR"
    echo " LOCALVAR=$LOCALVAR"
    echo " GLOBALinFN=$GLOBALinFN"
}
# call fn()...
fn
# call fnX()...
fnX
echo
echo " Outside function..."
echo " GLOBALVAR=$GLOBALVAR"
echo
echo " LOCALVAR=$LOCALVAR"
echo " GLOBALinFN=$GLOBALinFN"
echo
echo " LOCALVARx=$LOCALVARx"
echo " GLOBALinFNx=$GLOBALinFNx"
echo
The sourced function library is represented by...
#!/bin/bash
#
# TECHNIQUES.source
#
function fnX()
{
    local LOCALVARx="declared inside fnX() with 'local' keyword is only visible in fnX()"
    GLOBALinFNx="declared inside fnX() without 'local' keyword is visible globally"
    echo
    echo " Inside function fnX()..."
    echo " GLOBALVAR=$GLOBALVAR"
    echo " LOCALVARx=$LOCALVARx"
    echo " GLOBALinFNx=$GLOBALinFNx"
}
Running TECHNIQUES produces the following output...
Send user prompts inside a function to stderr...
Function foo()...
Echoing 'this is my data'...
foo() returned 'this is my data'
Passing global and local variables...
Outside function: GLOBALVAR=Reusing result of foo() which is 'this is my data'
Inside function fn()...
GLOBALVAR=Reusing result of foo() which is 'this is my data'
LOCALVAR=declared inside fn() with 'local' keyword is only visible in fn()
GLOBALinFN=declared inside fn() without 'local' keyword is visible globally
Inside function fnX()...
GLOBALVAR=Reusing result of foo() which is 'this is my data'
LOCALVARx=declared inside fnX() with 'local' keyword is only visible in fnX()
GLOBALinFNx=declared inside fnX() without 'local' keyword is visible globally
Outside function...
GLOBALVAR=Reusing result of foo() which is 'this is my data'
LOCALVAR=
GLOBALinFN=declared inside fn() without 'local' keyword is visible globally
LOCALVARx=
GLOBALinFNx=declared inside fnX() without 'local' keyword is visible globally
I found a good but old article here that gave a comprehensive list of utility libraries:
http://dberkholz.com/2011/04/07/bash-shell-scripting-libraries/
I can tell you that the lack of available function libraries has nothing to do with Bash's limitations, but rather with how Bash is used. Bash is a quick and dirty language made for automation, not development, so the need for a library is rare. Then there is a fine line between a function that needs to be shared and converting that function into a full-fledged script to be called. That is from a coding perspective; loading code into a shell is another matter, but that usually comes down to personal taste, not need. So... again, a lack of shared libraries.
Here are a few functions I use regularly
In my .bashrc
cd () {
    local pwd="${PWD}/"; # we need a slash at the end so we can check for it, too
    if [[ "$1" == "-e" ]]; then
        shift
        # start from the end
        [[ "$2" ]] && builtin cd "${pwd%/$1/*}/${2:-$1}/${pwd##*/$1/}" || builtin cd "$@"
    else
        # start from the beginning
        if [[ "$2" ]]; then
            builtin cd "${pwd/$1/$2}"
            pwd
        else
            builtin cd "$@"
        fi
    fi
}
And a version of a log()/err() exists in a function library at work for coders-- mainly so we all use the same style.
log() {
    echo -e "$(date +%m.%d_%H:%M) $@" | tee -a $OUTPUT_LOG
}
err() {
    echo -e "$(date +%m.%d_%H:%M) $@" | tee -a $OUTPUT_LOG
}
As you can see, the utilities we use here are not that exciting to share. I have another library for doing tricks around bash limitations, which I think is the best use for them, and I recommend creating your own.

What is the purpose of the : (colon) GNU Bash builtin?

What is the purpose of a command that does nothing, being little more than a comment leader, but is actually a shell builtin in and of itself?
It's slower than inserting a comment into your scripts by about 40% per call, which probably varies greatly depending on the size of the comment. The only possible reasons I can see for it are these:
# poor man's delay function
for ((x=0;x<100000;++x)) ; do : ; done
# inserting comments into string of commands
command ; command ; : we need a comment in here for some reason ; command
# an alias for `true'
while : ; do command ; done
I guess what I'm really looking for is what historical application it might have had.
Historically, Bourne shells didn't have true and false as built-in commands. true was instead simply aliased to :, and false to something like let 0.
: is slightly better than true for portability to ancient Bourne-derived shells. As a simple example, consider having neither the ! pipeline operator nor the || list operator (as was the case for some ancient Bourne shells). This leaves the else clause of the if statement as the only means for branching based on exit status:
if command; then :; else ...; fi
Since if requires a non-empty then clause and comments don't count as non-empty, : serves as a no-op.
Nowadays (that is: in a modern context) you can usually use either : or true. Both are specified by POSIX, and some find true easier to read. However there is one interesting difference: : is a so-called POSIX special built-in, whereas true is a regular built-in.
Special built-ins are required to be built into the shell; Regular built-ins are only "typically" built in, but it isn't strictly guaranteed. There usually shouldn't be a regular program named : with the function of true in PATH of most systems.
Probably the most crucial difference is that with special built-ins, any variable set by the built-in - even in the environment during simple command evaluation - persists after the command completes, as demonstrated here using ksh93:
$ unset x; ( x=hi :; echo "$x" )
hi
$ ( x=hi true; echo "$x" )
$
Note that Zsh ignores this requirement, as does GNU Bash except when operating in POSIX compatibility mode, but all other major "POSIX sh derived" shells observe this including dash, ksh93, and mksh.
Another difference is that regular built-ins must be compatible with exec - demonstrated here using Bash:
$ ( exec : )
-bash: exec: :: not found
$ ( exec true )
$
POSIX also explicitly notes that : may be faster than true, though this is of course an implementation-specific detail.
I use it to easily enable/disable variable commands:
#!/bin/bash
if [[ "$VERBOSE" == "" || "$VERBOSE" == "0" ]]; then
    vecho=":"    # no "verbose echo"
else
    vecho=echo   # enable "verbose echo"
fi

$vecho "Verbose echo is ON"
Thus
$ ./vecho
$ VERBOSE=1 ./vecho
Verbose echo is ON
This makes for a clean script. This cannot be done with '#'.
Also,
: >afile
is one of the simplest ways to guarantee that 'afile' exists but is 0 length.
A useful application for : is if you're only interested in using parameter expansions for their side-effects rather than actually passing their result to a command.
In that case, you use the parameter expansion as an argument to either : or false depending upon whether you want an exit status of 0 or 1. An example might be
: "${var:=$1}"
Since : is a builtin, it should be pretty fast.
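A couple of small examples of that pattern (the variable names and message text are arbitrary):
# Give $dir a default value without running any "real" command.
: "${dir:=/tmp}"
echo "working in $dir"

# Fail early with a usage message if the first argument is missing or empty.
: "${1:?usage: $0 <file>}"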
: can also be used for a block comment (similar to /* */ in the C language). For example, if you want to skip a block of code in your script, you can do this:
: << 'SKIP'
your code block here
SKIP
Two more uses not mentioned in other answers:
Logging
Take this example script:
set -x
: Logging message here
example_command
The first line, set -x, makes the shell print out the command before running it. It's quite a useful construct. The downside is that the usual echo Log message type of statement now prints the message twice. The colon method gets round that. Note that you'll still have to escape special characters just like you would for echo.
Cron job titles
I've seen it being used in cron jobs, like this:
45 10 * * * : Backup for database ; /opt/backup.sh
This is a cron job that runs the script /opt/backup.sh every day at 10:45. The advantage of this technique is that it makes for better looking email subjects when the /opt/backup.sh prints some output.
It's similar to pass in Python.
One use would be to stub out a function until it gets written:
future_function () { :; }
If you'd like to truncate a file to zero bytes, useful for clearing logs, try this:
:> file.log
You could use it in conjunction with backticks (``) to execute a command without displaying its output, like this:
: `some_command`
Of course you could just do some_command > /dev/null, but the :-version is somewhat shorter.
That being said I wouldn't recommend actually doing that as it would just confuse people. It just came to mind as a possible use-case.
It's also useful for polyglot programs:
#!/usr/bin/env sh
':' //; exec "$(command -v node)" "$0" "$@"
~function(){ ... }
This is now both an executable shell-script and a JavaScript program: meaning ./filename.js, sh filename.js, and node filename.js all work.
(Definitely a little bit of a strange usage, but effective nonetheless.)
Some explication, as requested:
Shell-scripts are evaluated line-by-line; and the exec command, when run, terminates the shell and replaces its process with the resultant command. This means that to the shell, the program looks like this:
#!/usr/bin/env sh
':' //; exec "$(command -v node)" "$0" "$@"
As long as no parameter expansion or aliasing is occurring in the word, any word in a shell-script can be wrapped in quotes without changing its meaning; this means that ':' is equivalent to : (we've only wrapped it in quotes here to achieve the JavaScript semantics described below)
... and as described above, the first command on the first line is a no-op (it translates to : //, or if you prefer to quote the words, ':' '//'. Notice that the // carries no special meaning here, as it does in JavaScript; it's just a meaningless word that's being thrown away.)
Finally, the second command on the first line (after the semicolon), is the real meat of the program: it's the exec call which replaces the shell-script being invoked, with a Node.js process invoked to evaluate the rest of the script.
Meanwhile, the first line, in JavaScript, parses as a string-literal (':'), and then a comment, which is deleted; thus, to JavaScript, the program looks like this:
':'
~function(){ ... }
Since the string-literal is on a line by itself, it is a no-op statement, and is thus stripped from the program; that means that the entire line is removed, leaving only your program-code (in this example, the function(){ ... } body.)
Self-documenting functions
You can also use : to embed documentation in a function.
Assume you have a library script mylib.sh, providing a variety of functions. You could either source the library (. mylib.sh) and call the functions directly after that (lib_function1 arg1 arg2), or avoid cluttering your namespace and invoke the library with a function argument (mylib.sh lib_function1 arg1 arg2).
Wouldn't it be nice if you could also type mylib.sh --help and get a list of available functions and their usage, without having to manually maintain the function list in the help text?
#!/bin/bash

# all "public" functions must start with this prefix
LIB_PREFIX='lib_'

# "public" library functions
lib_function1() {
    : This function does something complicated with two arguments.
    :
    : Parameters:
    : ' arg1 - first argument ($1)'
    : ' arg2 - second argument'
    :
    : Result:
    : " it's complicated"
    # actual function code starts here
}

lib_function2() {
    : Function documentation
    # function code here
}

# help function
--help() {
    echo MyLib v0.0.1
    echo
    echo Usage: mylib.sh [function_name [args]]
    echo
    echo Available functions:
    declare -f | sed -n -e '/^'$LIB_PREFIX'/,/^}$/{/\(^'$LIB_PREFIX'\)\|\(^[ \t]*:\)/{
s/^\('$LIB_PREFIX'.*\) ()/\n=== \1 ===/;s/^[ \t]*: \?['\''"]\?/ /;s/['\''"]\?;\?$//;p}}'
}

# main code
if [ "${BASH_SOURCE[0]}" = "${0}" ]; then
    # the script was executed instead of sourced
    # invoke requested function or display help
    if [ "$(type -t - "$1" 2>/dev/null)" = function ]; then
        "$@"
    else
        --help
    fi
fi
A few comments about the code:
All "public" functions have the same prefix. Only these are meant to be invoked by the user, and to be listed in the help text.
The self-documenting feature relies on the previous point, and uses declare -f to enumerate all available functions, then filters them through sed to only display functions with the appropriate prefix.
It is a good idea to enclose the documentation in single quotes, to prevent undesired expansion and whitespace removal. You'll also need to be careful when using apostrophes/quotes in the text.
You could write code to internalize the library prefix, i.e. the user only has to type mylib.sh function1 and it gets translated internally to lib_function1. This is an exercise left to the reader.
The help function is named "--help". This is a convenient (i.e. lazy) approach that uses the library invoke mechanism to display the help itself, without having to code an extra check for $1. At the same time, it will clutter your namespace if you source the library. If you don't like that, you can either change the name to something like lib_help or actually check the args for --help in the main code and invoke the help function manually.
I saw this usage in a script and thought it was a good substitute for invoking basename within a script.
oldIFS=$IFS
IFS=/
for basetool in $0 ; do : ; done
IFS=$oldIFS
...
This is a replacement for the code: basetool=$(basename $0)
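For comparison, a plain parameter expansion gets the same result in one line (offered here only as an alternative sketch, not as part of the original script):
# strip everything up to and including the last slash
basetool=${0##*/}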
Another way, not yet mentioned here, is the initialisation of parameters in infinite while-loops. Below is not the cleanest example, but it serves its purpose.
#!/usr/bin/env bash
[ "$1" ] && foo=0 && bar="baz"
while : "${foo=2}" "${bar:=qux}"; do
    echo "$foo"
    (( foo == 3 )) && echo "$bar" && break
    (( foo=foo+1 ))
done

Bash and Test-Driven Development

When writing more than a trivial script in bash, I often wonder how to make the code testable.
It is typically hard to write tests for bash code, due to the fact that it is low on functions that take a value and return a value, and high on functions that check and set some aspect in the environment, modify the file-system, invoke a program, etc. - functions that depend on the environment or have side effects. Thus the setup and test code become much more complicated than the code they test.
For example, consider a simple function to test:
function add_to_file() {
    local f=$1
    cat >> $f
    sort -u $f -o $f
}
Test code for this function might consist of:
add_to_file.before:
foo
bar
baz
add_to_file.after:
bar
baz
foo
qux
And test code:
function test_add_to_file() {
    cp add_to_file.{before,tmp}
    add_to_file add_to_file.tmp
    cmp add_to_file.{tmp,after} && echo pass || echo fail
    rm add_to_file.tmp
}
Here 5 lines of code are tested by 6 lines of test code and 7 lines of data.
Now consider a slightly more complicated case:
function distribute() {
    local file=$1 ; shift
    local hosts=( "$@" )
    for host in "${hosts[@]}" ; do
        rsync -ae ssh $file $host:$file
    done
}
I can't even say how to start writing a test for that...
So, is there a good way to do TDD in bash scripts, or should I give up and put my efforts elsewhere?
So here is what I learned:
There are some testing frameworks written in bash and for bash, however...
It is not so much that Bash is unsuitable for TDD (although some other languages come to mind that are a better fit), but that the typical tasks Bash is used for (installation, system configuration) are hard to write tests for, and in particular hard to set up tests for.
The poor data structure support in Bash makes it hard to separate logic from side effects, and indeed there is typically little logic in Bash scripts. That makes it hard to break scripts into testable chunks. There are some functions that can be tested, but that is the exception, not the rule.
Functions are a good thing (tm), but they can only go so far.
Nested functions can be even better, but they are also limited.
At the end of the day, with major effort some coverage can be obtained, but it will test the less interesting part of the code and will keep the bulk of the testing as good (or bad) old manual testing.
Meta: I decided to answer (and accept) my own question, because I was unable to choose between Sinan Ünür's (voted up) and mouviciel's (voted up) answers, which were equally useful and insightful. I want to note Stefano Borini's answer, which, although it did not impress me initially, I learned to appreciate over time. Also, his design patterns or best practices for shell scripts answer (voted up), referred to above, was useful.
If you are writing code at the same time as tests, try to make it high on functions that don't use anything besides their parameters and don't modify the environment. That is, if your function might as well run in a subshell, then it will be easy to test. It takes some arguments and outputs something to stdout, or to a file, or maybe it does something on the system, but the caller does not feel side effects.
Yes, you will end up with a big chain of functions passing down some WORKING_DIR variable that might as well be global, but this is a minor inconvenience compared to the task of tracking what each function reads and modifies. Enabling unit tests is just a free bonus, too.
Try to minimize cases where you need output. A little subshell abuse will go long way to keeping things nicely separated (at the expense of performance).
Instead of a linear structure, where functions are called, set some environment, and then other ones are called, all pretty much on one level, try to go for a deep call tree with minimal data going back. Returning stuff in bash is inconvenient if you adopt self-imposed abstinence from global vars...
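As a rough sketch of that style, the distribute function from the question could be split into a pure part that only prints what it would do and a thin wrapper that executes it (the names below are illustrative):
# Pure part: takes its inputs as arguments and writes the planned commands
# to stdout, touching nothing else, so it is easy to test by comparing output.
plan_distribution() {
    local file=$1; shift
    local host
    for host in "$@"; do
        echo "rsync -ae ssh $file $host:$file"
    done
}

# Side-effecting part: runs whatever the pure part planned.
distribute() {
    plan_distribution "$@" | while read -r cmd; do
        eval "$cmd"
    done
}

# A test only needs to exercise the pure function:
plan_distribution schedule.txt login1 login2
# rsync -ae ssh schedule.txt login1:schedule.txt
# rsync -ae ssh schedule.txt login2:schedule.txt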
From an implementation point of view, I suggest shUnit2 or bats.
From a practical point of view, I suggest not to give up. I use TDD on bash scripts and I confirm that it is worth the effort.
Of course, I get about twice as many lines of test than of code but with complex scripts, efforts in testing are a good investment. This is true in particular when your client changes its mind near the end of the project and modifies some requirements. Having a regression test suite is a big aid in changing complex bash code.
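For a concrete idea of what such a test can look like, here is a minimal bats sketch for the add_to_file function from the question (it assumes bats is installed and that the function lives in a sourceable file, hypothetically named add_to_file.sh):
#!/usr/bin/env bats

setup() {
    source ./add_to_file.sh                    # hypothetical file defining add_to_file()
    printf 'foo\nbar\nbaz\n' > "$BATS_TMPDIR/input"
}

@test "add_to_file appends and sorts uniquely" {
    echo qux | add_to_file "$BATS_TMPDIR/input"
    run cat "$BATS_TMPDIR/input"
    [ "$output" = "$(printf 'bar\nbaz\nfoo\nqux')" ]
}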
If you code a bash program large enough to require TDD, you are using the wrong language.
I suggest you read my previous post on best practices in bash programming; you will probably find something useful to make your bash program testable, but my statement above stands.
Design patterns or best practices for shell scripts
Writing what Meszaros calls consumer tests is hard in any language. Another approach is to verify the behavior of commands such as rsync manually, then write unit tests to prove specific functionality without hitting the network. In this slightly-modified example, $run is used to print the side-effects if the script is run with the keyword "test"
function distribute {
    local file=$1 ; shift
    for host in "$@" ; do
        $run rsync -ae ssh $file $host:$file
    done
}

if [[ $1 == "test" ]]; then
    run="echo"
else
    distribute schedule.txt $*
    exit 0
fi

#
# Built-in self-tests
#
output=$(mktemp)
expected=$(mktemp)
set -e
trap "rm $output $expected" EXIT
distribute schedule.txt login1 login2 > $output
cat << EOF > $expected
rsync -ae ssh schedule.txt login1:schedule.txt
rsync -ae ssh schedule.txt login2:schedule.txt
EOF
diff $output $expected
echo -n '.'
echo; echo "PASS"
You might want to take a look at cucumber/aruba. Did quite a nice job for me.
Additionally, you can stub just about everything you want by doing something like this:
#
# code.sh
#
some_function_calling_some_external_binary()
{
    if ! external_binary action_1; then
        # ...
        :
    fi
    if ! external_binary action_2; then
        # ...
        :
    fi
}

#
# test.sh
#
# now for the test, simply stub your external binary:
external_binary()
{
    if [ "$1" = "action_1" ]; then
        # stub action_1
        :
    elif [ "$1" = "action_2" ]; then
        # stub action_2
        :
    else
        # fall through to the real binary for anything not stubbed
        command external_binary "$@"
    fi
}
The advanced bash scripting guide has an example of an assert function but here is a simpler and more flexible assert function - just use eval of $* to test any condition.
assert() {
    if ! eval $* ; then
        echo
        echo "===== Assertion failed: \"$*\" ====="
        echo "File \"$0\", line:$LINENO line:${BASH_LINENO[*]}"
        echo line:$(caller 0)
        exit 99
    fi
}
# e.g. USAGE:
assert [[ $r == 42 ]]
assert "((r==42))"
BASH_LINENO and the caller builtin are Bash-specific.
Take a look at the Outthentic framework - it is designed to create scenarios which run any Bash code and then analyze the stdout using a formal DSL; it's pretty easy to build any TDD/blackbox test suite upon this tool.
