How to get the newly-installed version within a Debian postinst script?

Per the Debian Policy Manual, my postinst script is getting called at upgrade and configure time, as "postinst configure old-version", where old-version is the previously installed version (possibly null). I want to determine new-version, i.e. the version that is currently being configured (upgraded to).
The environment variable $DPKG_MAINTSCRIPT_PACKAGE contains the package name; there does not seem to be an equivalent _VERSION field. /var/lib/dpkg/status gets updated AFTER postinst runs, so I can't seem to parse it out of there, either.
Any ideas?

The best method I have found for this is to use a placeholder variable in your .postinst (or other maintainer scripts):
case "$1" in
configure)
new_version="__NEW_VERSION__"
# Do something interesting interesting with $new_version...
;;
abort-upgrade|abort-remove|abort-deconfigure)
# Do nothing
;;
*)
echo "Unrecognized postinst argument '$1'"
;;
esac
Then in debian/rules, replace the placeholder variable with the proper version number at build time:
# Must not depend on anything. This is to be called by
# binary-arch/binary-indep in another 'make' thread.
binary-common:
	dh_testdir
	dh_testroot
	dh_lintian
	< ... snip ... >
	# Replace __NEW_VERSION__ with the actual new version in any control files
	for pkg in $$(dh_listpackages -i); do \
		sed -i -e 's/__NEW_VERSION__/$(shell $(SHELL) debian/gen_deb_version)/' debian/$$pkg/DEBIAN/*; \
	done
	# Note dh_builddeb *must* come after the above code
	dh_builddeb
The resulting .postinst snippet, found in debian/<package-name>/DEBIAN/postinst, will look like:
case "$1" in
configure)
new_version="1.2.3"
# Do something interesting interesting with $new_version...
;;
abort-upgrade|abort-remove|abort-deconfigure)
# Do nothing
;;
*)
echo "Unrecognized postinst argument '$1'"
;;
esac

VERSION=$(zless /usr/share/doc/$DPKG_MAINTSCRIPT_PACKAGE/changelog* \
    | dpkg-parsechangelog -l- -SVersion)
Advantages over other solutions here:
Works regardless of whether changelog is compressed or not
Uses dpkg's changelog parser instead of regular expressions, awk, etc.

By the time postinst is run, all the package files have been installed and dpkg's database has been updated, so you can get the just-installed version with:
dpkg-query --show --showformat='${Version}' packagename
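For illustration, a minimal sketch of how this suggestion would slot into the configure branch of the postinst, using $DPKG_MAINTSCRIPT_PACKAGE from the question so the package name is not hard-coded (the echo is just a placeholder):
case "$1" in
    configure)
        # dpkg exports DPKG_MAINTSCRIPT_PACKAGE to maintainer scripts,
        # so the package can ask dpkg for its own (new) version.
        new_version="$(dpkg-query --show --showformat='${Version}' "$DPKG_MAINTSCRIPT_PACKAGE")"
        echo "Configuring $DPKG_MAINTSCRIPT_PACKAGE $new_version"
        ;;
esac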

I use the following somewhat dirty command in the postinst script:
NewVersion=$(zcat /usr/share/doc/$DPKG_MAINTSCRIPT_PACKAGE/changelog.gz | \
head -1 | perl -ne '$_=~ /.*\((.*)\).*/; print $1;')

Add the following to the debian/rules:
override_dh_installdeb:
	dh_installdeb
	for pkg in $$(dh_listpackages -i); do \
		sed -i -e 's/__DEB_VERSION__/$(DEB_VERSION)/' debian/$$pkg/DEBIAN/*; \
	done
It will replace any occurrence of __DEB_VERSION__ in your debian scripts with the version number.
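If $(DEB_VERSION) is not already defined in your rules file, one way to define it (assuming a dpkg-dev recent enough to ship the snippet) is to include dpkg's pkg-info.mk near the top of debian/rules:
# Defines DEB_VERSION, DEB_VERSION_UPSTREAM, DEB_SOURCE, etc.
# from the top entry of debian/changelog.
include /usr/share/dpkg/pkg-info.mk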

Why can't you hard-code the version into the postinst script at packaging time?

Try this:
VERSION=`dpkg -s $DPKG_MAINTSCRIPT_PACKAGE | sed -n 's/^Version: //p'`

Related

unary operator expected on first run of bash script - how do I work around this?

#!/bin/bash
#script complains if our lastdownload file doesn't exist.
latestripcord=`elinks -dump https://cancel.fm/ripcord/ -no-numbering| grep Ripcord_Win | cut -c 5-`
#standard elinks dump scripty goodness. the cut -c 5- is to strip the leading zero
lastdownloadripcord=`cat lastdownload`
#we have stored the last download in a text file.
version=`echo $lastdownloadripcord | cut -d _ -f3| cut -d . -f 1-3`
#maybe we can do more with the version?
if [ $latestripcord == $lastdownloadripcord ];then
echo "latest version $version installed"
#to do - strip out and store the latest version number somewhere for use in the script.
else
echo $latestripcord|tee lastdownload| curl -sS $latestripcord > ripcord.zip
unzip ripcord.zip -d ./ripcord
#if we have a new version, update last downloaded version and download latest. Unzip to its own dir
fi
Essentially - I'm dumping out a download link for an application that's shipped as a zip file, storing the link to compare later so I know a new version is out, and if so, downloading the latest version.
I've tested this with a "fake" new version, by editing out the last download file. However on first run, there's no lastdownload file which results in 2 error messages.
cat: lastdownload: No such file or directory
./autoripcord.sh: line 9: [: https://cancel.fm/dl/Ripcord_Win_0.4.24.zip: unary operator expected
The former is because there's no lastdownload file, the latter because there's nothing for the statement on line 9 to compare to. I'm fine on subsequent runs since the file is generated and there's a value to compare.
What would be the 'right' way to handle this? In theory, I could create a dummy file (but that solves the first, not the second error). In theory I can add another loop to check for the existence of the temporary storage file but that feels like overkill.
Replace
if [ $latestripcord == $lastdownloadripcord ];then
With
if [[ $latestripcord == $lastdownloadripcord ]];then
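If you prefer to keep the plain POSIX [ test, quoting both operands (and silencing cat on the first run) avoids the error as well; a small sketch along those lines:
# Quoted operands keep [ happy even when the variable is empty,
# and 2>/dev/null hides the "No such file" noise on the first run.
lastdownloadripcord=$(cat lastdownload 2>/dev/null)
if [ "$latestripcord" = "$lastdownloadripcord" ]; then
    echo "latest version $version installed"
fi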

Generating parameters for `docker run` through command expansion from .env files

I'm facing some problems to pass some environment parameters to docker run in a relatively generic way.
Our first iteration was to load a .env file into the environment via these lines:
set -o allexport;
. "${PROJECT_DIR}/.env";
set +o allexport;
And then manually typing the --env VARNAME=$VARNAME as options for the docker run command. But this can be quite annoying when you have dozens of variables.
Then we tried to just pass the file with --env-file .env. At first glance it seems to work, but it doesn't, because --env-file does not play well with quotes around the variable values.
Here is where I started doing crazy/ugly things. The basic idea was to do something like:
set_docker_parameters()
{
grep -v '^$' "${PROJECT_DIR}/.env" | while IFS= read -r LINE; do
printf " -e %s" "${LINE}"
done
}
docker run $(set_docker_parameters) --rm image:label command
Where the parsed lines are like VARIABLE="value", VARIABLE='value', or VARIABLE=value. Blank lines are discarded by the piped grep.
But docker run complains all the time about not being called properly. When I expand the result of set_docker_parameters I get what I expected, and when I copy its result and replace $(set_docker_parameters), then docker run works as expected too, flawless.
Any idea on what I'm doing wrong here?
Thank you very much!
P.S.: I'm trying to make my script 100% POSIX-compatible, so I'll prefer any solution that does not rely on Bash-specific features.
Based on the comments of @jordanm I devised the following solution:
docker_run_wrapper()
{
    # That's not ideal, but in any case it's not directly related to the question.
    cmd=$1

    set --  # Unset all positional arguments ($# becomes 0)

    # We don't have arrays (we want to be POSIX compatible), so we use
    # the positional parameters "$@" as a sort of substitute, appending
    # new values to them. The loop reads the file directly instead of
    # through a pipe, so it runs in the current shell and the 'set --'
    # calls are not lost in a subshell.
    while IFS= read -r LINE; do
        # Skip blank lines (the original grep -v '^$' did this).
        [ -n "${LINE}" ] || continue
        set -- "$@" --env "${LINE}"
    done < "${PROJECT_DIR}/.env"

    # "$@" now expands to one --env VAR=value pair per non-blank line
    # of the .env file, each value kept as a single word.
    docker run "$@" "image:label" /bin/sh -c "${cmd}"
}
Then again, this is not the code I wrote for my particular use case, but a simplification that shows the basic idea. If you can rely on having Bash, it can be much cleaner: instead of overloading "$@" you can use an array.
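For reference, a rough sketch of that Bash-only variant with an array (not POSIX; the image name is just a placeholder, as above):
docker_run_wrapper() {
    cmd=$1
    args=()
    while IFS= read -r line; do
        # Skip blank lines, otherwise add one --env VAR=value pair per line.
        [ -n "$line" ] && args+=(--env "$line")
    done < "${PROJECT_DIR}/.env"
    docker run "${args[@]}" "image:label" /bin/sh -c "$cmd"
}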

Is there a command line tool in Golang to only check syntax of my source code?

For example there is a -c option in Ruby that checks syntax before running a code:
C:\>ruby --help
Usage: ruby [switches] [--] [programfile] [arguments]
-c check syntax only
C:\>ruby -c C:\foo\ruby\my_source_code.rb
Syntax OK
Is there a similar functionality in Go?
P.S. An example from Ruby is only because I know it in Ruby. Not because of trolling or something.
You can use gofmt to check for syntax errors without actually building the project.
gofmt -e my_file.go > /dev/null
You can then check the exit status in $?: 0 means the file parsed cleanly, while a non-zero status (2) means errors were found. Redirecting to /dev/null discards the formatted code; the errors themselves go to stderr.
The -e option is defined as:
report all errors (not just the first 10 on different lines)
gofmt --help
usage: gofmt [flags] [path ...]
-comments=true: print comments
-cpuprofile="": write cpu profile to this file
-d=false: display diffs instead of rewriting files
-e=false: report all errors (not just the first 10 on different lines)
-l=false: list files whose formatting differs from gofmt's
-r="": rewrite rule (e.g., 'a[b:len(a)] -> a[b:]')
-s=false: simplify code
-tabs=true: indent with tabs
-tabwidth=8: tab width
-w=false: write result to (source) file instead of stdout
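A minimal sketch of wiring the exit-status check described above into a shell script (my_file.go is only an example name):
gofmt -e my_file.go > /dev/null
if [ $? -ne 0 ]; then
    echo "syntax errors in my_file.go (see stderr output above)"
fi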
Is there much point only checking the syntax? The Go compiler is so fast you might as well compile everything too.
In this sense, the underlying mental model is quite different from that of Ruby.
Just use go build or go install. http://golang.org/cmd/go/
Ruby is an interpreted language so a command that checks the syntax might make sense (since I assume you could potentially run the program even if there are syntax errors at some point).
Go on the other hand is a compiled language so it cannot be run at all if there are syntax errors. As such, the simplest way to know if there are errors is to build the program with go build.
In agreement with @Rick-777, I would strongly recommend using go build. It performs additional checks that gofmt does not (e.g., missing or unnecessary imports).
If you are worried about creating a binary in your source directory, you could always go build -o /dev/null to discard the output, which essentially reduces go build to a test that the code will build. That gets you syntax checking, among other things.
EDIT: note that go build doesn't generate a binary when building non-main packages, so you don't need the -o option.
Updated answer for those that don't want to settle for only checking with gofmt:
You can substitute gotype for go build to get just the front-end of the go compiler that validates both syntax and structure:
https://godoc.org/golang.org/x/tools/cmd/gotype
It's comparable in speed to gofmt, but returns all the errors you'd get from go build.
The one caveat is that it appears to require imported packages to have been installed with go install, otherwise it doesn't find them. Not sure why this is.
golang syntax checker
Place the below code in a file called gochk in a bin dir and chmod 0755.
Then run gochk --help
#!/bin/bash
#
# gochk v1.0 2017-03-15 - golang syntax checker - ekerner@ekerner.com
# see --help
# usage and version
if \
    test "$1" = "-?" || \
    test "$1" = "-h" || \
    test "$1" = "--help" || \
    test "$1" = "-v" || \
    test "$1" = "--version"
then
    echo "gochk v1.0 2017-03-15 - golang syntax checker - ekerner@ekerner.com"; echo
    echo "Usage:"
    echo "  $0 -?|-h|--help|-v|--version  # show this"
    echo "  $0 [ file1.go [ file2.go . . . ] ]  # syntax check"
    echo "If no args passed then *.go will be checked"; echo
    echo "Examples:"
    echo "  $0 --help  # show this"
    echo "  $0  # syntax check *.go"
    echo "  $0 cmd/my-app/main.go handlers/*.go  # syntax check list"; echo
    echo "Authors:"
    echo "  http://stackoverflow.com/users/233060/ekerner"
    echo "  http://stackoverflow.com/users/2437417/crazy-train"
    exit
fi
# default to .go files in cwd
gos="$@"
if test $# -eq 0; then
    gos=$(ls -1 *.go 2>/dev/null)
    if test -z "$gos"; then
        exit
    fi
fi
# test each one using gofmt
# credit to Crazy Train at
# http://stackoverflow.com/questions/16863014/is-there-a-command-line-tool-in-golang-to-only-check-syntax-of-my-source-code
#
for go in $gos; do
    gofmt -e "$go" >/dev/null
done

The -q option of bsdtar

I ran across the following code in a bash script.
# See if bsdtar can recognize the file
if bsdtar -tf "$file" -q '*' &>/dev/null; then
    cmd="bsdtar"
else
    continue
fi
What does the '-q' option mean? I did not find any information about it in the help output of the bsdtar command.
Thank you!
From the bsdtar man page:
-q (--fast-read)
(x and t mode only) Extract or list only the first archive entry
that matches each pattern or filename operand. Exit as soon as
each specified pattern or filename has been matched. By default,
the archive is always read to the very end, since there can be
multiple entries with the same name and, by convention, later
entries overwrite earlier entries. This option is provided as a
performance optimization.
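So in the script above it is just a cheap way to ask whether bsdtar recognizes the archive at all: list at most one matching entry, then stop reading. A standalone sketch of the same idea (archive.tar is only an example name):
# The exit status tells us whether bsdtar could read the archive;
# -q stops after the first entry matching '*' instead of scanning to the end.
if bsdtar -tf archive.tar -q '*' >/dev/null 2>&1; then
    echo "bsdtar can read this archive"
else
    echo "not an archive bsdtar understands"
fi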

Detect whether script is being run by Bash

I have put together the following to detect if a script is being run by Bash or not:
################################################################################
# Checks whether execution is going through Bash, aborting if it isn't. TinoSino
current_shell="$(
ps `# Report a snapshot of the current processes` \
-p $$ `# select by PID` \
-o comm `# output column: Executable name` \
|\
paste `# Merge lines of files ` \
-s `# paste one file at a time instead of in parallel` \
- `# into standard output` \
|\
awk `# Pick from list of tokens` \
'{ print $NF }' `# print only last field of the command output`
)"
current_shell="${current_shell#-}" # Remove starting '-' character if present
if [ ! "${current_shell}" = 'bash' ]; then
echo "This script is meant to be executed by the Bash shell but it isn't."
echo 'Continuing from another shell may lead to unpredictable results.'
echo 'Execution will be aborted... now.'
return 0
fi
unset current_shell
################################################################################
I am not asking you specifically to code review it because you would be sending me to CodeReview; my question is:
how would you go about testing whether this "execution guard" put at the top of my script does indeed do its job reliably?
I am thinking about installing Virtual Machines and on each machine to install things like zsh, csh, etc. But it looks way too time-consuming to me. Better ways to do this?
Should you spot an immediate mistake do point it out to me though please. Just glaring bugs waving their legs waiting to be squashed should be squashed, I think.
This is better written as
if [ -z "$BASH_VERSION" ]
then
echo "Please run me in bash"
exit 1
fi
As for testing, get a list of non-bash shells from /etc/shells, and just run the script with each of them verifying that you get your error message.
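A rough sketch of that test loop, assuming the guard lives in a script called guard.sh that exits non-zero when it is not running under Bash:
# Run the guarded script under every non-bash shell listed in /etc/shells
# and report whether it aborted as expected.
grep -v bash /etc/shells | while IFS= read -r sh; do
    [ -x "$sh" ] || continue
    if "$sh" ./guard.sh >/dev/null 2>&1; then
        echo "$sh: did NOT abort"
    else
        echo "$sh: aborted as expected"
    fi
done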
getshver (recommended)
whatshell
which_interpreter (silly)
I would only recommend rolling your own if it isn't critical to "guarantee" correct results. I don't think such a guarantee is even possible, but most of the time you're at most targeting a few shells and only care about modern versions. Very few people should even care about version detection. Writing portable code while going outside of POSIX requires knowing what you're doing.
Don't bother detecting the shell just to abort. If people want to shoot themselves in the foot by ignoring the shebang that's their problem.
