How to test a Makefile for missing dependencies? - bash

Is there a way to test for missing dependencies that show up when compiling a project with multiple jobs (-jN where N > 1)?
I often encounter packages, mostly open source, where the build process works fine as long as I use -j1, or -jN where N is a relatively low value such as 4 or 8, but if I use a higher value like 48 (admittedly a little uncommon), it starts to fail due to missing dependencies.
I attempted to write a bash script that, given a target, figures out all of its dependencies and builds each of those dependencies explicitly with -j1, in order to validate that none of them are missing dependencies of their own. It appears to work with small and medium packages but fails on bigger ones such as uClibc, for example.
I am sharing my script here, as some people may understand better what I mean by reading code. I also hope that a more robust solution exists and can be shared back.
#!/bin/bash

TARGETS=$*
echo "TARGETS=$TARGETS"

for target in $TARGETS
do
    # Ask make for the rule that builds this target.
    RULE=$(make -j1 -n -p | grep "^$target:")
    if [ -z "$RULE" ]; then
        continue
    fi

    # Everything after the first space is the list of prerequisites.
    NEWTARGETS=${RULE#* }
    if [ -z "$NEWTARGETS" ]; then
        continue
    fi
    if [ "${NEWTARGETS}" = "${RULE}" ]; then
        # Leaf target with no prerequisites, we do not want to test it.
        continue
    fi

    echo "RULE=$RULE"
    # echo "NEWTARGETS=$NEWTARGETS"

    # Recurse into the prerequisites first.
    "$0" $NEWTARGETS
    if [ $? -ne 0 ]; then
        exit 1
    fi

    echo "Testing target $target"
    make clean && make -j1 "$target"
    if [ $? -ne 0 ]; then
        echo "Make parallel will fail with target $target"
        exit 1
    fi
done
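A hypothetical invocation, assuming the script above is saved as check-deps.sh and run from the directory containing the Makefile under test, would be:

./check-deps.sh all

Each prerequisite of all (and, recursively, their prerequisites) is then rebuilt from a clean tree with -j1; a target that fails to build on its own is relying on some other rule having run first, i.e. its own dependency list is incomplete.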

I'm not aware of any open source solution, but this is exactly the problem that ElectricAccelerator, a high-performance implementation of GNU make, was created to solve. It will execute the build in parallel and dynamically detect and correct for missing dependencies, so that the build output is the same as if it had been run serially. It can produce an annotated build log which includes details about the missing dependencies. For example, this simple makefile has an undeclared dependency between abc and def:
all: abc def

abc:
	echo PASS > abc

def:
	cat abc
Run this with emake instead of gmake and enable --emake-annodetail=history, and the resulting annotation file includes this:
<job id="Jf42015c0" thread="f4bfeb40" start="5" end="6" type="rule" name="def" file="Makefile" line="6" neededby="Jf42015f8">
<command line="7">
<argv>cat abc</argv>
<output>cat abc
</output>
<output src="prog">PASS
</output>
</command>
<depList>
<dep writejob="Jf4201588" file="/tmp/foo/abc"/>
</depList>
<timing invoked="0.356803" completed="0.362634" node="chester-1"/>
</job>
In particular the <depList> section shows that this job, Jf42015c0 (in other words, def), depends on job Jf4201588, because the latter modified the file /tmp/foo/abc.
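Once an undeclared edge like this has been identified, the fix in the makefile itself is simply to declare the prerequisite explicitly, for example:

def: abc
	cat abc

With that one extra prerequisite, no scheduler, parallel or not, can run def before abc exists.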
You can give it a try for free with ElectricAccelerator Huddle.
(Disclaimer: I'm the architect of ElectricAccelerator)

Related

Record shellcheck findings in Jenkins

I'm looking for a way to record violation findings of shellcheck in my Jenkins Pipeline script. I was not able to find something so far. For other tools (Java, Python), I'm using Warnings Next Generation, but it does not seem to support shellcheck, yet. I'd like to have the violations visualized within my Jenkins Job dashboard. Does anyone have experience with that? Or perhaps a ready to use custom tool for Warnings NG?
I did find a feasible solution myself. As suggested in the comments, shellcheck offers a checkstyle output format, which can be parsed and visualized with Warnings NG. The following pipeline stage definition works fine.
stage('Analyze') {
    steps {
        catchError(buildResult: 'SUCCESS') {
            sh """#!/bin/bash
                # The null command `:` only returns exit code 0 to ensure following task executions.
                shellcheck -f checkstyle *.sh > shellcheck.xml || :
            """
            recordIssues(tools: [checkStyle(pattern: 'shellcheck.xml')])
        }
    }
}
Running the build generates a nice trend diagram like the one below.
Running shellcheck on all files and merging the output into a single XML file didn't play well with recordIssues in my case.
I had to create an individual report for each source file to make it work.
stage('Shellcheck') {
    steps {
        catchError(
            buildResult: hudson.model.Result.SUCCESS.toString(),
            stageResult: hudson.model.Result.UNSTABLE.toString(),
            message: "shellcheck error detected, but allowing job to continue!")
        {
            sh '''
                # shellcheck with all files in a single xml doesn't play well with the jenkins report
                ret=0
                for file in $(grep -rl '^#!/.*/bash' src); do
                    echo shellcheck ${file}
                    mkdir -p .checkstyle/${file}/
                    shellcheck -f checkstyle ${file} > .checkstyle/${file}/shellcheck.xml || (( ret+=$? ))
                done
                exit ${ret}
            '''
        }//catchError
    }//steps
    post {
        always {
            recordIssues(tools: [checkStyle(pattern: '.checkstyle/**/shellcheck.xml')])
        }
    }//post
}//stage
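With this layout each source file gets its own report underneath .checkstyle/, for example (paths are hypothetical):

.checkstyle/src/deploy.sh/shellcheck.xml
.checkstyle/src/lib/common.sh/shellcheck.xml

and the .checkstyle/**/shellcheck.xml glob in recordIssues then picks them all up.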

bats - how can i echo the file name in a bats script for reporting

I have some bats scripts that I run to test some functionality.
How can I echo the bats file name in the script?
My bats script looks like:
#!/usr/bin/env bats

load test_helper

echo $BATS_TEST_FILENAME

@test "run cloned mission" {
    blah blah blah
}
in order for my report to appear as:
✓ run cloned mission
✓ run cloned mission
✓ addition using bc
---- TEST NAME IS xxx
✓ run cloned mission
✓ run cloned mission
✓ addition using bc
---- TEST NAME IS yyy
✓ run cloned mission
✓ run cloned mission
✓ addition using bc
but got the error
2: syntax error:
operand expected (error token is ".bats
2")
What is the correct way to do it?
I don't want to change the test names; I only want to echo the filename between the different tests.
Thanks.
TL;DR
Just output the file name from the setup function using a combination of prefixing the message with # and redirecting it to fd3 (documented in the project README).
#!/usr/bin/env bats

setup() {
    if [ "${BATS_TEST_NUMBER}" = 1 ]; then
        echo "# --- TEST NAME IS $(basename ${BATS_TEST_FILENAME})" >&3
    fi
}

@test "run cloned mission" {
    blah blah blah
}
All your options
Just use BASH
The simplest solution is to just iterate over all test files and output the filename yourself:
for file in $(find ./ -name '*.bats'); do
    echo "--- TEST NAME IS ${file}"
    bats "${file}"
done
The downside of this solution is that you lose the summary at the end. Instead, a summary is given after each individual file.
Use the setup function
The simplest solution within BATS is to output the file name from a setup function. I think this is the solution you are after.
The code looks like this:
setup() {
    if [ "${BATS_TEST_NUMBER}" = 1 ]; then
        echo "# --- TEST NAME IS $(basename ${BATS_TEST_FILENAME})" >&3
    fi
}
A few things to note:
The output MUST begin with a hash #
There MUST be a space after the hash
The output MUST be redirected to file descriptor 3 (i.e. >&3)
A check is added to only output the file name once (for the first test)
The downside here is that the output might confuse people as it shows up in red.
Use a skipped @test
The next solution would be to just add the following as the first test in each file:
#test "--- TEST NAME IS $(basename ${BATS_TEST_FILENAME})" {
skip ''
}
The downside here is that it adds to the number of skipped tests...
Use an external helper function
The only other solution I can think of would be to create a test helper that lives in global scope and keeps tracks of its state.
Such code would look something like this:
output-test-name-helper.bash
#!/usr/bin/env bash

create_tmp_file() {
    local -r fileName="$(basename ${BATS_TEST_FILENAME})"

    if [[ ! -f "${BATS_TMPDIR}/${fileName}" ]]; then
        touch "${BATS_TMPDIR}/${fileName}"
        echo "---- TEST NAME IS ${fileName}" >&2
    fi
}

remove_tmp_file() {
    rm "${BATS_TMPDIR}/$(basename ${BATS_TEST_FILENAME})"
}

trap remove_tmp_file EXIT

create_tmp_file
Which could then be loaded in each test:
#!/usr/bin/env bats

load output-test-name-helper

@test "run cloned mission" {
    return 0
}
The major downside here is that there is no guarantee where the output will end up.
Adding output from outside the @test, setup and teardown functions can lead to unexpected results.
Such code will also be called (at least) once for every test, slowing down execution.
Open a pull-request
As a last resort, you could patch the code of BATS yourself, open a pull-request on the BATS repository and hope this functionality will be supported natively by BATS.
Conclusion
Life is a bunch of tradeoffs. Pick a solution that most closely fits your needs.
I've figured out a way to do this, but it requires changing how you handle your individual setup in each file.
Create a helper file that defines a setup function that does as Potherca described above:
global.bash:
test_setup() { return 0; }

setup() {
    (($BATS_TEST_NUMBER==1)) \
        && echo "# --- $(basename "$BATS_TEST_FILENAME")" >&3
    test_setup
}
Then in your test files, instead of defining a setup function, you would just load 'global'.
If you need setup steps for a specific file, then instead of creating a setup function, you'd create a test_setup function.
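For illustration, a test file using this helper could look like the following (the test body and fixture directory are only hypothetical examples):

#!/usr/bin/env bats

load 'global'

# Per-file preparation; invoked by the shared setup in global.bash before each test.
test_setup() {
    mkdir -p "$BATS_TMPDIR/fixtures"   # hypothetical per-file fixture directory
}

@test "run cloned mission" {
    [ -d "$BATS_TMPDIR/fixtures" ]
}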
Putting the echo in setup outputs the file name after the test name.
What I wound up doing is adding the file name to the test name itself:
test "${BATS_TEST_FILENAME##*/}: should …" {
…
}
Also, if going the setup route, the condition can be avoided with:
function setup() {
    echo "# --- $(basename "$BATS_TEST_FILENAME")" >&3
    function setup() {
        …
    }
    setup
}

How setup Visual Studio Code for Run/Debug of F# projects/scripts?

I have tried to use Visual Studio Code to run a simple F# script.
I downloaded all the recent versions as of today and installed all the plugins from http://ionide.io/. Despite the nice animated gifs that show it working, I'm unable to figure out how to get the build of the code to work.
I create a .ionide file:
[Fake]
linuxPrefix = "mono"
command = "build.cmd"
build = "build.fsx"
But then, how do I install FAKE? So I do this from Xamarin and install it. OK, so now I have this build.fsx:
#r "packages/FAKE.4.12.0/tools/FakeLib.dll" // include Fake lib
RestorePackages()
// Properties
let buildDir = "./build/"
let testDir = "./test/"
let deployDir = "./deploy/"
// version info
let version = "0.2" // or retrieve from CI server
// Targets
Target "Clean" (fun _ ->
CleanDirs [buildDir; testDir; deployDir]
)
Target "fakeBuild" (fun _ ->
!! "./*.fsproj"
|> MSBuildRelease buildDir "Build"
|> Log "AppBuild-Output: "
)
Target "Default" (fun _ ->
trace "Hello World from FAKE"
)
// Dependencies
"Clean"
==> "fakeBuild"
==> "Default"
// start build
RunTargetOrDefault "Default"
I run the Fake:Build command and get:
No handler found for the command: 'fake.fakeBuild'. Ensure there is an activation event defined, if you are an extension.
And now I'm lost.
Install yeoman: ">ext install yeoman".
Then set up a standalone project with >yo
and follow the instructions, saying yes to Paket and FAKE.
Then run >paket init
and >paket install, and it should work.
To get the >, use Ctrl+Shift+P.
For the Atom IDE you also have to install the yeoman npm package, which I describe here: http://www.implementingeventsourcingwithfsharp.com/?p=61
How to install the package is described here: https://www.npmjs.com/package/generator-fsharp
Not sure you need it for Visual Studio Code.
Hope this helps.
The usual way of doing this is to have a bash script that calls your F# script. Your bash script should look something like this:
#!/bin/bash

if test "$OS" = "Windows_NT"
then # For Windows
    .paket/paket.bootstrapper.exe
    exit_code=$?
    if [ $exit_code -ne 0 ]; then
        exit $exit_code
    fi

    .paket/paket.exe restore
    exit_code=$?
    if [ $exit_code -ne 0 ]; then
        exit $exit_code
    fi

    # pass the script arguments through to FAKE
    packages/FAKE/tools/FAKE.exe "$@" --fsiargs build.fsx
else # For non-Windows
    mono .paket/paket.bootstrapper.exe
    exit_code=$?
    if [ $exit_code -ne 0 ]; then
        exit $exit_code
    fi

    mono .paket/paket.exe restore
    exit_code=$?
    if [ $exit_code -ne 0 ]; then
        exit $exit_code
    fi

    mono packages/FAKE/tools/FAKE.exe "$@" --fsiargs build.fsx
fi
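Assuming the script above is saved as build.sh in the project root next to build.fsx, a typical first run would be:

chmod +x build.sh   # once, to make the wrapper executable
./build.sh          # restores packages via Paket, then FAKE runs the default target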
Now, you can define your build steps in your build.fsx script
#r "packages/FAKE/tools/FakeLib.dll"
open Fake
// Targets
// Dependencies
// Default target
Hope it helps.
I got it working.
That said, I'm almost as lost as you; the available documentation is not very complete IMO. Here's what you have to do (you tagged osx-elcapitan so I'm assuming OS X):
Get rid of the .ionide file; you only need it if you don't want to use the defaults. Let's stick to the defaults for now to keep things simple.
Make sure the path to FakeLib.dll is correct in your build.fsx.
Create a file named build.sh with the following script (make sure the path to FAKE.exe is right):
mono packages/FAKE.4.12.0/tools/FAKE.exe build.fsx "$@"
If it fails again, post the output error (click the OPEN button at the top for the FAKE command)
PS: Your question is two months old so I apologize if you know all this already.

bash: wrong behavior in for... loop together with a test statement

I am trying to test whether certain files, listed in a set of text files, are present in a certain directory. Every once in a while (and I am quite certain I use the same statements every time) I get an error complaining that the echo command cannot be found.
The text files I have in my directory /audio/playlists/ are named according to the date on which they are supposed to be used: 20130715.txt, for example, for today:
me@computer:/some/dir# ls /audio/playlists/
20130715.txt 20130802.txt 20130820.txt 20130907.txt 20130925.txt
20130716.txt 20130803.txt 20130821.txt 20130908.txt 20130926.txt
(...)
me@computer:/some/dir# cat /audio/playlists/20130715.txt
#A Comment line goes here
00:00:00 141-751.mp3
00:03:35 141-704.mp3
00:06:42 140-417.mp3
00:10:46 139-808.mp3
00:15:13 136-126.mp3
00:20:26 071-007.mp3
(...)
23:42:22 136-088.mp3
23:46:15 128-466.mp3
23:50:15 129-592.mp3
23:54:29 129-397.mp3
So much for the facts. The following statement, which lets me test whether all files referenced in all of the text files in the given directory actually exist in the directory /audio/mp3s/, produces an error:
me@computer:/some/dir# for i in $(cat /audio/playlists/*.txt|cut -c 10-16|sort|uniq); do [ -f "/audio/mp3s/$i.mp3" ] || echo $i; done
 echo: command not found
me@computer:/some/dir#
I would guess bash wants to complain about the "A Comment" line (actually the extracted string " line ") not being a file, but why would that cause echo not to be found? Again, mostly this works, but every so often I get this error. Any help is greatly appreciated.
That space before echo isn't U+0020, it's U+00A0. And indeed, the command " echo" doesn't exist.
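For anyone hunting down such characters, one approach is to search a file for the UTF-8 encoding of U+00A0, which is the byte pair 0xC2 0xA0. A minimal sketch, assuming bash and GNU grep/sed, with myscript.sh as a placeholder file name:

grep -n $'\xc2\xa0' myscript.sh        # list lines containing a non-breaking space
sed -i $'s/\xc2\xa0/ /g' myscript.sh   # replace each one with an ordinary space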

Ebuild example for project in Go

I intend to create a Gentoo ebuild for a project written in Go, and I'm wondering whether this has been done before.
As building and installing a Go project from source seems sufficiently different compared to projects in other programming languages, it would be useful for me to compare to an existing ebuild for figuring out best practices.
However, in the current portage tree I can't find any package that would depend on "dev-lang/go". Is there such an ebuild, perhaps in an overlay?
How about looking at go-overlay for example ebuilds? They wrote a special eclass for building Go applications and libraries in a few lines. Let me use their dev-util/flint-0.0.4 ebuild as an illustration (all comments are mine):
EAPI=6

GOLANG_PKG_IMPORTPATH="github.com/pengwynn"
GOLANG_PKG_VERSION="c3a5d8d9a2e04296fba560d9a22f763cff68eb75"

# Many Go projects don't pin versions of their dependencies,
# so it may have to be done here. You might not need this step if
# the upstream already uses 'godep' or a similar tool.
GOLANG_PKG_DEPENDENCIES=(
	"github.com/codegangsta/cli:142e6cd241"
	"github.com/fatih/color:1b35f289c4"
	"github.com/octokit/go-octokit:4408b5393e"
	"github.com/fhs/go-netrc:4422b68c9c"
	"github.com/jingweno/go-sawyer:1999ae5763"
	"github.com/shiena/ansicolor:264b056680"
	"github.com/jtacoma/uritemplates:0a85813eca"
)

# Since many projects don't require custom build steps,
# this single line may be enough.
inherit golang-single

# Nothing special about these variables.
DESCRIPTION="Check your project for common sources of contributor friction"
HOMEPAGE="https://${GOLANG_PKG_IMPORTPATH}/${PN}"
LICENSE="MIT"
KEYWORDS="amd64 x86 arm"

# Prevent simultaneous installation with 'dev-go/flint'.
# Honestly, I was unable to find that package on the Internet.
SLOT="0"
DEPEND="!dev-go/${PN}"
Here is a working example of an ebuild which installs a Go project:
https://github.com/timboudreau/gentoo/blob/master/net-misc/syncthing/syncthing-0.11.7.ebuild
It is possible. I just made one in my overlay. It was a little bit painful, but it works.
There are a few important things that have to be done.
Create the golang eclasses in your repository if you don't have go-overlay added to your system.
The GOLANG_PKG_IMPORTPATH variable specifies the GitHub profile from which the source code will be downloaded.
The GOLANG_PKG_DEPENDENCIES variable specifies the GitHub repositories and particular commits of all dependencies.
inherit golang-single imports the mentioned eclass and at the same time adds dev-lang/go to the dependencies.
It looks like there's an existing, working ebuild.
From: https://gist.github.com/matsuu/233858 (and also found at http://git.overlays.gentoo.org/gitweb/?p=proj/glentoo-overlay.git;a=blob_plain;f=dev-lang/golang-platform/golang-platform-9999.ebuild;hb=HEAD)
# Copyright 1999-2009 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: $
EAPI="2"
inherit elisp-common eutils mercurial toolchain-funcs
DESCRIPTION="The Go Programming Language"
HOMEPAGE="http://golang.org/"
SRC_URI=""
EHG_REPO_URI="https://go.googlecode.com/hg/"
EHG_REVISION="release"
LICENSE="BSD"
SLOT="0"
KEYWORDS="~amd64 ~x86"
IUSE="emacs vim-syntax"
RESTRICT="test"
RDEPEND="sys-devel/gcc"
DEPEND="${RDEPEND}
emacs? ( virtual/emacs )
sys-devel/bison
sys-apps/ed"
S="${WORKDIR}/hg"
ENVFILE="${WORKDIR}/50${PN}"
src_prepare() {
	GOBIN="${WORKDIR}/bin"
	mkdir -p "${GOBIN}" || die
	sed -i \
		-e "/^GOBIN=/s:=.*:=${GOBIN}:" \
		-e "/MAKEFLAGS=/s:=.*:=${MAKEOPTS}:" \
		src/make.bash || die
	sed -i \
		-e "/^CFLAGS=/s:-O2:${CFLAGS}:" \
		src/Make.conf || die
	case ${ARCH} in
		x86)
			GOARCH="386"
			;;
		*)
			GOARCH="${ARCH}"
			;;
	esac
	case ${CHOST} in
		*-darwin*)
			GOOS="darwin"
			;;
		*)
			GOOS="linux"
			;;
	esac
	# *-nacl*)
	# 	GOOS="nacl"
	# 	;;
	cat > "${ENVFILE}" <<EOF
GOROOT="/usr/$(get_libdir)/${PN}"
GOARCH="${GOARCH}"
GOOS="${GOOS}"
EOF
	. "${ENVFILE}"
	export GOBIN GOROOT GOARCH GOOS
}

src_compile() {
	cd src
	PATH="${GOBIN}:${PATH}" GOROOT="${S}" CC="$(tc-getCC)" ./make.bash || die
	if use emacs ; then
		elisp-compile "${S}"/misc/emacs/*.el || die
	fi
}

src_test() {
	cd src
	PATH="${GOBIN}:${PATH}" GOROOT="${S}" CC="$(tc-getCC)" ./run.bash || die
}

src_install() {
	dobin "${GOBIN}"/* || die
	insinto "${GOROOT}"
	doins -r pkg || die
	if use emacs ; then
		elisp-install ${PN} "${S}"/misc/emacs/*.el* || die "elisp-install failed"
	fi
	if use vim-syntax ; then
		insinto /usr/share/vim/vimfiles/plugin
		doins "${S}"/misc/vim/go.vim || die
	fi
	doenvd "${ENVFILE}" || die
	dodoc AUTHORS CONTRIBUTORS README || die
	dohtml -r doc/* || die
}

pkg_postinst() {
	elog "please don't forget to source /etc/profile"
}
Sorry, I haven't tested it as I don't have a running Gentoo instance right now. Hope it works.

Resources