Integrating Go and Bazel tests

In my CI system, I have various Go scripts that I run to analyze my Go code. For example, I have a script that validates that various main files can start a long-running app successfully. For this I run the script via go run startupvalidator -pkgs=pkg1,pkg2,pkg3. I am interested in using Bazel here so I can take advantage of its cache: if pkg1 has not changed, startupvalidator would hit the cache for pkg1 and only do fresh runs for pkg2 and pkg3.
I thought about a couple of different ways to do this, but none of them feel correct. Is there a "best" way to accomplish this? Is this a reasonable use case for Bazel?
I thought about creating a bash script where I run something like:
go run startupvalidator $1
With a BUILD.bazel file containing
sh_binary(
    name = "startupvalidator-sh",
    srcs = [":startupvalidator.sh"],
    deps = [
        "//go/path/to/startupvalidator",
    ],
)
I also thought about placing a similar sh_test in the BUILD.bazel file for each pkg1, pkg2, and pkg3 so that I could run bazel run //go/pkg1:startupvalidator.
However, this doesn't actually work. Does anyone have feedback on how I should go about this? Any directions or pointers are appreciated.

To take advantage of caching for test results, you need a *_test target which you run with bazel test. Maybe the only part you're missing is that bazel run simply runs a binary (even if it's a test binary), while bazel test looks for an up-to-date test result, which means it can use the cache?
You also need to split up the binary so that changing the code in pkg2 doesn't affect the test action for pkg1. The action's key in the cache includes the contents of all its input files, the command being run, etc. I'm not sure whether your startupvalidator has the various main functions compiled into it, or whether it looks for the binaries at runtime. If it compiles them in, you'll need to build separate ones. If it loads the files at runtime, put the files it looks for in the data attribute of your test rule so they're part of the inputs to the test action.
I'd do something like this in pkg1 (assuming it's loading files at runtime; if they're compiled in then you can just make separate go_test targets):
sh_test(
    name = 'startupvalidator_test',
    srcs = ['startupvalidator_test.sh'],
    deps = ['@bazel_tools//tools/bash/runfiles'],
    data = ['//go/path/to/startupvalidator', ':package_main'],
)
with a startupvalidator_test.sh which looks like:
# --- begin runfiles.bash initialization v2 ---
# Copy-pasted from the Bazel Bash runfiles library v2.
set -uo pipefail; f=bazel_tools/tools/bash/runfiles/runfiles.bash
source "${RUNFILES_DIR:-/dev/null}/$f" 2>/dev/null || \
  source "$(grep -sm1 "^$f " "${RUNFILES_MANIFEST_FILE:-/dev/null}" | cut -f2- -d' ')" 2>/dev/null || \
  source "$0.runfiles/$f" 2>/dev/null || \
  source "$(grep -sm1 "^$f " "$0.runfiles_manifest" | cut -f2- -d' ')" 2>/dev/null || \
  source "$(grep -sm1 "^$f " "$0.exe.runfiles_manifest" | cut -f2- -d' ')" 2>/dev/null || \
  { echo>&2 "ERROR: cannot find $f"; exit 1; }; f=; set -e
# --- end runfiles.bash initialization v2 ---

exec "$(rlocation workspace/go/path/to/startupvalidator)" \
    -main="$(rlocation workspace/pkg1/package_main)"
I'm assuming that package_main is the thing loaded by startupvalidator. Bazel is set up to pass full paths to dependencies like that to other binaries, so I'm pretending that there's a new flag that takes the full path instead of just the package name. The shell script uses runfiles.bash to locate the various files.
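With one of these targets in each package, CI can exercise them all in a single invocation; Bazel caches each test result separately, so only the packages whose inputs changed are re-run (a usage sketch, with target names taken from the example above):

# only packages whose inputs changed since the last run are re-executed
bazel test //go/pkg1:startupvalidator_test //go/pkg2:startupvalidator_test //go/pkg3:startupvalidator_test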
If you want to deduplicate this between the packages, I would write a macro that uses a genrule to generate the shell script.

Related

Aggregating `bazel test` reports when testing many targets

I am trying to aggregate all the test.xml reports generated after a bazel test run. The idea is to then upload this full report to a CI platform with a nicer interface.
Consider the following example
$ find .
foo/BUILD
bar/BUILD
$ bazel test //...
This might generate
./bazel-testlogs/foo/tests/test.xml
./bazel-testlogs/foo/tests/... # more
./bazel-testlogs/bar/tests/test.xml
./bazel-testlogs/bar/tests/... # more
I would love to know if there is a better way to aggregate these test.xml files into a single report.xml file (or the equivalent). This way I only need to publish 1 report file.
Current solution
The following is totally viable; I just want to make sure I am not missing some obvious built-in feature.
find ./bazel-testlogs | grep 'test.xml' | xargs [publish command]
In addition, I will check out the JUnit output format, and see if just concatenating the reports is sufficient. This might work much better.
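Note that each test.xml is a complete JUnit document with its own XML declaration and root element, so naive concatenation produces invalid XML. One option is to splice the files under a single root; a rough sketch (it assumes every test.xml uses a testsuites root element and contains no CDATA with those literal tags):

# strip each file's XML declaration and testsuites wrapper, then
# re-wrap the remaining <testsuite> elements under one root
{
  echo '<?xml version="1.0" encoding="UTF-8"?>'
  echo '<testsuites>'
  find ./bazel-testlogs -name 'test.xml' -exec sed \
      -e '/<?xml/d' \
      -e 's/<testsuites[^>]*>//g' \
      -e 's/<\/testsuites>//g' {} \;
  echo '</testsuites>'
} > report.xml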

Reading in a list of file paths in a Dockerfile and copying each file from one build stage to another

Overview:
I am currently trying to dockerize my C++ program in Docker. I am using a multi-stage build structure, where the first stage is responsible for downloading all of the required dependencies and then building/installing them. The second (and final) stage then copies the required executable and shared libraries from the first stage to create the final image. It should also be noted that I need to use this multi-stage structure, as the builder stage uses private ssh keys to clone private git repositories and therefore needs to be purged from the final image.
Problem:
Because the first stage installs packages via apk add and also clones and installs packages from git, the dependencies for my final executable end up in various locations in the filesystem (mainly /usr/lib, /usr/local/lib, and /lib). To handle this, in the build stage I run an ldd command on my main executable with some regex that pipes all of the .so library paths into a text file. I then copy this text file as well as the main executable into the second stage, where I would like to read the contents of the text file and copy each line (library path) from the build stage to the second stage. However, Dockerfiles do not support looping without a shell script, so I don't see a way to copy the libraries from the build stage to the final stage. I have attached some code below. Thank you for the help!
#
# ... instructions that install libraries via apk-add and from source
#
# Generate a list of all of the libraries our bin directory depends on and pipe
# it into a text file so our later build stages can copy the absolute minimum
# necessary libraries. The sed steps: drop ldd's header lines (no leading tab),
# strip the leading tab, keep only the resolved path after "=>", and drop the
# trailing "(0x...)" load address.
RUN find /root/$PROJ_NAME/bin -type f -perm /a+x -exec ldd {} \; \
    | grep so \
    | sed -e '/^[^\t]/ d' \
    | sed -e 's/\t//' \
    | sed -e 's/.*=..//' \
    | sed -e 's/ (0.*)//' \
    | sort \
    | uniq \
    >> /root/$PROJ_NAME/LIBRARY_DEPS.txt
#
# STAGE 2: Build fetched repos and install apk packages
#
FROM alpine:edge
ARG PROJ_NAME
ARG USER_ID
ARG GROUP_ID
# Copy over our main project directory from STAGE 1, which also includes
# the ldd paths of our main executable
COPY --from=builder /root/$PROJ_NAME /root/$PROJ_NAME
# PROBLEM: This is where I am stuck...would like to copy all of the libraries
# whose paths are specified in LIBRARY_DEPS.txt from the builder stage to the current stage
COPY --from=builder (cat /root/$PROJ_NAME/LIBRARY_DEPS.txt) ./
ENTRYPOINT ["./MainExecutable"]
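One possible workaround (a sketch, untested): instead of expanding the list in the final stage, stage the listed libraries into a single directory tree at the end of the builder stage, and copy that tree wholesale. Assuming LIBRARY_DEPS.txt holds one absolute path per line, a builder-stage RUN could fold in something like:

# copy each listed library into a staging tree, preserving its
# original directory layout under /root/$PROJ_NAME/deps
while read -r lib; do
    mkdir -p "/root/$PROJ_NAME/deps$(dirname "$lib")"
    cp "$lib" "/root/$PROJ_NAME/deps$lib"
done < "/root/$PROJ_NAME/LIBRARY_DEPS.txt"

The final stage then needs only a single COPY --from=builder /root/$PROJ_NAME/deps/ / to put every library back at its original absolute path.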

Bash script to obtain the newest file X in a folder and create a new variable called X+1

I am trying to create a loop in a Bash script for a series of data migrations:
At the beginning of every step, the script should get the name of the newest file in a folder
called "migrationfiles/", store it in the variable "migbefore", and create a new variable, "migafter", whose number is one higher:
Example: if the "migrationfiles/" folder contains the following files:
migration.pickle1 migration.pickle2 migration.pickle3
The variable "migbefore" and migafter should have the following value:
migbefore=migration.pickle3
migafter=migration.pickle4
At the end of every step, the function "metl", which is in charge of the data migration, uses the file named by "migbefore" to load the data and creates one new file, named by "migafter", in the "migrationfiles/" folder. So in this case, the new file created will be called:
"migration.pickle4"
The code I intend to use is the following:
#!/bin/bash
migbefore=0
migafter=0
for y in testappend/*
for x in migrationfiles/*
do
migbefore=migration.pickle(oldest)
migafter=migbefore+1
done
do
metl -m migrationfiles/"${migbefore}"
-t migrationfiles/"${migafter}"
-s "${y}"
config3.yml
done
Does anyone know how I could write the first loop (the one that searches for the newest file in the "migrationfiles/" folder) and then set "migafter" to "migbefore" plus 1?
I think this might do what you want.
#!/bin/bash
count=0
prefix=migration.pickle
migbefore=$prefix$((count++))
migafter=$prefix$((count++))
for y in testappend/*; do
    echo metl -m migrationfiles/"${migbefore}" \
        -t migrationfiles/"${migafter}" \
        -s "${y}" \
        config3.yml
    migbefore=$migafter
    migafter=$prefix$((count++))
done
Copy with Numbered Backups
It's hard to tell what you're really trying to do here, and why. However, you might be able to make life simpler by using the --backup flag of the cp command. For example:
cp --backup=numbered testappend/migration.pickle migrationfiles/
This will ensure that you have a sequence of migration files like:
migration.pickle
migration.pickle.~1~
migration.pickle.~2~
migration.pickle.~3~
where the earlier versions have smaller ordinal numbers, while the latest version has no ordinal extension. It's a pretty simple system, but it works well for a wide variety of use cases. YMMV.
# configuration:
path=migrationfiles
prefix=migration.pickle
# determine number of last file:
last_number=$( find "${path}" -name "${prefix}*" | sed -e "s/.*${prefix}//g" | sort -n | tail -1 )
# put together the file names:
migbefore=${prefix}${last_number}
migafter=${prefix}$(( last_number + 1 ))
# test it:
echo $migbefore $migafter
This should work even if there are no migration files yet. In that case, the value of migbefore is just the prefix and does not point to a real file.
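If metl can't cope with a nonexistent input file on that very first run, you could add a small guard (a sketch; it assumes metl tolerates omitting -m, and ${y} is the loop variable from the answer above):

# skip the -m flag when migbefore doesn't name a real file yet
if [ -f "${path}/${migbefore}" ]; then
    metl -m "${path}/${migbefore}" -t "${path}/${migafter}" -s "${y}" config3.yml
else
    metl -t "${path}/${migafter}" -s "${y}" config3.yml
fi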

Can I write some output to the current document and some to shell output?

I'm using gedit and trying to wrestle it into a real IDE. What I want is to map Ctrl+Shift+| to run my tidy tool when the file type is "text/html" and my autopep8 tool when the file type is "text/x-python".
As it turns out (and I think this is a bug), gedit doesn't care what file type you've specified: if you have a key combo set, it will run the tool whether or not the file type matches. Relatedly, but maybe not a bug, I can only set the keyboard shortcut to one external tool.
So I wrote one external tool that runs on Ctrl+Shift+| and runs autopep8 if the document is Python and tidy if the document is HTML:
#!/bin/sh
# [Gedit Tool]
# Save-files=document
# Shortcut=<Primary><Shift>bar
# Output=replace-document
# Name=Tidy by Filetype
# Applicability=all
# Input=document
if [ "$GEDIT_CURRENT_DOCUMENT_TYPE" = "text/x-python" ]; then
    autopep8 - -v -a
elif [ "$GEDIT_CURRENT_DOCUMENT_TYPE" = "text/html" ]; then
    # -i auto indents, -w 80 wraps at 80 chars, -c replaces font tags w/ CSS
    exec tidy -utf8 -i -w 80 -c "$GEDIT_CURRENT_DOCUMENT_NAME"
elif [ "$GEDIT_CURRENT_DOCUMENT_TYPE" = "text/css" ]; then
    : # TK CSS tidy (no-op so the branch is valid shell)
else
    echo "This seems to be $GEDIT_CURRENT_DOCUMENT_TYPE I don't know how to tidy that."
fi
That second-to-last line is the one that is breaking my heart. If I don't define any action for that last else, it just deletes my existing document. If I run Ctrl+Shift+| and the file type isn't one that I've accounted for, I'd like it to report the file type to the shell output, not replace the document contents with
This seems to be application/x-shellscript I don't know how to tidy
that.
Is there a way to write my tool so that I write some output to the shell and some to the document?
Having no experience with trying to make gedit more usable, I did find this: https://wiki.gnome.org/Apps/Gedit/Plugins/ExternalTools?action=AttachFile&do=view&target=external-tools-manager-with-gedit-3.png
The key construct here is
echo "..." > /dev/stderr
At least that should stop you from clobbering the contents of your file if the mime-type doesn't match, but I'm not sure where it'll print out (hopefully somewhere sane).
Edit (for a more complete answer):
You will also need to cat $GEDIT_CURRENT_DOCUMENT_NAME to replace the file contents with itself. This was pointed out by @markku-k (thanks!)
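Putting the two pieces together, the fallback branch might look like this (a sketch; it relies on Save-files=document so the on-disk file matches the buffer):

if [ "$GEDIT_CURRENT_DOCUMENT_TYPE" = "text/x-python" ]; then
    autopep8 - -v -a
else
    # complain on stderr so the message lands in the shell output pane...
    echo "This seems to be $GEDIT_CURRENT_DOCUMENT_TYPE I don't know how to tidy that." > /dev/stderr
    # ...and write the saved document back to stdout unchanged, so
    # Output=replace-document does not clobber the buffer
    cat "$GEDIT_CURRENT_DOCUMENT_NAME"
fi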

Join multiple Coffeescript files into one file? (Multiple subdirectories)

I've got a bunch of .coffee files that I need to join into one file.
I have folders set up like a rails app:
/src/controller/log_controller.coffee
/src/model/log.coffee
/src/views/logs/new.coffee
CoffeeScript has a command that lets you join multiple CoffeeScript files into one file, but it only seems to work with one directory. For example, this works fine:
coffee --output app/controllers.js --join --compile src/controllers/*.coffee
But I need to be able to include a bunch of subdirectories kind of like this non-working command:
coffee --output app/all.js --join --compile src/*/*.coffee
Is there a way to do this? Is there a UNIXy way to pass in a list of all the files in the subdirectories?
I'm using terminal in OSX.
They all have to be joined in one file because otherwise each separate file gets compiled & wrapped with this:
(function() { }).call(this);
Which breaks the scope of some function calls.
From the CoffeeScript documentation:
-j, --join [FILE] : Before compiling, concatenate all scripts together in the order they were passed, and write them into the specified file. Useful for building large projects.
So, you can achieve your goal at the command line (I use bash) like this:
coffee -cj path/to/compiled/file.js file1 file2 file3 file4
where file1 through fileN are the paths to the CoffeeScript files you want to compile.
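For the "UNIXy way" part of the question, the shell can collect the file list for you; a sketch (it assumes none of the paths contain spaces, since the command substitution word-splits):

# gather every .coffee file under src/ in a stable order and hand
# the whole list to a single --join compile
coffee -cj app/all.js $(find src -type f -name '*.coffee' | sort)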
You could write a shell script or Rake task to combine them together first, then compile. Something like:
find . -type f -name '*.coffee' -print0 | xargs -0 cat > output.coffee
Then compile output.coffee
Adjust the paths to your needs. Also make sure that the output.coffee file is not in the same path you're searching with find or you will get into an infinite loop.
http://man.cx/find
http://www.rubyrake.org/tutorial/index.html
Additionally you may be interested in these other posts on Stackoverflow concerning searching across directories:
How to count lines of code including sub-directories
Bash script to find a file in directory tree and append it to another file
Unix script to find all folders in the directory
I've just released an alpha version of CoffeeToaster; I think it may help you.
http://github.com/serpentem/coffee-toaster
The easiest way is to use the coffee command-line tool:
coffee --output public --join --compile app
app is my working directory holding multiple subdirectories, and public is where the joined output .js file will be placed. It's easy to automate this process if you're writing your app in Node.js.
This helped me (-o output directory, -j join to project.js, -cw compile and watch coffeescript directory in full depth):
coffee -o web/js -j project.js -cw coffeescript
Use cake to compile them all into one (or more) resulting .js file(s). The Cakefile serves as the configuration that controls the order in which your coffee scripts are compiled - quite handy with bigger projects.
Cake is quite easy to install and set up; invoking cake from vim while you are editing your project is then simply
:!cake build
and you can refresh your browser and see results.
As I'm also busy learning the best way of structuring files and using CoffeeScript in combination with Backbone and cake, I have created a small project on GitHub to keep as a reference for myself; maybe it will help you too around cake and some basic things. All compiled files are in the www folder so that you can open them in your browser, and all source files (except for the cake configuration) are in the src folder. In this example, all .coffee files are compiled and combined into one output .js file which is then included in the html.
Alternatively, you could use the --bare flag, compile to JavaScript, and then perhaps wrap the JS if necessary. But this would likely create problems; for instance, if you have one file with the code
i = 0
foo = -> i++
...
foo()
then there's only one var i declaration in the resulting JavaScript, and i will be incremented. But if you moved the foo function declaration to another CoffeeScript file, then its i would live in the foo scope, and the outer i would be unaffected.
So concatenating the CoffeeScript is a wiser solution, but there's still potential for confusion there; the order in which you concatenate your code is almost certainly going to matter. I strongly recommend modularizing your code instead.
