Can stack _not_ squelch output when building multiple targets? - haskell-stack

When using stack in a project with multiple cabal files, there seems to be a difference in verbosity when using a stack command on a single target vs multiple targets.
This seems to happen for all sorts of stack commands, so I'll just give a few examples.
With stack build sub-project-1, stack will show progress on each of the modules it compiles. But stack build sub-project-1 sub-project-2 or a plain stack build will only show which target it is working on, not the individual modules.
Another example I have found recently is in stack test. I wanted to get a full list of all the Tasty tests I have, so I ran stack test --test-arguments="-l". But all it printed out was:
sub-project-1-0.0.0.1: test (suite: run-tests, args: -l)
sub-project-2-0.0.0.1: test (suite: run-tests, args: -l)
Completed 2 action(s).
Log files have been written to: /projdir/.stack-work/logs/
Even if I manually specify the targets I want, e.g. stack test sub-project-1 sub-project-2 --test-arguments="-l", it gives me the same unhelpful output.
I have to run stack with exactly one target: stack test sub-project-1 --test-arguments="-l" to get any of the output I am looking for:
sub-project-1-0.0.0.1: test (suite: run-tests, args: -l)
all tests/tasty tests/this is a test
all tests/tasty tests/this is another test
Is there anything I can do to get stack not to squelch the output when running against more than one package? Stack's verbosity levels don't seem to have anything to do with this; they treat -v as meaning "print out debugging statements".

Thanks to sjakobi for the suggestion.
Recent versions of stack have an option: --dump-logs
This option will show the output from each action stack takes on each target.
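For example, to get the Tasty listing above for every package in a single run:
stack test --dump-logs --test-arguments="-l"
If I remember the configuration correctly, the same behaviour can also be made the default by setting dump-logs: all in stack.yaml, but treat that key as an assumption to verify against the documentation for your stack version.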

Related

`go test` only prints "open : no such file or directory"

I rewrote a program and removed a lot of code by commenting it out. After doing that and adding some tests, the program no longer runs.
Running go build produces no errors at all.
But when I run go test, I only get some strange output:
$ go test
2020/05/05 19:14:24 open : no such file or directory
exit status 1
FAIL fwew_lib 0.002s
This error occurs before a single test is even run, so it seems to come from the test setup itself.
Why is no file name given for the file that was not found? Any idea what causes this error and how to fix it?
The error also occurs on multiple machines, on both Windows and Linux, and with Go 1.14.2 and Go 1.13.7.
To get this error yourself:
Repo: https://github.com/knoxfighter/fwew/tree/library
Branch: library
Just download the branch and run go test
Your fork is missing this line from the parent:
texts["dictionary"] = filepath.Join(texts["dataDir"], "dictionary.txt")
But your fork still has this line, which depends on the one above:
Version.DictBuild = SHA1Hash(texts["dictionary"])
So SHA1Hash "fatals" out because you are essentially passing it an empty string.
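To make the failure mode concrete, here is a minimal, self-contained Go sketch; the body of SHA1Hash is a guessed reconstruction of the helper named above, not code from the repo. With the filepath.Join line missing, texts["dictionary"] is the empty string, os.Open("") fails with "open : no such file or directory", and log.Fatal aborts the process before any test runs, which matches the output in the question.

package main

import (
	"crypto/sha1"
	"fmt"
	"io"
	"log"
	"os"
)

// SHA1Hash is a hypothetical reconstruction of the helper mentioned above.
func SHA1Hash(path string) string {
	f, err := os.Open(path) // os.Open("") fails with: open : no such file or directory
	if err != nil {
		log.Fatal(err) // prints the timestamped message and exits with status 1
	}
	defer f.Close()
	h := sha1.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	texts := map[string]string{"dataDir": "."}
	// Without the line from the parent repo, texts["dictionary"] stays "":
	// texts["dictionary"] = filepath.Join(texts["dataDir"], "dictionary.txt")
	fmt.Println(SHA1Hash(texts["dictionary"]))
}

Restoring the missing line (and importing path/filepath) gives SHA1Hash a real path, and the hash computation succeeds.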

How to detect compiler warnings in gitlab CI

While setting up CI builds on our GitLab server, I can't seem to find information on how to set up detection of compiler warnings. Example build output:
[100%] Building CXX object somefile.cpp.o
/home/gitlab-runner/builds/XXXXXXX/0/group/project/src/somefile.cpp:14:2: warning: #warning ("This is a warning to test gitlab") [-Wcpp]
#warning("This is a warning to test gitlab")
^
However, the build result is success instead of warning or something similar. Ideally the results would also be visible on the feature's merge request (and block the merge if possible).
I can't imagine I'm the only one trying to achieve this, so I am probably looking in the wrong direction. The 'best' solution I found is to somehow manually parse the build output and generate a JUnit report.
How would I go about doing this without letting the build job itself fail on warnings? I would still like the job to fail when compiler errors occur.
Update
For anyone stumbling across this question later, and in lieu of a best practice, this is how I solved it:
stages:
  - build
  - check-warnings

shellinspector:
  stage: build
  script:
    - cmake -Bcmake-build -S.
    - make -C cmake-build > >(tee make.output) 2> >(tee make.error)
  artifacts:
    paths:
      - make.output
      - make.error
    expire_in: 1 week

analyse build:
  stage: check-warnings
  script:
    - "if [[ $(cat make.error | grep warning -i) ]]; then cat make.error; exit 1; fi"
  allow_failure: true
The first stage stores the build's error output in make.error; the next stage then greps that file for warnings and fails, and with allow_failure: true this produces the "passed with warnings" pipeline status I was looking for.
It seems that the solution to this kind of need (see e.g. the issue "Add new CI state: has-warnings", https://gitlab.com/gitlab-org/gitlab-runner/issues/1224) has been to introduce the allow_failure option: one job is the compilation itself, which is not allowed to fail (if it does, the pipeline fails), and another job detects the warnings and is allowed to fail (if a warning is found, the pipeline still passes).
The possibility of defining a warning regex in .gitlab-ci.yml has also been requested, but it does not exist yet.

How to find out version of ghc that corresponds to snapshot without causing download of ghc

I need a way to determine the GHC version associated with a given snapshot without having to download GHC.
I could run this command to get the information, but unfortunately it downloads GHC before printing out the version:
stack query compiler wanted
I need this because I am optimising a CI build and having a way to obtain the ghc version allows me to use that as a key to retrieve the appropriate build cache. The fact that trying to query this information triggers a download undermines any optimisation I attempt.
Not ideal, but one can parse the stdout or stderr of
stack --resolver SNAPSHOT --no-install-ghc query
or stack --resolver SNAPSHOT --no-install-ghc path --compiler-exe.
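As a sketch of that parsing approach (assumptions: lts-18.0 resolves to GHC 8.10.4, and a ghc-X.Y.Z string appears somewhere in stack's stdout or stderr, which may differ between stack versions):
stack --resolver lts-18.0 --no-install-ghc path --compiler-exe 2>&1 | grep -oE 'ghc-[0-9]+\.[0-9]+\.[0-9]+' | head -n 1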
Or here is a small program using pantry to get the info:
import Pantry
main :: IO ()
main = do
  rsnap <-
    runPantryApp $ do
      let rurl = defaultSnapshotLocation (LTS 18 0)
      url <- completeSnapshotLocation rurl
      loadSnapshot url
  print $ rsCompiler rsnap
@mgsloan gave me enough information to put together a solution, which can be found here:
https://github.com/haskell-works/hw-dsv/blob/master/scripts/ghc-version

Docker: When Checkstyle detects errors, abort

Hi, I am editing my Android Docker image, which builds my Android APK.
I want to add a Checkstyle step that aborts the build if any warnings occur.
I have it working in that it runs Checkstyle, but it just outputs warnings. I do not see a way of turning these into errors or halting the build the way Lint does. What should I add to my Dockerfile?
java -jar ./styleguide/checkstyle-7.7-all.jar -c ./styleguide/rules/google_checks.xml .
As I do not use the Google indentation, I get 18k violations that look like:
[WARN] pathstuff/./app/src/testRelease/java/com/app/BuildConfigReleaseTest.java:41: 'method def rcurly' has incorrect indentation level 4, expected level should be 2. [Indentation]
Audit done.
These are what I want to abort on. Preferably list all of them, but even just reporting that Checkstyle needs to be run would be enough.
Thanks!
I have it working in that it runs Checkstyle, but it just outputs warnings.
This is being overridden inside the google_checks.xml file. By default, Checkstyle prints everything as errors; if anything else shows up, the configuration is overriding that default.
I do not see a way of turning these into errors
Open up google_checks.xml and look for the line similar to: <property name="severity" value="warning"/>
Change warning to error in the value attribute and it will print violations as errors.
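So the one-line change in google_checks.xml is:
<property name="severity" value="error"/>
Once violations are reported at error severity, Checkstyle's command line should exit with a non-zero status (that is how I recall its CLI behaving; verify against the 7.7 jar), so the RUN java -jar ... step fails and docker build aborts at that point.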

Travis-ci boost log compilation with biicode time-out

I am using Travis CI and biicode to build my project, which depends on Boost.Log. But Boost.Log takes longer than 10 minutes to build, so I get this message:
No output has been received in the last 10 minutes, this potentially indicates a
stalled build or something wrong with the build itself.
The build has been terminated
The build is working correctly; Boost.Log is just very slow to compile with limited resources (I tried to compile it on a VM with 1 CPU and 2 GB of RAM and it took close to 15 minutes).
I know this is happening because there is not enough verbose output going on, so I tried the following:
bii cpp:build -- VERBOSE=1
In the CMakeLists.txt, setting BII_BOOST_VERBOSE ON as mentioned here
Setting BOOST_LOG_COMPILE_FAST_ON as explained here
Using travis_wait
Actually travis_wait seems to be the correct solution, but when I put it in my .travis.yml like this:
script: travis_wait bii cpp:build
it doesn't output logs as it usually does and just times out after 20 minutes. I don't think the actual build is taking place.
What is the correct way to handle this problem?
This is a known issue, Boost.Log takes a long time to compile.
You can use travis_wait to call bii cpp:configure, but I'm with you, I need log feedback (no pun intended). However, I have tried that too and it led to a >50 min build, which means Travis aborts the build on free accounts :( Of course, my repo does not build Boost.Log only.
Just as a note, here's part of the settings.py file from the boost-biicode repo:
#Boost.Log takes so much time to compile, leads to timeouts on Travis CI
#It was tested on Windows and linux, works 'ok' (Be careful with linking settings)
if args.ci: del packages['examples/boost-log']
I'm currently working on a solution, launching asynchronous builds while printing progress. Check this issue. It will be ready this week :)
To speed up your build, try playing with the BII_BOOST_BUILD_J variable to set the number of threads you want for building the Boost components. Here's an example:
script:
  - bii cpp:configure -DBII_BOOST_BUILD_J=4
Be careful: more threads means more RAM needed at a time while compiling. Make sure you don't make the Travis job VM run out of memory.
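Putting both suggestions together, a hedged sketch of the .travis.yml (travis_wait takes an optional timeout in minutes; 30 is an arbitrary choice here, adjust as needed):
script:
  - travis_wait 30 bii cpp:configure -DBII_BOOST_BUILD_J=4
  - travis_wait 30 bii cpp:build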

Resources