How long does Coverity analysis take after a successful Travis build? - static-analysis

https://github.com/rajeshgopu/coverity_test
It has been showing as pending for a long time in the GitHub project status.
Does the Travis script have issues, or do I just have to wait to get the result from Coverity?

It was not triggering due to a missing "build_command_prepend" entry and incorrect indentation in the Travis script. Once the script was made to match the sample project, it worked.
Commit link : https://github.com/rajeshgopu/coverity_test/commit/5c373431de93b7b36e75dcf75037f0b947903708
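For reference, a minimal coverity_scan addon sketch with the keys correctly indented (the email and build commands are placeholders; the key names match the ones used in the Coverity question further down this page):

```yaml
addons:
  coverity_scan:
    project:
      name: rajeshgopu/coverity_test
      description: Build submitted via Travis CI
    notification_email: you@example.com   # placeholder
    build_command_prepend: ./configure    # placeholder prepend step
    build_command: make
    branch_pattern: coverity_scan
```

Note that `build_command_prepend`, `build_command`, and `branch_pattern` must sit at the same indentation level, as siblings of `project`.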

Related

`go test` only prints "open : no such file or directory"

I rewrote a program and removed a lot of code by commenting it out. After doing that and adding some tests, the program no longer runs.
Running go build produces no errors at all.
But when running go test I only get some odd output:
$ go test
2020/05/05 19:14:24 open : no such file or directory
exit status 1
FAIL fwew_lib 0.002s
This error occurs before a single test is even run, so it comes from the test setup itself.
Why is no file name shown for the file that is not found? Any idea what caused this error and how to fix it?
The error also occurs on multiple machines with Windows and Linux, and with Go 1.14.2 and Go 1.13.7.
To get this error yourself:
Repo: https://github.com/knoxfighter/fwew/tree/library
Branch: library
Just download the branch and run go test
Your fork is missing this line from the parent
texts["dictionary"] = filepath.Join(texts["dataDir"], "dictionary.txt")
But your fork still has this line which depends on the one mentioned above
Version.DictBuild = SHA1Hash(texts["dictionary"])
And so the SHA1Hash "fatals" out since you're essentially passing it an empty string.

How to detect compiler warnings in gitlab CI

While setting up CI builds on our GitLab server, I can't seem to find information on how to set up detection of compiler warnings. Example build output:
[100%] Building CXX object somefile.cpp.o
/home/gitlab-runner/builds/XXXXXXX/0/group/project/src/somefile.cpp:14:2: warning: #warning ("This is a warning to test gitlab") [-Wcpp]
#warning("This is a warning to test gitlab")
^
However, the build result is success instead of warning or something similar. Ideally the result would also be visible on the merge request of the feature branch (and block the merge if possible).
I can't imagine I'm the only one trying to achieve this, so I am probably looking in the wrong direction. The 'best' solution I found is to somehow manually parse the build output and generate a JUnit report.
How would I go about doing this without allowing the whole build job to fail? I would still like the job to fail when compiler errors occur.
Update
For anyone stumbling across this question later, and in lieu of a best practice, this is how I solved it:
stages:
  - build
  - check-warnings

shellinspector:
  stage: build
  script:
    - cmake -Bcmake-build -S.
    - make -C cmake-build > >(tee make.output) 2> >(tee make.error)
  artifacts:
    paths:
      - make.output
      - make.error
    expire_in: 1 week

analyse build:
  stage: check-warnings
  script:
    - "if [[ $(cat make.error | grep warning -i) ]]; then cat make.error; exit 1; fi"
  allow_failure: true
The first stage stores the build's error output in make.error; the next stage then greps that file for warnings and fails (with allow_failure: true), creating the "passed with warnings" pipeline status I was looking for.
It seems that the solution to such a need (e.g., see the issue "Add new CI state: has-warnings", https://gitlab.com/gitlab-org/gitlab-runner/issues/1224) has been to introduce the allow_failure option: one job is the compilation itself, which is not allowed to fail (if it does, the pipeline fails), and another job is the detection of warnings, which is allowed to fail (if a warning is found, the pipeline does not fail).
Also, the possibility of defining a warning regex in .gitlab-ci.yml has been requested, but it does not exist yet.
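If failing the pipeline outright on warnings is acceptable, an alternative is to promote warnings to errors at configure time (a sketch assuming GCC/Clang and CMake; the job name is illustrative):

```yaml
build-strict:
  stage: build
  script:
    # -Werror turns every compiler warning into an error, so the build
    # job itself fails and no separate check stage is needed
    - cmake -Bcmake-build -S. -DCMAKE_CXX_FLAGS="-Werror"
    - make -C cmake-build
```

The trade-off is that this does not produce a "passed with warnings" status: any warning fails the pipeline outright.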

Travis-ci boost log compilation with biicode time-out

I am using travis-ci and biicode to build my project, which depends on Boost.Log. But Boost.Log takes longer than 10 minutes to compile, so I get this message:
No output has been received in the last 10 minutes, this potentially indicates a
stalled build or something wrong with the build itself.
The build has been terminated
The build is working correctly; Boost.Log just takes a very long time to compile with limited resources (I tried to compile it on a VM with 1 CPU and 2 GB of RAM and it took around 15 min).
I know this is happening because there is not enough build output going on, so I tried the following:
>bii cpp:build -- VERBOSE=1
In the CMakeLists.txt, set BII_BOOST_VERBOSE ON as mentioned here
Set BOOST_LOG_COMPILE_FAST_ON as explained here
Using travis_wait
Actually, travis_wait seems to be the correct solution, but when I put it in my .travis.yml like this
script: travis_wait bii cpp:build
it doesn't output logs as usual and just times out after 20 min. I don't think the actual build is taking place.
What is the correct way to handle this problem?
This is a known issue, Boost.Log takes a long time to compile.
You can use travis_wait to call bii cpp:configure, but I'm with you, I need log feedback (no pun intended). However, I have tried that too and it led to a >50 min build, which means Travis aborts the build on free accounts :( Of course, my repo does not build Boost.Log only.
Just as a note, here's part of the settings.py file from the boost-biicode repo:
#Boost.Log takes so much time to compile, leads to timeouts on Travis CI
#It was tested on Windows and linux, works 'ok' (Be careful with linking settings)
if args.ci: del packages['examples/boost-log']
I'm currently working on a solution, launching asynchronous builds while printing progress. Check this issue. It will be ready for this week :)
To speed up your build, try playing with the BII_BOOST_BUILD_J variable to set the number of threads you want for building Boost components. Here's an example:
script:
- bii cpp:configure -DBII_BOOST_BUILD_J=4
Be careful: more threads means more RAM needed to compile at a time. Make sure you don't make the Travis job VM run out of memory.
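As a follow-up on the travis_wait attempt above: travis_wait accepts an optional minutes argument that extends the no-output timeout beyond the default 20 minutes; the wrapped command's output is buffered and only printed once it finishes, so the apparent silence is expected. A sketch combining it with the build-thread setting (the 40-minute value is an illustrative choice):

```yaml
script:
  - bii cpp:configure -DBII_BOOST_BUILD_J=4
  # Extend the silence timeout to 40 minutes; output appears at the end
  - travis_wait 40 bii cpp:build
```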

Coverity and "Your request for analysis of Phonations/TravisTest failed"

I'm trying to add Coverity Scan static analysis to my Qt Mac project but I'm not able to submit the build using travis.
Here is my coverity specific travis setup:
addons:
coverity_scan:
project:
name: Phonations/TravisTest
description: Build submitted via Travis CI
notification_email: martin@phonations.com
build_command_prepend: cov-configure --comptype clangcxx --compiler clang++ --template
build_command_prepend: qmake QtTest.pro
build_command: make -j 4
branch_pattern: coverity
And here is the result I got by mail:
Your request for analysis of Phonations/TravisTest is failed.
Analysis status: Failure
Please fix the error and upload the build again.
Error details:
Build uploaded has not been compiled fully. Please fix any compilation error. You may have to run bin/cov-configure as described in the article on Coverity Community. Last few lines of cov-int/build-log.txt should indicate 85% or more compilation units ready for analysis
For more detail explanation on the error, please check: https://communities.coverity.com/message/4820
If your build process isn't going smoothly, email us at scan-admin@coverity.com
with your cov-int/build-log.txt file attached for assistance, or post your issue
to the Coverity Community at https://communities.coverity.com/community/scan-(open-source)/content
There is no macOS-specific explanation in the documentation. Does anyone have an idea how to submit it?

Gradle TaskOutputs.upToDateWhen timing out?

I just tweaked my Gradle build to add an upToDateWhen closure which examines the lastModified time of a bunch of the task's output files (it's a scan of roughly 200-300 files). It runs just fine locally, but when I pushed it to my Jenkins build server (which runs a lot slower than my local machine) the closure appeared to return true but the task still ran. I saw this in the output:
myTask task is 0 days old...
Executing task ':myTask' (up-to-date check took 0.092 secs) due to:
No history is available.
followed immediately by output that is generated when the task is run. The first line is a println I put in the upToDateWhen closure to help me see what it evaluates to. Basically, if the latest modified file is less than 1 day old, I consider the task up to date; the zero means it should be returning true. (I just updated the build to print the return value and will try to run it again shortly.) While I initially thought it was just returning the wrong value, I double-checked and noticed the ABSENCE of the following, which shows in the output when run locally:
myTask task is 0 days old...
:myTask UP-TO-DATE
:myNextTask
So I'm questioning whether the upToDateWhen closure is being timed out on the build server and defaulting to false. Is there such logic in Gradle to time out an upToDateWhen closure and assume false if it takes too long, or am I reading/wishing too much into this?
No, there is no timeout functionality in the upToDateWhen closure.
Run your build with -i to enable info logging. This gives you more information about why a task is or is not considered up-to-date.
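The kind of closure described in the question can be sketched as follows (Groovy DSL; the task and directory names are hypothetical). Note also that the quoted "No history is available" line means Gradle has no previous execution state for the task (e.g., a fresh Jenkins workspace or wiped .gradle directory), in which case it re-runs the task regardless of what upToDateWhen returns:

```groovy
// A sketch of an age-based up-to-date check.
tasks.named('myTask') {
    outputs.upToDateWhen {
        def files = fileTree('build/generated').files
        if (files.empty) return false
        long newest = files*.lastModified().max()
        long ageDays = (System.currentTimeMillis() - newest) / (24 * 60 * 60 * 1000)
        println "myTask task is ${ageDays} days old..."
        ageDays < 1   // up to date if the newest output is under a day old
    }
}
```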
