How can I produce GitHub annotations by creating report files on disk?

I am trying to find a portable way to produce code annotations for GitHub that avoids vendor lock-in.
Mainly, I want to dump annotations into a file (YAML, JSON, ...) during the build process and have a task at the end that transforms this file into GitHub annotations.
The main goal here is to avoid hardcoding GitHub annotation support into the tools that produce the findings, so other CI/CD systems could also consume the annotation reports and display them in their UI:
linters -> annotations.report -> github-upload
Tools like flake8 are able to produce output in the parsable format file:line:column: message, but I need to know whether there is any attempt to standardize annotations so that we can collect and combine them from multiple tools and feed them to the CI/CD engine.
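For concreteness, here is a minimal sketch of the first transformation in that pipeline, written as a small Node/TypeScript filter; the JSON report schema is my own assumption, not an existing standard:
    // to-report.ts -- a sketch, not a standard: parse "file:line:column: message"
    // lines from stdin into a JSON annotations report
    import * as fs from 'fs';
    import * as readline from 'readline';
    interface Annotation { file: string; line: number; column: number; message: string; }
    const annotations: Annotation[] = [];
    const rl = readline.createInterface({ input: process.stdin });
    rl.on('line', (raw: string) => {
      const m = raw.match(/^(.+?):(\d+):(\d+):\s*(.*)$/);
      if (m) {
        annotations.push({ file: m[1], line: Number(m[2]), column: Number(m[3]), message: m[4] });
      }
    });
    rl.on('close', () => {
      fs.writeFileSync('annotations.report.json', JSON.stringify(annotations, null, 2));
    });
A run would then look like flake8 src | node to-report.js, and only the final upload step has to know anything about GitHub.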

Today I googled what the heck those "GitHub Actions annotations" are all about, and this was among the hits:
https://github.com/marketplace/actions/annotations-action
A GitHub action for creating annotations from a JSON file.
As of now, that page also contains:
"This repository uses npm packages from the @attest scope on GitHub; we are working hard to open source these packages."
"Annotations Action is not certified by GitHub. It is provided by a third party and is governed by separate terms of service, privacy policy, and support documentation."
I didn't try it; again, it was just a random Google hit.

I am currently using https://github.com/yuzutech/annotations-action
Sample action code:
    - name: Annotate
      uses: yuzutech/annotations-action@v0.3.0
      with:
        repo-token: ${{secrets.GITHUB_TOKEN}}
        input: ./annotations.json
        title: 'Findings'
        ignore-missing-file: true
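The input file is a list of findings in JSON. The field names below are an assumption on my part (they mirror what GitHub's Checks API calls an annotation); check the action's README for the authoritative schema:
    [
      {
        "file": "src/app.py",
        "line": 12,
        "title": "flake8",
        "message": "E501 line too long (86 > 79 characters)",
        "annotation_level": "warning"
      }
    ]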
It does its job well, with one minor defect. If you have findings on a commit/PR, you get to see each finding with a beautiful annotation right where you need it. If you re-push changes, even if the finding persists, the annotation is not displayed on later commits. I have opened an issue but have not yet received an answer.
The annotations-action mentioned above has not been updated and did not work for me at all (deprecated calls).
I haven't found anything else that worked exactly as I wanted it to.
Update: I found that you can use reviewdog to annotate based on findings. I also created a GitHub action that can be used for static code analysis here: https://github.com/tsigouris007/action-semgrep-reviewdog. You can visit the entrypoint.sh file and check how I piped the custom output to reviewdog using jq.
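Roughly, what that entrypoint does (this is a sketch of the idea, not a copy of the script; the jq filter assumes semgrep's JSON layout): reshape the findings into file:line:col: message lines and let reviewdog's errorformat turn them into annotations:
    semgrep --json . \
      | jq -r '.results[] | "\(.path):\(.start.line):\(.start.col): \(.extra.message)"' \
      | reviewdog -efm="%f:%l:%c: %m" -name=semgrep -reporter=github-pr-review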

Related

How to plug into grpc-java to modify code generation and add setNameOrClear(null) methods?

We are having too many issues around all this extra code for every database field, along the lines of:
    if (databaseObj.getName() != null)
        builder.setName(databaseObj.getName());
and I read that Square wired into protobuf to add setOrClear methods in Java. How do we do this when we generate with Gradle as well?
We are using the Gradle code from this page right now:
https://github.com/grpc/grpc-java
thanks,
Dean
You can accomplish that via protoc insertion points. When you generate the Java code you will see comments like // @@protoc_insertion_point(...). That is where the insertion will occur.
While appearing useful, this approach has serious drawbacks for .protos used in multiple projects. All projects using the same .proto in the same language should use the same plugins; otherwise you get a diamond dependency problem. This is why gRPC did not use this approach and instead generates its classes in files separate from the normal message generation. I strongly discourage this approach, as it paints you into a corner and you don't know when you will need to "pay the piper."
To insert into a point, your plugin needs to run in the same protoc command-line invocation as the java builtin. Your plugin then needs to set CodeGeneratorResponse.file.insertion_point and content for each file where you want to inject code.
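For illustration, a sketch of such an invocation (the plugin name protoc-gen-setorclear is hypothetical); both generators must write into the same output tree so protoc can splice the plugin's content into the insertion points of the freshly generated Java files:
    protoc \
      --plugin=protoc-gen-setorclear=./protoc-gen-setorclear \
      --java_out=build/generated \
      --setorclear_out=build/generated \
      database.proto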

Why do we need `afterEach(cleanup);`?

This is a question about unit testing (Jest + @testing-library/react).
Hi. I started using @nrwl/react these days.
It is an amazing product and I'm excited about monorepo projects with Nx.
By the way, there is afterEach(cleanup); in the generated template test file.
This is my sample project.
https://github.com/tyankatsu0105/reproducibility-react-test-nx/blob/master/apps/client/src/app/app.spec.tsx#L7
However, react-testing-library doesn't need cleanup when using Jest:
https://testing-library.com/docs/react-testing-library/api#cleanup
Please note that this is done automatically if the testing framework you're using supports the afterEach global (like mocha, Jest, and Jasmine). If not, you will need to do manual cleanups after each test.
In fact, I see an error when I remove afterEach(cleanup); from test files:
Found multiple elements with the text:
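That error is exactly what stale trees produce: each render appends a fresh container to document.body, and if nothing unmounts the previous one between tests, a later query matches elements from both trees. A minimal sketch of the failure mode (App and the "Welcome" text are hypothetical):
    import * as React from 'react';
    import { render, cleanup } from '@testing-library/react';
    import App from './app';
    // unmounts everything rendered during each test
    afterEach(cleanup);
    it('shows the welcome text', () => {
      const { getByText } = render(<App />);
      getByText('Welcome'); // exactly one match
    });
    it('shows the welcome text again', () => {
      const { getByText } = render(<App />);
      // without afterEach(cleanup) above, the first test's tree is still
      // mounted and this throws "Found multiple elements with the text"
      getByText('Welcome');
    });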
thanks!

How to add a custom link in a multibranch Jenkins Pipeline

We are archiving a lot of HTML reports in a Jenkins pipeline (scripted pipeline). These are accessible through the "Last Successful Artifacts" link on the job page as usual. But we would like to create an additional custom link that points to one of these reports (which is generated whether the build is successful or not).
I found the DocLink plugin, but it's not listed on the pipeline compatibility list and I'm not able to figure out how it could be used in a pipeline.
The HTML Publisher plugin is another one I was looking at. But it's not suited for our use case, since it requires us to gather all the reports and publish them again. It also puts all the content in an iframe, but all we need is a link to one of the already archived HTML reports.
Here is an example of adding summary links to a build:
    manager.createSummary("document.png").appendText("<a href='" + pom.url + "'>View Maven Site</a>", false)
As that method accepts HTML and can be used for XSS, you need to approve it:
https://jenkins.io/doc/book/managing/script-approval/
For more examples, look here: https://wiki.jenkins.io/display/JENKINS/Groovy+Postbuild+Plugin
For Pipeline, the Badge plugin was extracted from the Groovy Postbuild plugin, and it can create the summary using something like:
    createSummary icon: 'package.png', text: "<a href='$pom.url'>View Maven Site</a>"
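In a scripted pipeline this can sit right after the archiving step; the report path below is hypothetical:
    node {
        // ... build and archive the HTML reports ...
        archiveArtifacts artifacts: 'reports/**'
        // custom link straight to one of the archived reports
        createSummary icon: 'document.png',
                      text: "<a href='${env.BUILD_URL}artifact/reports/overview.html'>Overview report</a>"
    }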

Using "check" package causes another package to error

I'm using the Check package to validate parameters passed to Meteor methods, and I'm using Audit Argument Checks to enforce this.
However, I've added another package, Meteor Tags, and when I try to use methods from the Tags package, I get a server error: "Exception while invoking method '/patterns/addTag' Error: Did not check() all arguments during call to '/patterns/addTag'".
I think I understand why this error happens: the method in the Tags package doesn't check its inputs, so Audit Argument Checks generates an error. But I can't find any way around this, apart from 1) not enforcing checking, or 2) hacking the Tags package methods so they use check. Neither of these seems like a great option: checking server parameters is a good idea, and hacking a package is not very maintainable.
Does anybody know a smart way to use Audit Argument Checks with packages that provide new server methods? I have looked at the Check documentation and searched online, but I haven't found an answer.
I hope this question makes sense.
Using audit-argument-checks is like saying: "I want to be serious about the security of the methods in my app." It's global to all methods in your app's codebase, including the methods from your installed packages.
There is no way to specify which parts of the app get checked, as that would be the equivalent of saying: "I want to be serious about the security of the methods I've written, but I don't care about the security holes created by some packages" (which doesn't make a lot of sense).
Note to package authors
Check your method arguments. It's not hard, and it prevents this situation from happening. Frankly, a package without this basic security really shouldn't be installed in the first place.
What you should do
Unless you have a throwaway app, I wouldn't recommend removing audit-argument-checks. Instead I'd do the following (assuming the package really has something of value):
Open an issue on github and let the maintainer know what's up.
Fork the code, and add the required checks (see the sketch below). Keep this version as a local package.
Submit a pull request for the changes.
If all goes well, your PR will be accepted and everyone can benefit from the change. In the worst case, you'll still have a local copy that you can use in your app.
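The "required checks" in step 2 amount to one check call per argument at the top of the offending method. A minimal sketch, reusing the method name from the error above (the argument names are guesses):
    // in the forked package's server code
    Meteor.methods({
      '/patterns/addTag': function (tag, patternId) {
        check(tag, String);       // audit-argument-checks requires that
        check(patternId, String); // every argument is check()ed
        // ... original method body ...
      }
    });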

Get XML Reports in TeamCity from Google Test

I am trying to figure out how to run unit tests using Google Test and send the results to TeamCity.
I have run my tests and output the results to an XML file using the command-line argument --gtest_output="xml:test_results.xml".
I am trying to get this XML to be read by TeamCity. I don't see how I can get XML reports passed to TeamCity during build/run...
...except through XML Report Processing:
I added XML Report Processing, selected Google Test, and then...
...it asks me to specify monitoring rules, and I added the path to the XML file. I don't understand what monitoring rules are or how to create them...
[Still, I can see nowhere in the generated XML any indication that it intends to talk to TeamCity...]
In the log, I have:
    Google Test report watcher
    [13:06:03][Google Test report watcher] No reports found for paths:
    [13:06:03][Google Test report watcher] C:\path\test_results.xml
    [13:06:03]Publishing internal artifacts
And, of course, no report results.
Can anyone please direct me to a proper way to import the XML test results file into TeamCity? Thank you so much!
Edit: is it possible that XML Report Processing only processes reports that were created during the build (which Google Test's aren't, in my case), ignoring the previously generated reports as "out of date" while simply saying that it can't find them? Or are they in the wrong format, or... however I should read the message above?
I found a bug report which shows that XML reports not generated during the build are ignored, which can make a newbie like me believe that they were not generated correctly.
Two simple solutions:
1) Create a post-build script, or
2) add a build step that calls the command-line executable with the --gtest_output argument, then add the XML Report Processing build feature.
I had similar problems getting this to work; here is how I got it working.
When you call your Google Test executable from the command line, prepend %teamcity.build.checkoutDir% to the name of your XML file to set its path, like this:
    --gtest_output=xml:%teamcity.build.checkoutDir%test_results.xml
Then, when configuring your additional build features on the build steps page, add this line to your monitoring rules:
    %teamcity.build.checkoutDir%test_results.xml
Now the paths match and are in the build directory.
