VSTS: has anybody managed to use a "secure file" in azure-pipelines.yml? - continuous-integration

I would like to use the "Download secure file" task in an Azure DevOps pipeline - and that works exactly as expected within a task in a "release pipeline" (under "Releases"). However, when I try to do the same in a "Builds" task in azure-pipelines.yml I get "file some-uuid not found".
From the official documentation I cannot see any difference between using a task in "Builds" and in "Releases" - it just refers to tasks, no matter where they are used.
Is there anything I can do to access one of my secure files from the library in an azure-pipelines.yml ("Builds" task)?
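For reference, the step I am using looks roughly like the following (the task reference name and secure-file name here are placeholders, not my real values):
steps:
- task: DownloadSecureFile@1
  name: myCert                        # reference name used to read the output variable
  inputs:
    secureFile: 'my-certificate.pfx'  # name of the secure file in the Library
- script: echo "Secure file downloaded to $(myCert.secureFilePath)"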

I found the answer here:
https://github.com/Microsoft/azure-pipelines-agent/issues/1809
Under SOME circumstances Azure scans the azure-pipelines.yml behind the scenes, and when it finds a reference to a resource such as a Library 'secure file' or a 'service connection', it silently grants the appropriate permissions - so executing the build script won't run into an error.
BUT this scan for resources does NOT happen on every commit - only when the azure-pipelines.yml is newly created or a variable is added or changed.
So normal editing, i.e. writing and committing azure-pipelines.yml, will not (re-)trigger such a scan - and if you later add tasks that require a secure file or service connection you will get an error saying 'file not found' or 'insufficient permissions'.
The easiest way to force a rescan (with the permission adjustment) is to go to the Variables tab and e.g. change the variable system.debug from false to true - or add a new variable foo = bar.
I was not able to find any of these hints, or any background on this behavior, in the official docs - at least not in a context that helped relate it to this problem. As of this writing (Nov 2018) it is not clear whether this is a bug or a feature; in any case it would be helpful if Microsoft could extend the troubleshooting instructions behind https://aka.ms/yamlauthz, the link included in the error message.
It seems that this scan-or-no-scan behavior is specific to 'build' pipelines defined in azure-pipelines.yml - that is why no such error appears in 'release' pipelines.

Related

How can I produce GitHub annotations by creating report files on disk?

I am trying to find a portable way to produce code annotations for GitHub in a way that avoids vendor lock-in.
Mainly I want to dump annotations into a file (YAML, JSON, ...) during the build process and have a task at the end that transforms this file into GitHub annotations.
The main goal here is to avoid hardcoding GitHub-annotation support into the tools that produce the findings, so that other CI/CD systems could also consume the annotation reports and display them in their UI.
linters -> annotations.report -> github-upload
Tools like flake8 are able to produce output in a parsable format (file:line:column: message), but I need to know if there is any attempt to standardize annotations so we can collect and combine them from multiple tools and feed them to the CI/CD engine.
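To illustrate what I have in mind, a tool-neutral report could look something like the sketch below - this is just a made-up format for illustration, not an existing standard:
# annotations.report (hypothetical, tool-neutral format)
annotations:
- file: src/app/main.py
  line: 42
  column: 80
  severity: warning        # e.g. notice | warning | failure
  message: "E501 line too long (89 > 79 characters)"
  tool: flake8
- file: src/app/util.py
  line: 3
  column: 1
  severity: failure
  message: "F401 'os' imported but unused"
  tool: flake8
Each linter would append its findings to this file, and a single final task would translate it into whatever the hosting CI/CD system expects (GitHub annotations in my case).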
Today I googled what the heck those "GitHub Action Annotations" are all about, and this was among the hits:
https://github.com/marketplace/actions/annotations-action
GitHub action for creating annotations from JSON file
As of now that page also contains:
This repository uses npm packages from @attest scope on github; we are working hard to open source these packages.
Annotations Action is not certified by GitHub. It is provided by a third-party and is governed by separate terms of service, privacy policy, and support documentation.
I didn't try it, again, just a random google hit.
I am currently using https://github.com/yuzutech/annotations-action
Sample action code:
- name: Annotate
  uses: yuzutech/annotations-action@v0.3.0
  with:
    repo-token: ${{secrets.GITHUB_TOKEN}}
    input: ./annotations.json
    title: 'Findings'
    ignore-missing-file: true
It does its job well, with one minor defect. If you have findings on a commit/PR you get to see each finding with a beautiful annotation right where you need it. If you re-push changes, the annotation is not displayed on later commits, even if the finding persists. I have opened an issue but have not yet received an answer.
The annotations-action mentioned above has not been updated and does not work for me at all (deprecated calls).
I haven't found anything else that worked exactly as I wanted it to.
Update: I found that you can use reviewdog to annotate based on findings. I also created a GitHub action that can be used for Static Code Analysis here https://github.com/tsigouris007/action-semgrep-reviewdog. You can visit the entrypoint.sh file and check how I piped the custom output to reviewdog utilizing jq.

Google deployment manager runtime policy metadata

What is the difference between the Google Deployment Manager UPDATE_ON_CHANGE and UPDATE_ALWAYS metadata runtime policies? An example highlighting the difference would be very useful.
I searched through the documentation but could not find any useful references. There are a few hints in the GitHub repository, but they are succinct rather than verbose.
UPDATE_ALWAYS - call the API for create or update changes in the deployment
CREATE - only call on create
UPDATE_ON_CHANGE - call when the action changes
DELETE - call on deletes
This is the closest to a definition I could find.
Reference - https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/cloudbuild
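For illustration, here is a rough sketch of where runtimePolicy goes on an action resource, adapted from my memory of the cloudbuild example linked above (treat the action type and properties as placeholders and double-check them against the sample):
resources:
- name: run-build
  action: gcp-types/cloudbuild-v1:cloudbuild.projects.builds.create
  metadata:
    runtimePolicy:
    - UPDATE_ON_CHANGE   # the Cloud Build API is called only when this action's properties change
  properties:
    steps:
    - name: gcr.io/cloud-builders/gcloud
      args: ['version']
    timeout: 120s
As far as I understand it, with UPDATE_ON_CHANGE the API call is skipped on deployment updates where the action's properties are unchanged, whereas UPDATE_ALWAYS re-issues the API call on every update of the deployment.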

Sonar Gerrit plugin not reporting results

We use a Jenkins pipeline, and after the build completes successfully we run the following:
bat "mvn sonar:sonar -B -s ${buildSettings} -Dsonar.analysis.mode=preview -Dsonar.skipDesign=true -Dsonar.report.export.path=sonar-report.json"
sonarToGerrit(severity: 'Major', postScore: true, category: 'Code-Review', newIssuesOnly: true, issuesScore: '0', noIssuesScore: '0', changedLinesOnly: true)
The build log below shows that it found a good number of issues, and yet the number of issues to be commented is 0.
Build log
Other posts suggest that it may not be finding the report, but I don't believe that's the case, since it did find a number of issues. Any pipeline configuration advice would be much appreciated.
We are using Sonar Gerrit plugin version 2.2.1, Gerrit Trigger 2.27.3, and Jenkins Enterprise version 2.60.3.1.
The most common cause in cases like yours is that SonarQube checks the whole project, regardless of how much was actually modified in the particular change. When creating a new report, it compares the result with the result stored in its database (the issues previously found in a mode other than preview). Thus Sonar marks as new every issue it was not aware of before (and if you don't store this information in SonarQube at all, all issues will be marked as new). But the sonar-gerrit plugin can only post issues to files that were affected by the change it is verifying. So, even if you set both "newIssuesOnly" and "changedLinesOnly" to "false", all issues in files not affected by the change will be ignored.
In short, check that the issues marked with "isNew"="true" in your Sonar report are located on lines that were actually changed (for changedLinesOnly=true) or in the files changed by the commit you are trying to check (for changedLinesOnly=false).
Another possible reason is the project configuration. If your files are part of a submodule, you'll need to add the submodule name to the set of project base directories. Or you may want to try the "allow auto match" feature instead; it tries to match SonarQube modules to Gerrit names automatically (available since 2.1).
Some advice regarding your pipeline code, not related to your question:
At the moment, recognition of the severity setting (and other enum values) is case-sensitive. In effect, the plugin ignores your "Major" setting because it cannot recognize it and falls back to the default "INFO" value.
Another thing: I don't see why you set "postScore" to true when both "issuesScore" and "noIssuesScore" are 0. You can just set postScore=false and skip these settings, along with "category", for the sake of simplicity.
Also, if you use a plugin version above 2.0, be aware that the API has changed slightly and now uses the following structure:
sonarToGerrit(
    reviewConfig: [
        issueFilterConfig: [
            severity: 'MAJOR',
            newIssuesOnly: false,
            changedLinesOnly: false
        ],
        noIssuesTitleTemplate: 'Your text here',
        someIssuesTitleTemplate: 'Your text here',
        issueCommentTemplate: 'Your text here'
    ]
)
Though your code should also work (the plugin does support the previous syntax), there is a bigger chance of running into a bug with it.
I am also facing the same issue with the Sonar-Gerrit Jenkins plugin, downloaded from the Jenkins plugins site. I am using Sonar-Gerrit plugin 2.2.1 and running the Sonar analysis against the Jenkins workspace.
As a test, I changed just one file, set the project base directory to the path of that file, and ran the Sonar analysis in issues mode.
Issues are not loaded into Gerrit; the logs show:
Report has loaded and contains 759 issues
Issues to be commented: 0
Issues to be involved in score calculation: 0
Review has been sent

Get XML Reports in TeamCity from Google Test

I am trying to figure out how to run unit tests, using Google Test, and send the results to TeamCity.
I have run my tests and output the results to an XML file, using the command-line argument --gtest_output="xml:test_results.xml".
I am trying to get this XML read by TeamCity. I don't see how I can get XML reports passed to TeamCity during build/run...
Except through XML report Processing:
I added XML Report Processing, added Google Test, then...
it asks me to specify monitoring rules, and I added the path to the XML file... I don't understand what monitoring rules are, or how to create them...
[Still, nowhere in the generated XML can I see anything indicating that it intends to talk to TeamCity...]
In the log, I have:
Google Test report watcher
[13:06:03][Google Test report watcher] No reports found for paths:
[13:06:03][Google Test report watcher] C:\path\test_results.xml
[13:06:03]Publishing internal artifacts
And, of course, no report results.
Can anyone please direct me to a proper way to import the XML test results file into TeamCity? Thank you so much!
Edit: is it possible that XML Report Processing only processes reports that were created during the build (which Google Test doesn't do?), and ignores previously generated reports as "out of date", while simply saying that it can't find them - or that they are in the wrong format, or... however I should read the message above?
I found a bug report which shows that XML reports not generated during the build are ignored, which made a newbie like me believe that they might not be generated correctly.
Two simple solutions:
1) Create a post-build script
2) Add a build step that calls the test executable from the command line with the --gtest_output argument. Example:
Add build step
Add build feature - XML report processing
I had similar problems getting it to work. This is how I got it working.
When you call your Google Test executable from the command line, prepend %teamcity.build.checkoutDir% to the name of your XML file to set its path, like this:
--gtest_output=xml:%teamcity.build.checkoutDir%test_results.xml
Then when configuring your additional build features on the build steps page, add this line to your monitoring rules:
%teamcity.build.checkoutDir%test_results.xml
Now the paths match and are in the build directory.

Trigger option to set specific build parameters?

I'm looking for a way to attach a specific build parameter to a scheduled trigger.
The idea is that we are continuously building debug versions of our products. Our nightly build has to be a release build, though. The build configurations for most of our projects are absolutely the same; they even have a configuration parameter for this already. So all I would need is a trigger which allows specifying an override for a single build parameter. That would cut the number of build configurations to maintain in half.
Is there a way to achieve this?
Not right now; you can follow this issue.
The approach I use is to create a "Deploy :: Dev D1 :: Run all integration tests" build. I then create a build trigger on each integration service build.
I create a parameter called "env.OctopusEnvironment" for the integration service build and set the value to be empty. I like to use the 'prompt' display option:
select display='prompt' label='OctopusEnvironment' data_13='Production' data_12='CI' data_11='Local - Hassan' data_10='Local - Mustafa' description='OctopusEnvironment' data_02='Test T1' data_01='Dev D1' data_04='Local - Taliesin' data_03='Continuous Deployment CI 1' data_06='Local - Paulius' data_05='Local - Ravi' data_08='Local - Venkata' data_07='Local - Marko' data_09='Local - Ivan'
In each integration service build I add this PowerShell step:
$octopusEnvironment = ($env:OctopusEnvironment).Trim()
Write-Host "Octopus environment = '$octopusEnvironment'"
if ($octopusEnvironment.Length -lt 1) {
    Write-Host "Auto detecting octopus environment"
    $trigger = '%teamcity.build.triggeredBy%' -split '::'
    if ($trigger.Length -gt 2) {
        $environment = $trigger[1].Trim()
        Write-Host "##teamcity[setParameter name='env.OctopusEnvironment' value='$environment']"
    }
}
So now I can run the integration tests via a trigger, and when I run them directly I am prompted for the environment to run the integration tests against.
I was stuck with the same problem and voted for the issue mentioned by Evgeny. One solution we considered, as mentioned by sergiussergius, was to add a final step to the build-step sequence that manually triggers the next build configuration, passing custom build parameters using the REST API. But in this case we lose the build-chain information.
Using TeamCity 9.x and experimenting with the REST API, I was able to implement a solution that makes it possible to retrieve the triggering (ancestor) build and its parameters from the triggered (child) build.
The first thing we do is get the current build using the environment variables set by TeamCity:
https://<host>/httpAuth/app/rest/builds/number:<env.BUILD_NUMBER>,buildType:(name:<env.TEAMCITY_BUILDCONF_NAME>,project:<env.TEAMCITY_PROJECT_NAME>)
In the response from the REST API, we have a /build/triggered tag which contains information about the trigger. It looks like this:
<triggered type="unknown" details="##triggeredByBuildType='<triggering-build-configuration-internalId>' triggeredByBuild='<triggering-build-number>'" date="20160105T190642+0700"/>
The <triggering-build-configuration-internalId> looks like btxxx for us.
From it, we can access the triggering-build (ancestor) using the following request to the REST API:
https://<host>/httpAuth/app/rest/builds/number:<triggering-build-number>,buildType:(internalId:<triggering-build-configuration-internalId>,project:name:<env.TEAMCITY_PROJECT_NAME>)
From the response, we can get the ancestor build's parameter values and set them in the current build using:
echo "##teamcity[setParameter name='env.ENV_AAA' value='aaaaaaaaaa']"
Notes:
This post references TeamCity version 7.x. I did this using TeamCity version 9.x and could not try it with a previous version, so I don't know whether the REST API calls mentioned in my post are similar in earlier versions.
In this solution, the ancestor's build configuration (the one that triggers the build) and the child's build configuration (the one triggered) are in the same project. I did not test with build configurations in two different projects; I would expect the "triggered" tag to provide information about the ancestor's project as well. It would be nice if someone could test that.
I hope this solution may help!
This is not a general solution, but in certain cases (for example if you want to determine whether the build was started by a schedule trigger or some other method), a workaround is to examine the predefined parameter teamcity.build.triggeredBy.
This parameter is set to the same string that is shown on the build's overview page next to the label "Triggered by:". For example, "Schedule Trigger", "Git", or a user's full name. (There is also a teamcity.build.triggeredBy.username parameter, but it is only set in the latter case).
The limitation of this approach is that you cannot, for example, distinguish between two separate schedule triggers defined for the same build configuration. But in that case you could resort to examining the current time as well.
I added a request to the last build step:
curl -i -u "%login%:%pass%" -H "Content-type: text/plain" -X PUT -d "v1" http://tc.server/httpAuth/app/rest/buildTypes/id:%buildConfigurationId%/parameters/env.%SOME_PARAMETER%
http://confluence.jetbrains.com/display/TCD8/REST+API
