Selecting Quality Gate for SonarQube Analysis in Jenkinsfile

I have a Jenkinsfile that, among other things, performs SonarQube analysis on my build and passes it through a 'Quality Gate' stage. The analysis is published to the SonarQube server, where I can see all the details. The relevant pieces of code for the analysis and quality gate stages are below (not mine, it is taken from the documentation):
stage('SonarCloud') {
    steps {
        withSonarQubeEnv('SonarQube') {
            sh 'mvn clean package sonar:sonar'
        }
    }
}
stage('Quality Gate') {
    steps {
        timeout(time: 15, unit: 'MINUTES') { // If the analysis takes longer than this, the build is aborted
            script {
                def qg = waitForQualityGate() // Wait for the analysis result to be processed on the server
                if (qg.status != 'OK') { // If the quality gate was not met, fail the pipeline
                    error "Pipeline aborted due to quality gate failure: ${qg.status}"
                }
            }
        }
    }
}
Currently, once the analysis is completed and published to the server, it uses the server's default quality gate. Can I specify which quality gate my analysis should use before proceeding to the 'Quality Gate' stage? (I have another quality gate set up, with different acceptance criteria, that I would like to use for that stage.)
Altering the default quality gate is not an option because other people are using it (which is why I have my own quality gate set up).
I have looked into the 'ceTaskUrl' link found in the report-task.txt file, but didn't get far with it (there is no variable I can see and use to select a quality gate).
I also found this Jenkinsfile. I tried to use some of its code, with additional googling on top of it, in hopes of accessing and altering the quality gate, but didn't get far with that either.
It is worth mentioning that I do not have admin privileges on the SonarQube server I am using. However, I can request a new quality gate to be configured as required, in case it is needed.

You can do this using the Web API, but for that you need the 'Administer Quality Gates' permission.
You can find more details in this answer:
How to assign Quality Gate dynamically to project from the script [SonarQube 6.5]?
Or, in case you don't get the proper permission, the alternative is to use the SonarQube UI, where you can specify which quality gate should be used for which project.
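For reference, here is a minimal sketch of how that Web API call could be scripted from the Jenkinsfile before the analysis runs. It assumes a token with the 'Administer Quality Gates' permission stored as a Jenkins credential; the server URL, project key and gate name are placeholders, and depending on the SonarQube version the endpoint expects gateName or the older gateId parameter.
stage('Assign Quality Gate') {
    steps {
        // Associate the project with the non-default gate via api/qualitygates/select
        withCredentials([string(credentialsId: 'sonar-admin-token', variable: 'SONAR_TOKEN')]) {
            sh 'curl -sf -u "$SONAR_TOKEN:" -X POST "https://my-sonarqube.example.com/api/qualitygates/select" --data-urlencode "projectKey=my.group:my-project" --data-urlencode "gateName=My Custom Gate"'
        }
    }
}
Running a stage like this before the 'SonarCloud' stage means the analysis is evaluated against your own gate instead of the server default.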

Related

Increase Ganache Gas Limit from Graphical Interface

I am struggling with Ganache. I have some big tests that I want to run, but it says:
"X ran out of gas. Something in the constructor (ex: infinite loop) caused gas estimation to fail. Try:
* Making your contract constructor more efficient
* Setting the gas manually in your config or as a deployment parameter
* Using the solc optimizer settings in 'truffle-config.js'
* Setting a higher network block limit if you are on a private network or test client (like ganache)."
I already increased the gas limit to the maximum in my truffle-config.js file, but it is not enough because it is capped at 6721975.
I saw some people talking about ganache-cli -l 30000000, but I don't have ganache on the command line.
My question is: how do I change this value?
Actually, you can't increase the gas limit of an existing workspace, but what you can do is create a new workspace and, in its initialization parameters, set a higher gas limit in the "Chain" settings.
Please note that, for the latest Truffle version, truffle-config.js is slightly different in the compilers section: you need to add a settings attribute around optimizer. Otherwise the optimizer will not be enabled and you will easily get the following error:
"YOUR CONTRACT" ran out of gas. Something in the constructor (ex: infinite loop) caused gas estimation to fail. Try:
* Making your contract constructor more efficient
* Setting the gas manually in your config or as a deployment parameter
* Using the solc optimizer settings in 'truffle-config.js'
* Setting a higher network block limit if you are on a private network or test client (like ganache).
The following is my truffle-config.js, for your reference:
module.exports = {
  // ...Please set your own contracts_directory and contracts_build_directory
  networks: {
    development: {
      host: "127.0.0.1",
      port: 8545,
      network_id: 5777, // Ganache
    }
  },
  compilers: {
    solc: {
      version: "0.8.4",
      settings: { // need to add "settings" before "optimizer" for the latest truffle version
        optimizer: {
          enabled: true, // enable the optimizer
          runs: 200,
        },
        evmVersion: "berlin",
      },
    },
  },
}

How can I execute a specific feature file before the execution of remaining feature files in goDog?

I have some data setup to do before running the remaining test cases. I have grouped all the data setup that needs to run before the test cases into a single feature file.
How can I make sure that this data setup feature file is executed before any other feature file in the godog framework?
As I understand your question, you're looking for a way to run some setup instructions before your features/scenarios run. The problem is that scenarios and features are, by design, isolated. The way to ensure that something is executed before a scenario runs is to define a Background section. AFAIK you can't apply the same Background across features: scenarios are grouped per feature, and each feature can specify a Background that is executed before each of its scenarios. I'd just copy-paste your setup stuff everywhere you need it:
Background:
    Given I have the base data:
        | User | Status   | other fields |
        | Foo  | Active   | ...          |
        | Bar  | Disabled | ...          |
If there are a ton of steps involved in your setup, you can define a single step that expands to run all the "background" steps, like so:
Scenario: test something
    Given my test setup runs
Then implement the "my test setup runs" step like so:
s.Step(`^my test setup runs$`, func() godog.Steps {
    return godog.Steps{
        "user test data is loaded",
        "other things are set up",
        "additional data is updated",
        "update existing records",
        "setup was successful",
    }
})
That ought to work.
Of course, to avoid having to start each scenario with that "Given my test setup runs" line, you can just start each feature file with:
Background:
    Given my test setup runs
That will ensure the setup is performed before each scenario. The upshot will be: 2 additional lines at the start of each feature file, and you're all set to go.
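For completeness, here is a rough sketch of how those step definitions might be wired up in Go. The package, function and nested step names are illustrative, and this uses the newer *godog.ScenarioContext registration API (the snippet above uses the older suite-style s.Step, which works the same way):
package bdd // hypothetical package name

import "github.com/cucumber/godog"

func InitializeScenario(ctx *godog.ScenarioContext) {
    // The umbrella step: returning godog.Steps makes godog run the listed steps as nested steps.
    ctx.Step(`^my test setup runs$`, func() godog.Steps {
        return godog.Steps{
            "user test data is loaded",
            "other things are set up",
        }
    })

    // Each nested step still needs its own definition.
    ctx.Step(`^user test data is loaded$`, func() error {
        // load fixtures / seed test data here
        return nil
    })
    ctx.Step(`^other things are set up$`, func() error {
        return nil
    })
}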

How to match the number of tests run in the Maven log with the source code?

I am trying to link the number of tests run reported in the log file at https://api.travis-ci.org/v3/job/29350712/log.txt, for Facebook's presto project, with the actual tests in the source code.
The source code for this build run is located at the following link: https://github.com/prestodb/presto/tree/bd6f5ff8076c3fdb2424406cb5a10f506c7bfbeb/presto-raptor/src/test/java/com/facebook/presto/raptor
I am counting the number of places where '@Test' appears in the source code; this should match the number of tests run in the log file.
In most cases it works, but for some subprojects, like 'presto-raptor', the log reports 329 tests run while I found '@Test' only 27 times in the source code.
I noticed that some tests are preceded by: @Test(singleThreaded = true)
Here is an example, from the following link:
https://github.com/prestodb/presto/blob/bd6f5ff8076c3fdb2424406cb5a10f506c7bfbeb/presto-raptor/src/test/java/com/facebook/presto/raptor/metadata/TestRaptorSplitManager.java
@Test(singleThreaded = true)
public class TestRaptorSplitManager
{
I expected the number of '@Test' annotations to match the number of tests run in the log file, but it seems the tests are being run in parallel (multi-threaded).
My question is: how do I match the 329 tests run with the real test cases in the source code?
TestNG counts the number of tests based on the following (apart from the regular way of counting tests):
Data-driven tests are counted as new tests. So if you have a @Test that is powered by a data provider (and let's say the data provider yields 5 sets of data), then to TestNG there were 5 tests run.
Tests with multiple invocation counts are also counted as individual tests (so, for example, if you have @Test(invocationCount = 5), TestNG will report this test as 5 tests in its reports, which is what the Maven console is showing as well).
So I am not sure how you could build a matching capability that cross-checks this against the source code, especially when your tests involve a data provider.
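To make that concrete, here is a small illustrative example (the class, method and data values are made up): a single test class containing only two @Test annotations that TestNG, and therefore the Maven console, will report as 10 tests run.
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class CountingExample {

    // 5 rows of data -> the data-driven test below is reported as 5 tests run
    @DataProvider(name = "users")
    public Object[][] users() {
        return new Object[][] { {"alice"}, {"bob"}, {"carol"}, {"dave"}, {"eve"} };
    }

    @Test(dataProvider = "users")
    public void dataDrivenTest(String user) {
        // assertions for each user would go here
    }

    // invocationCount = 5 -> reported as another 5 tests run
    @Test(invocationCount = 5)
    public void repeatedTest() {
    }
}
So a simple count of @Test occurrences in the source (2 here) will undercount what the log reports (10 here).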

How do I add the job status (success or failure) in the email notification?

I want to add the job status to the subject line of the Octopus Deploy email notification. Can you please tell me the system variable to use, or another way to add the status?
As a workaround you could use two steps for sending the status email:
One step sending the 'success' email. This step should have its 'Run condition' set to 'Success: only run when previous steps are successful'.
Another step sending the 'failure' email. This step should have its 'Run condition' set to 'Failure: only run when a previous step failed'.
Perhaps the system variables Octopus.Deployment.Error and Octopus.Deployment.ErrorDetail could also be helpful.
Tracking deployment status
During deployment, Octopus provides variables describing the status of each step.
Where S is the step name, Octopus will set:
Octopus.Step[S].Status.Code
Octopus.Step[S].Status.Error
Octopus.Step[S].Status.ErrorDetail
Status codes include Pending, Skipped, Abandoned, Cancelled, Running, Succeeded and Failed.
Source: http://docs.octopusdeploy.com/display/OD/System+variables#Systemvariables-DeploymentStatusTrackingdeploymentstatus
So, to apply this to your email subject (assuming you're using the inbuilt Send Email step), insert the variable into the subject field; the step editor gives you quick access to the variables list. You probably want the value to be something close to this:
Deployment Status = #{Octopus.Step[Other Step Name].Status.Code}
As an extension to this answer: you can iterate over all steps and output their status.
Syntax here: http://docs.octopusdeploy.com/display/OD/Variable+Substitution+Syntax#VariableSubstitutionSyntax-Repetition (look for the Repetition heading)
Write-Host "Deployment Steps:"
#{each step in Octopus.Step}
Write-Host "- StepName=#{step}; Status=#{step.Status.Code};"
#{/each}
Example output
Deployment Steps:
- StepName=FirstStep; Status=Succeeded;
- StepName=ThisStep; Status=Running;
- StepName=YetToBeRun; Status=Pending;
You could also use variable expressions and the deployment error variable to achieve this in the email subject field. For example:
State of deployment: #{unless Octopus.Deployment.Error}Success#{/unless} #{if Octopus.Deployment.Error}Failure#{/if}

Displaying the number of tests which failed with a certain tag (Ruby, Cucumber, in Jenkins)

I'm considering adding @high_priority and @low_priority tags to certain tests in our test suite in order to find out how many high-priority (high-risk) tests have failed. Ideally I'd like a column in Jenkins next to the test job which displays
1/100 high priority and 8/60 low priority tests failed.
Though I'm happy with similar output in the console output if necessary.
Currently the Jenkins jobs run a command line execution like:
cucumber --tags @AU_smoke ENVIRONMENT=beta --format html --out 'C:\git\testingworkspace\Reports\smoke_BETA_test_report.html' --format pretty
Edit:
Adding extra jobs isn't really a solution; we have a large number of jobs which run subsets of all of the tests, so adding extra jobs for high and low priority would require tripling the number of jobs we have.
I've settled on using the Description Setter plugin together with the Extra Columns plugin. This allows me to add the build description as a column in my views, and in my code I have:
After do |scenario|
  if scenario.status.to_s == "passed"
    $passed = $passed + 1
  elsif scenario.status.to_s == "failed"
    $failed = $failed + 1
    puts "FAILED!"
  elsif scenario.status.to_s == "undefined"
    $undefined = $undefined + 1
  end
  $scenario_count = $scenario_count + 1
  if scenario.failed?
    Dir::mkdir('screenshots') if not File.directory?('screenshots')
    screenshot = "./screenshots/FAILED_#{scenario.name.gsub(' ', '_').gsub(/[^0-9A-Za-z_]/, '')}.png"
    @browser.driver.save_screenshot(screenshot)
    puts "Screenshot created: #{screenshot}"
    embed screenshot, 'image/png'
    # @browser.close
  end
  # @browser.close
end

at_exit do
  end_time = Time.now
  elapsed_time = end_time.to_i - $start_time.to_i
  puts "\#description#scenarios total: #{$scenario_count}, passed: #{$passed}, failed: #{$failed}, known bug fails: #{$known_bug_failures}, undefined: #{$undefined}.#description#"
...
Then in the Description Setter plugin I use the regex
/#description#(.+)#description#/
and use the first match group as the build description. This also allows me to look at a job's build history and see at a glance how many tests there were and how many passed over the previous few weeks.
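For context, the counters and start time used in the hooks above have to be initialised before the run starts. A minimal sketch of what that might look like, assuming the usual features/support/env.rb location (the file path and the $known_bug_failures counter are assumptions based on the code above):
# features/support/env.rb
# Initialise the globals that the After hook and at_exit block rely on.
$passed = 0
$failed = 0
$undefined = 0
$known_bug_failures = 0
$scenario_count = 0
$start_time = Time.now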
