I want to publish a JMeter report in Jenkins and mark the pipeline as "failed" if there is a performance degradation compared to the previous executions.
Below is the configuration in my Jenkinsfile for my JMeter results:
perfReport filterRegex: '',
relativeFailedThresholdNegative: 0,
relativeFailedThresholdPositive: 0,
relativeUnstableThresholdNegative: 0,
relativeUnstableThresholdPositive: 0,
sourceDataFiles: 'resultsJmeter/output/*.xml'
Is there any way to automatically evaluate the previous executions (if any)?
As per the Performance Trend Reporting article:
You can configure the error percentage thresholds and the relative percentage thresholds which would make the project unstable or failed, or set them to -1 to disable the feature.
As per How to Use the Jenkins Performance Plugin, set the following values:
Unstable: 10
Failed: 60
This configuration will mark the build as unstable when 10% of requests fail and as failed when 60% of requests fail.
The relevant pipeline syntax would be:
perfReport errorFailedThreshold: 60, errorUnstableThreshold: 10, filterRegex: '', sourceDataFiles: 'resultsJmeter/output/*.xml'
For thresholds relative to the previous build:
perfReport relativeFailedThresholdNegative: 10, relativeFailedThresholdPositive: 60, relativeUnstableThresholdNegative: 5, relativeUnstableThresholdPositive: 30, sourceDataFiles: 'resultsJmeter/output/*.xml', filterRegex: ''
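Putting the two together, a minimal declarative pipeline sketch (the stage name and result path are assumptions about your setup) that applies both the absolute error thresholds and the thresholds relative to the previous build:

```groovy
pipeline {
    agent any
    stages {
        stage('Performance Test') {
            steps {
                // ... run JMeter here, writing results to resultsJmeter/output/ ...
                perfReport sourceDataFiles: 'resultsJmeter/output/*.xml',
                        filterRegex: '',
                        // absolute error-rate thresholds (percent of failed requests)
                        errorUnstableThreshold: 10,
                        errorFailedThreshold: 60,
                        // thresholds relative to the previous build
                        relativeUnstableThresholdNegative: 5,
                        relativeUnstableThresholdPositive: 30,
                        relativeFailedThresholdNegative: 10,
                        relativeFailedThresholdPositive: 60
            }
        }
    }
}
```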
Related
I have a TeamCity step that runs this script:
"teamcity:eslint": eslint src/**/*.{ts,tsx} --format ./node_modules/eslint-teamcity/index.js --max-warnings 0
It uses eslint-teamcity to format the lint error/warning results.
This is the package.json configuration:
"eslint-teamcity": {
"reporter": "inspections",
"report-name": "ESLint Violations",
"error-statistics-name": "ESLint Error Count",
"warning-statistics-name": "ESLint Warning Count"
},
I created a test "master" branch with 2 lint warnings and TeamCity "Inspections" shows them:
I have set this Failure Condition:
Now, to test it I created a branch with 3 or 4 lint warnings.
I committed it, but the build does not fail even though the number of warnings has increased:
I expect the build to fail.
I have no idea how or where TeamCity stores the "inspection" warnings counter for that Failure Condition, so I do not know how to investigate this unexpected behaviour.
Or did I miss some step/configuration?
TeamCity 2019.2
Failure Condition code:
failOnMetricChange {
    metric = BuildFailureOnMetric.MetricType.INSPECTION_WARN_COUNT
    units = BuildFailureOnMetric.MetricUnit.DEFAULT_UNIT
    comparison = BuildFailureOnMetric.MetricComparison.MORE
    compareTo = build {
        buildRule = buildWithTag {
            tag = "test-master"
        }
    }
    stopBuildOnFailure = true
}
As an example for the Failure Condition, I took another TeamCity project that was checking test coverage.
I took it for granted (I didn't pay attention) that the tag in the Failure Condition was actually pointing at the "master" branch for reference.
At the end of the log (many other steps ran after my attempt) I finally saw this warning:
Cannot find Latest build with tag: 'test-master', branch filter: feature/test-warnings to calculate metric 'number of inspection warnings' for branch feature/test-warnings
I am still not sure whether this also means the comparison cannot be done against another branch, but this answers my question about the Failure Condition not working as expected.
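Given that warning, one way around the branch-scoped tag lookup (a sketch, assuming the standard TeamCity Kotlin DSL) is to compare against the last successful build rather than a tagged one, so the reference build is resolved within the current branch:

```kotlin
failOnMetricChange {
    metric = BuildFailureOnMetric.MetricType.INSPECTION_WARN_COUNT
    units = BuildFailureOnMetric.MetricUnit.DEFAULT_UNIT
    comparison = BuildFailureOnMetric.MetricComparison.MORE
    compareTo = build {
        // compare against the last successful build instead of a tagged build,
        // so the lookup does not depend on a tag existing in the current branch
        buildRule = lastSuccessful()
    }
    stopBuildOnFailure = true
}
```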
I am running a JMeter test using a Jenkins pipeline and the Performance plugin, but the build is failing with the message below:
TG01_TS01_RTPSR_WB_HR24_NavigateThroughWebsiteGroups - JSR223_Reset Variables for next iteration 0 1 192233720368547760.00%
The label "TG01_TS01_RTPSR_WB_HR24_NavigateThroughWebsiteGroups - JSR223_Reset Variables for next iteration" caused the build to fail
My Jenkins pipeline parameters are:
perfReport compareBuildPrevious: true, filterRegex: '', ignoreFailedBuilds: true, ignoreUnstableBuilds: true, modeOfThreshold: true, relativeFailedThresholdPositive: 80.0, relativeUnstableThresholdPositive: 80.0, sourceDataFiles: "/Results/${params.reportName}.csv"
My expectation is that you have a JSR223 Sampler somewhere in your test plan which is doing some helper work, and the current build is failing because this sampler's response time was higher than the threshold you configured.
I don't think you should include the metrics for this JSR223 Sampler in your reports; just add the next line somewhere in it:
SampleResult.setIgnore()
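In context, the helper sampler's JSR223 script might end like this (a sketch; the variable name is hypothetical, and SampleResult is the standard JSR223 binding):

```groovy
// ... helper work, e.g. resetting iteration-scoped variables ...
vars.remove('iterationCounter') // 'iterationCounter' is a hypothetical variable name

// exclude this helper sampler's result from listeners and from the
// files the Performance plugin reads, so it cannot trip the thresholds
SampleResult.setIgnore()
```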
More information:
SampleResult.setIgnore()
Top 8 JMeter Java Classes You Should Be Using with Groovy
In quite a few cases, I noticed that the JUnit test output was truncated.
e.g. https://builds.apache.org/job/HBase-2.0-hadoop3-tests/org.apache.hbase$hbase-server/218/testReport/junit/org.apache.hadoop.hbase.master.procedure/TestDisableTableProcedure/org_apache_hadoop_hbase_master_procedure_TestDisableTableProcedure/ :
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.jav
...[truncated 1107895 chars]...
r$Handler.run(Server.java:2661)
If someone has seen this before, please advise whether there is any config which controls the truncation.
I'm not sure where you have looked, but what about this: https://builds.apache.org/job/HBase-2.0-hadoop3-tests/org.apache.hbase$hbase-server/218/testReport/junit/org.apache.hadoop.hbase.master.procedure/TestDisableTableProcedure/org_apache_hadoop_hbase_master_procedure_TestDisableTableProcedure/ ? If you look at the line just before that point, you can see ...[truncated 1107895 chars]..., which means about 1 MiB has been truncated; as far as I know, this is done by Jenkins.
Furthermore on the usual console log you can see things like this:
[ERROR] Tests run: 5, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 763.13 s <<< FAILURE! - in org.apache.hadoop.hbase.master.procedure.TestDisableTableProcedure
[ERROR] org.apache.hadoop.hbase.master.procedure.TestDisableTableProcedure Time elapsed: 749.128 s <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 780 seconds
If you need to investigate this more in depth, you need to stop the build system, look into the workspace of the build, and check the Surefire reports directory.
Apart from that, it makes no sense to start such a build with a timer trigger; better to use a commit trigger instead.
I'm trying to set a larger resolution for my tests because if the resolution is under 414 pixels the site switches to its mobile page, and every time I run my tests they fail because of the resolution. I tried to set a higher resolution, but Jenkins didn't accept it. No matter what site I try, I get the same results. I run Jenkins in combination with Selenium and Maven. This is the code I'm using in Selenium:
WebDriver driver = new ChromeDriver();
System.out.println(driver.manage().window().getSize());
driver.get("https://www.apple.com/");
driver.manage().window().setSize(new Dimension(800, 600));
System.out.println(driver.manage().window().getSize());
I also tried driver.manage().window().maximize();, which also results in a height of 272. This is what I get in the Jenkins console output:
Running GitProject.gittest.AppTest
Configuring TestNG with:
org.apache.maven.surefire.testng.conf.TestNG652Configurator#5f8ed237
Starting ChromeDriver 2.33.506106
(8a06c39c4582fbfbab6966dbb1c38a9173bfb1a2) on port 20546
Only local connections are allowed.
Oct 16, 2017 9:21:34 AM org.openqa.selenium.remote.ProtocolHandshake
createSession
INFO: Detected dialect: OSS
(400, 272)
(800, 272)
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.885 sec
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
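A common cause on a Jenkins agent without a display is that the browser window cannot actually be resized after it opens; a sketch (the headless flag and the 1280x800 size are assumptions about your setup) that requests the viewport through ChromeOptions before the browser starts:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class ResolutionCheck {
    public static void main(String[] args) {
        ChromeOptions options = new ChromeOptions();
        // On a display-less CI agent, request the size via Chrome flags
        // instead of resizing an already-open window.
        options.addArguments("--headless");             // assumption: the agent has no display
        options.addArguments("--window-size=1280,800"); // comfortably above the 414px mobile breakpoint
        WebDriver driver = new ChromeDriver(options);
        System.out.println(driver.manage().window().getSize());
        driver.get("https://www.apple.com/");
        driver.quit();
    }
}
```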
I am running Docker containers on Mesos/Marathon. I want to implement health checks; basically, I want to run a health check script. My question is: will the health check command be run in the container itself, or does it run on the slave? It is probably at the container level, since this is a per-application health check, but I would like to confirm it. I didn't find any relevant documentation that says where it is run.
Thanks
I did try an echo to /tmp/testfile via the command, and I do see the file on the slave. Does this mean it runs on the slave? I just need confirmation; any more information is useful.
The short answer is: it depends. Long answer below : ).
Command health checks are run by the Mesos Docker executor in your task container via docker exec. If you run your containers using the "unified containerizer", i.e., Docker containers without the Docker daemon, things are similar, with the difference that there is no docker exec and the Mesos executor simply enters the mnt namespace of your container before executing the command health check (see this doc). HTTP and TCP health checks are run by the Marathon scheduler, hence not necessarily on the node where your container is running (unless you run Marathon on the same node as the Mesos agent, which you probably should not be doing). Check out this page.
Now starting with Mesos 1.2.0 and Marathon 1.3, there is a possibility to run so-called Mesos-native health checks. In this case, both HTTP(S) and TCP health checks run on the agent where your container is running. To make sure the container network can be reached, these checks enter the net namespace of your container.
Mesos-level health checks (MESOS_HTTP, MESOS_HTTPS, MESOS_TCP, and COMMAND) are locally executed by Mesos on the agent running the corresponding task and thus test reachability from the Mesos executor. Mesos-level health checks offer the following advantages over Marathon-level health checks:
Mesos-level health checks are performed as close to the task as possible, so they are not affected by networking failures.
Mesos-level health checks are delegated to the agents running the tasks, so the number of tasks that can be checked can scale horizontally with the number of agents in the cluster.
Limitations and considerations
Mesos-level health checks consume extra resources on the agents; moreover, there is some overhead for fork-execing a process and entering the tasks’ namespaces every time a task is checked.
The health check processes share resources with the task that they check. Your application definition must account for the extra resources consumed by the health checks.
Mesos-level health checks require tasks to listen on the container’s loopback interface in addition to whatever interface they require. If you run a service in production, you will want to make sure that the users can reach it.
Marathon currently does NOT support the combination of Mesos and Marathon level health checks.
Example usage
HTTP:
{
"path": "/api/health",
"portIndex": 0,
"protocol": "HTTP",
"gracePeriodSeconds": 300,
"intervalSeconds": 60,
"timeoutSeconds": 20,
"maxConsecutiveFailures": 3,
"ignoreHttp1xx": false
}
or Mesos HTTP:
{
"path": "/api/health",
"portIndex": 0,
"protocol": "MESOS_HTTP",
"gracePeriodSeconds": 300,
"intervalSeconds": 60,
"timeoutSeconds": 20,
"maxConsecutiveFailures": 3
}
or secure HTTP:
{
"path": "/api/health",
"portIndex": 0,
"protocol": "HTTPS",
"gracePeriodSeconds": 300,
"intervalSeconds": 60,
"timeoutSeconds": 20,
"maxConsecutiveFailures": 3,
"ignoreHttp1xx": false
}
Note: HTTPS health checks do not verify the SSL certificate.
or TCP:
{
"portIndex": 0,
"protocol": "TCP",
"gracePeriodSeconds": 300,
"intervalSeconds": 60,
"timeoutSeconds": 20,
"maxConsecutiveFailures": 0
}
or COMMAND:
{
"protocol": "COMMAND",
"command": { "value": "curl -f -X GET http://$HOST:$PORT0/health" },
"gracePeriodSeconds": 300,
"intervalSeconds": 60,
"timeoutSeconds": 20,
"maxConsecutiveFailures": 3
}
{
"protocol": "COMMAND",
"command": { "value": "/bin/bash -c \\\"</dev/tcp/$HOST/$PORT0\\\"" }
}
Further Information: https://mesosphere.github.io/marathon/docs/health-checks.html