Disable automatic defect logging in ALM - IPAF

I am unable to handle the defects being raised in ALM. How can I disable automatic defect logging in ALM?

To disable defect logging, set the following property in the init.properties file:
test.alm.run=true
With defect logging disabled, step-level statuses are still updated in HP ALM as the test executes, but the overall execution status for the test run is logged as 'Not Completed'.
In other words, with test.alm.run=true the final test case status is not updated automatically; if you want the status updated automatically, keep test.alm.run=false.
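For reference, a minimal init.properties sketch showing both modes side by side (the property name comes from the text above; the comments are my own summary):

# Disable automatic defect logging: step-level statuses still reach HP ALM,
# but the run is logged as 'Not Completed' and the final status is not set.
test.alm.run=true

# Automatic updates: defects are logged and the final test case status
# is set by the framework.
# test.alm.run=false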

Related

SonarQube stage in Jenkins pipeline fails with 403 error

I have a pipeline job which keeps failing on Sonar Quality Gate stage with the below error:
[Bitbucket] Build result notified
org.sonarqube.ws.client.HttpException: Error 403 on http://illinXXXX:XXXXX/api/qualitygates/project_status?analysisId=XXXXXXXX
at org.sonarqube.ws.client.BaseResponse.failIfNotSuccessful(BaseResponse.java:34)
What's stranger, another pipeline from the same MS passes that stage.
Both are using the same SonarQube user and token, and the same stage syntax.
SonarQube version: 6.7.1 (build 35068).
Problem:
The execution user/group was missing the permission to run "Execute Analysis" on the project.
Solution:
Update the default permission template and push it to the projects (Bulk Apply Template).
UPDATE
I'm not entirely sure what arielma means in the comment marked as the answer, but checking the box for Execute Analysis got me past the 403 issue. Permissions are located under project settings (top right of the page, next to the project information): select Groups (near the search bar and the All | Users buttons in the list header), then tick the appropriate Execute Analysis checkbox.
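If you would rather script the fix than click through the UI, the same permission can be granted through SonarQube's web API. A sketch, where the host, token, project key, and group name are all placeholders, and 'scan' is the internal key for the Execute Analysis permission:

curl -u YOUR_ADMIN_TOKEN: -X POST \
  "http://your-sonarqube-host/api/permissions/add_group?projectKey=your-project-key&groupName=your-ci-group&permission=scan"

For many projects at once, updating the permission template and bulk-applying it (as in the solution above) remains the better route.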

Code won't execute when RESTlet status is changed to 'Released'

I've written a RESTlet that works in our Sandbox environment. However, when I move the code to production, change the status to 'Released', and set the Debug Level to 'Error', my code disappears. Specifically, I have a user event that creates a button which executes a RESTlet. I can see the button while the status is 'Testing', but as soon as the status is changed to 'Released' I can no longer see it. I haven't found much information on this; it seems like it could be a permissions error, but our Admin is out and I can't change my own permissions - is there a workaround?
I've tried changing the logging level and status levels.
Have you set the Audience on the Script Deployment record? Once you Release the Script, it is only accessible to the Employees/Roles/Groups/etc specified in the Audience.
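For context, a minimal SuiteScript 2.x sketch of the user event pattern described in the question (the button id, label, and client function name are hypothetical); note that nothing in the code changes between Testing and Released - the Audience on the deployment record is what governs who can see the button:

/**
 * @NApiVersion 2.x
 * @NScriptType UserEventScript
 */
define([], function () {
    function beforeLoad(context) {
        // Adds a button that invokes a client-side function which calls the RESTlet.
        // Visibility after release is controlled by the deployment's Audience.
        context.form.addButton({
            id: 'custpage_call_restlet',   // hypothetical button id
            label: 'Call RESTlet',
            functionName: 'callRestlet'    // hypothetical client script function
        });
    }
    return { beforeLoad: beforeLoad };
});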

Where in the logs is the Status Message of the Background (async) workflow Stop step?

I have a background workflow that ends with a Stop step. This step has an optional Status Message attribute that I populate with some details I need to be logged.
After the workflow has run I can find the workflow Status Reason (= "Succeeded") and other details in the corresponding System Job record. I would also expect to find the stored info in the Message field, but it's not there. I've tried both static and dynamic Status Messages, with no success either way.
Does anybody know where that message is stored?
Basically, the Message in the Details section of a System Job is a placeholder for error messages and logging written via ITracingService.Trace, either by a developer or by the platform when something breaks and an exception is captured.
The status reason is not a good place to log a success-scenario message; it is rather meant for the canceled scenario, to surface a custom message to the user while rolling back the transaction.
I'm not sure why you want to store it there; a custom field or even a note (annotation) will serve you better. In any case, avoid storing logs for successful workflow executions.
Check in System Jobs (screenshot not included):
Also, uncheck this value (screenshot not included):
Then you could try running the workflow synchronously; I prefer this to waiting for the async service.
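For completeness, a minimal sketch of how ITracingService.Trace (mentioned above) is typically called from a custom workflow activity; the class name and message are illustrative:

using System.Activities;
using Microsoft.Xrm.Sdk;

public class TraceDemoActivity : CodeActivity
{
    protected override void Execute(CodeActivityContext context)
    {
        // Resolve the platform tracing service from the execution context.
        ITracingService tracing = context.GetExtension<ITracingService>();

        // This output surfaces in the System Job's Details/Message area
        // when the job fails with an exception; successful runs generally
        // do not persist it, which matches the behaviour described above.
        tracing.Trace("Reached custom step at {0}", System.DateTime.UtcNow);
    }
}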

How to log errors when Play Evolutions fail?

I am just getting started with Play Evolutions and I find it pretty tough to figure out why they fail and leave the DB in an inconsistent state.
In Dev mode it will display the error on the default HTML error page, but it does not say which statement failed. That is also problematic because this particular application only exposes REST APIs that return JSON, so an HTML error is not appropriate. I have my own error handler, so I will probably end up matching on ExceptionAttachment, pulling out the content/script myself, and escaping it in the JSON error response. However, this applies only in DEV mode, since I would not want this going back to a real user in PROD.
More frustrating, it doesn't even log the statement when it fails. I can enable logging for my driver, but once the failure has occurred it is too late to go back and enable logging.
Is there any way to get a more specific error in the logs when an evolution fails?
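No accepted answer is recorded here, but following the question's own idea of matching on ExceptionAttachment, a rough sketch of a JSON-returning error handler that also logs the failing script might look like the following (assuming Play 2.x, where evolution failures surface as PlayException.ExceptionAttachment subclasses whose content carries the script; whether the exception reaches the handler unwrapped can vary by mode and version):

import javax.inject.Singleton
import scala.concurrent.Future
import play.api.{Logger, PlayException}
import play.api.http.HttpErrorHandler
import play.api.libs.json.Json
import play.api.mvc.{RequestHeader, Result, Results}

@Singleton
class JsonErrorHandler extends HttpErrorHandler {
  private val logger = Logger("evolutions")

  def onClientError(request: RequestHeader, statusCode: Int, message: String): Future[Result] =
    Future.successful(Results.Status(statusCode)(Json.obj("error" -> message)))

  def onServerError(request: RequestHeader, exception: Throwable): Future[Result] = {
    exception match {
      // Evolution failures attach the offending script as `content`.
      case e: PlayException.ExceptionAttachment =>
        logger.error(s"Evolution failed: ${e.subTitle}\n${e.content}")
      case other =>
        logger.error("Server error", other)
    }
    Future.successful(Results.InternalServerError(Json.obj("error" -> "Internal server error")))
  }
}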

TeamCity JMeter Performance Metrics Calculation: Check reference values

I have set up TeamCity with the JMeter plugin. Under Build Configuration -> Build Features, I selected 'Performance metrics calculation'. I can see that the build log is cumulative with previous execution results. However, when checking the build log for failure conditions on 404 or 500 status codes, the build always fails if at least one previous run had these response codes. Without this check, the build always says Pass even if a couple of requests fail with error codes.
Under 'Check reference values', is it possible to set reference values to check the metrics against response codes for errors? The only available options are 'Average', '90% line' and 'Max'. Any insight into how I can add options to select and search for error response codes?
Screenshots attached for reference (TeamCity JMeter Performance Metrics Collection; not included here).
Thank you.
The plugin is open source, so theoretically you should be able to add the required metric check yourself.
As a workaround I can suggest using a Response Assertion to check response codes. If you only need to test for status code 200, it is a matter of a single assertion (at the same level as the HTTP Request samplers); a JSR223 equivalent is sketched below.
See the How to Use JMeter Assertions in Three Easy Steps article for more details on conditionally failing JMeter requests.
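The Response Assertion itself is configured in the JMeter GUI, but an equivalent JSR223 Assertion (Groovy), placed at the same level as the HTTP Request samplers, shows the logic; prev and AssertionResult are variables JMeter binds into JSR223 assertions:

// Fail the sampler unless it returned HTTP 200.
def code = prev.getResponseCode()
if (code != '200') {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Unexpected response code: ' + code)
}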
