Veracode scan: How to get the total expected "Effort to Fix" for all the issues found in a scan?

I am analysing the Veracode scan reports for some applications. I have the PDF reports, but these do not include the total "Effort to Fix" for all the issues found.
Can someone guide me on how to obtain this total?
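In case it helps while waiting for an answer: the PDF doesn't total this for you, but if you can export the detailed-report XML for the scan (e.g. via the detailedreport.do API), you can sum the per-flaw effort values yourself. A minimal sketch in Python, assuming the XML exposes a per-flaw remediation_effort attribute (verify the attribute name against your report's schema; the filename is a placeholder):

# Minimal sketch: total a per-flaw effort attribute from a Veracode
# detailed-report XML export. The attribute name "remediation_effort" and
# the filename are assumptions; verify both against your own report.
import xml.etree.ElementTree as ET

tree = ET.parse("detailedreport.xml")  # placeholder path to the XML export
root = tree.getroot()

total_effort = 0
flaw_count = 0
for elem in root.iter():
    # Match <flaw> elements regardless of the XML namespace prefix.
    if elem.tag.endswith("flaw") and "remediation_effort" in elem.attrib:
        total_effort += int(elem.attrib["remediation_effort"])
        flaw_count += 1

print(f"Flaws with an effort value: {flaw_count}")
print(f"Total effort (sum of per-flaw values): {total_effort}")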

Related

SonarQube report showing a different coverage than Gitlab RSpec (Ruby) report

Just as the title implies, we're dealing with an "issue" regarding the code coverage shown in SonarQube for our Ruby project when we compare it to the GitLab pipeline job execution. The pipeline run reports 91.21% coverage, while in SonarQube we're getting a different figure.
This is weird, given that they are both supposedly reading the same report.
We're using SimpleCov 0.21.1 and only made some minor adjustments to the report configuration in order to avoid using the "branches" feature introduced in SimpleCov 0.18.
Maybe these are two different values that aren't related at all and I'm getting it wrong? Any help or guidance would be greatly appreciated :')
Thanks a lot in advance, and happy holidays to everyone!
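One way to see which figure matches the shared report is to recompute line coverage straight from SimpleCov's .resultset.json. A minimal sketch in Python, assuming the post-0.18 format where each file maps to a "lines" array and the suite key is "RSpec" (adjust both, and the path, if your setup differs):

# Minimal sketch: recompute overall line coverage from SimpleCov's
# .resultset.json so it can be compared with the GitLab and SonarQube figures.
# Assumes the SimpleCov 0.18+ format: {"RSpec": {"coverage": {file: {"lines": [...]}}}}.
import json

with open("coverage/.resultset.json") as f:  # placeholder path
    resultset = json.load(f)

covered = relevant = 0
for file_data in resultset["RSpec"]["coverage"].values():
    for hits in file_data["lines"]:
        if hits is None:   # line not relevant for coverage
            continue
        relevant += 1
        if hits > 0:       # line executed at least once
            covered += 1

print(f"Line coverage: {100.0 * covered / relevant:.2f}% "
      f"({covered}/{relevant} relevant lines)")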

Are the latest Jepsen findings for ElasticSearch valid for the latest versions?

There are some issues that Jepsen found in Elasticsearch years ago:
https://aphyr.com/posts/323-call-me-maybe-elasticsearch-1-5-0
Any idea whether they still apply to recent Elastic versions?
Update: Elastic has a peculiar approach to working on tickets: closing them without a fix, or linking duplicates with deep nesting and then closing those without a fix or with a partial fix. It's hard to trace whether the actual issue has really been fixed or not.
For example, this ticket and all related ones.
With this approach it's very hard to understand what has been fixed. I will keep this question open in case somebody is able to trace Elastic's fixes for the Jepsen issues.
No, those findings are a long way out of date. There is a page in the Elasticsearch manual that gives a good summary of the status of the issues found in that blog post and elsewhere.

How to analyze a JMeter dashboard report? Is there any open-source tool or framework available that helps analyze JMeter dashboard reports after a test run?

I found some solutions that help analyze a single graph.
Free, open-source solutions to analyze a single graph:
JMeter Plugins - look into the custom graphs in this package;
JMeter Result Analysis Plugin
JWeter tool for log analysis & visualization
I am looking for a solution or open-source tool that helps analyze the JMeter dashboard report, which has 23 graphs.
I am not sure I fully understand the question, but I'll try to answer.
Have you generated the HTML report as described here?
https://jmeter.apache.org/usermanual/generating-dashboard.html
This report provides the most complete information for analyzing a load test.
If your question is about generating it, then I have pointed you to the manual.
If not, what is your issue with the HTML report, and why do you write "some of them provide better results reporting out-of-box than JMeter's original ones"?
AFAIK, the HTML report provides the same information with even more graphs and dashboards.
Edit:
Regarding automated analysis of the report, this does not currently exist, and it is usually the role of a performance/load-test expert.
Maybe in the future, through AI, that might exist. For now, try to understand what information each graph provides and correlate between them, or ask a performance tester for assistance.
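If you want a rough, scriptable summary to complement the dashboard (which can be regenerated from an existing results file with jmeter -g results.jtl -o <output dir>, as the linked manual describes), here is a minimal sketch. It assumes a CSV-format results file with the default label, elapsed and success columns; the filename results.jtl is a placeholder:

# Minimal sketch: per-label summary of a JMeter CSV results file.
# Assumes the default CSV headers (timeStamp, elapsed, label, success, ...);
# adjust the column names if your jmeter.properties differ.
import pandas as pd

results = pd.read_csv("results.jtl")  # placeholder path to the results file

# The "success" column holds true/false strings; turn it into booleans.
results["success"] = results["success"].astype(str).str.lower().eq("true")

summary = results.groupby("label").agg(
    samples=("elapsed", "size"),
    avg_ms=("elapsed", "mean"),
    p90_ms=("elapsed", lambda s: s.quantile(0.90)),
    p95_ms=("elapsed", lambda s: s.quantile(0.95)),
    max_ms=("elapsed", "max"),
    error_rate=("success", lambda s: 1.0 - s.mean()),
)

print(summary.sort_values("p95_ms", ascending=False))

This won't replace reading the graphs, but it gives per-transaction numbers you can diff between runs.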

Difference between shown issues and value in UI

We've got a strange issue. For branch reintegration, we first analysed the base branch and then the branch to reintegrate, using the same project key but different versions.
With both results in, we get a strange outcome: the dashboard shows a different number of new issues than the issue overview for the given project.
When you click, e.g., on the 9 new blocker issues, you get this number of new issues:
Is there any reason for the difference in reported issues? Is this a fault in SonarQube, or is there a reason for this result?
We are using SonarQube 5.4 on a Java project.
Thanks for your help.
The dashboard values are calculated at analysis time, so when your analysis completed there were 9 blocker issues. The numbers on the Issues page are calculated on the fly, so the difference means that someone has since done one of the following (a quick way to check the live counts is sketched after this list):
Resolved issues (Won't Fix or False Positive)
Downgraded issues
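As a rough cross-check (not an official SonarQube feature, just a sketch against the public web API; the server URL and project key below are placeholders, and parameter names can differ slightly between SonarQube versions), you can ask the api/issues/search endpoint for the live count the Issues page uses:

# Minimal sketch: fetch the live count of open blocker issues via
# SonarQube's api/issues/search web service (the same data the Issues
# page computes on the fly). Server URL and project key are placeholders.
import requests

SONAR_URL = "http://localhost:9000"
PROJECT_KEY = "my:project"  # placeholder project key

resp = requests.get(
    SONAR_URL + "/api/issues/search",
    params={
        "componentKeys": PROJECT_KEY,
        "severities": "BLOCKER",
        "resolved": "false",  # exclude Won't Fix / False Positive resolutions
        "ps": 1,              # we only need the total, not the issue list
    },
)
resp.raise_for_status()
print("Open blocker issues right now:", resp.json()["total"])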

How to Validate a Cognos report in debug mode (element-by-element or step-by-step)

We recently migrated from Cognos 10.1.1 to Cognos 10.2.1.1 (10.2.1 plus Fix Pack 1). Some of our existing reports now fail validation.
From the cogserver.log file, it looks like the BIBUS process is crashing while validating the report.
We are working with IBM tech support via a PMR.
I wanted to ask whether anyone here knows if it is possible to validate a report step by step, so that I can get some information or logs on exactly which element in our report is causing the issue. In other words, is it possible to run the report validation in a debug mode somehow?
Oh, what a wonderful feature that would be, but to my knowledge nothing like that exists at all. You could try setting the logging on your dispatcher(s) to the maximum to see if you can get any more informative errors.
I would start by trying to view the tabular data for each query individually. If you can identify which query (or queries) are causing your problems, then you can just remove items from the query until it doesn't fail, at which point you should have a pretty good idea of what the source of the problem is.
If that doesn't work, I would just start ripping major chunks of the report out and seeing if you can get it to run. For example, if you have a report with 4 charts, delete half of them and try your report. Then revert back to the original report and delete the other half. Get it to work, and then start removing stuff from the failing half until you can narrow it down to your problem.
It's kind of slow, but these approaches have always worked for me.
On a side note, we're about to make the same upgrade, I'd be interested in hearing what you learn.
EDIT:
Oh, forgot. Make sure you disable DQM and test your reports that way, if you haven't.
Unfortunately, there's no way to debug step by step. We finally got the core dumps for the crashes and sent them over to the IBM folks, and they identified it as a known bug in 10.2.1.1. So now we are on 10.2.1.2 (applied Fix Pack 2), which solves the issue.
