As there is demand for reliable software systems, i.e. the site should be reliable (SRE), there is a need to check whether the code is observable. Does SonarQube have any rules to check for this?
Related
Can we replace a Static Application Security Testing (SAST) tool like Fortify, Checkmarx, or IBM AppScan with SonarQube?
The SonarQube 8.1 documentation (https://docs.sonarqube.org/latest/) says it covers the security rules originating from established standards: CWE, SANS Top 25, and OWASP Top 10.
In this area no two tools are the same. When you run all of those tools on the same code you will get some overlapping findings, some new ones, and some missing ones (plus possible false positives), depending on how each tool is implemented. Given that SonarQube is relatively new in this field, I would suggest using another tool for this specific area as well. Be aware that achieving a 100% detection rate is extremely difficult, if not impossible.
No, you definitely could not. Coverage is not the only thing you should look at: you must also understand how each tool performs its detections, its rate of false positives/negatives, etc.
Fortify and Checkmarx analyze the data flow inside your code; they can take into account any validation you perform along the way. SonarQube is more rules-based than flow-based.
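To make the distinction concrete, here is a minimal, hypothetical Java snippet (all names invented for illustration, not output of any of these tools) showing the kind of finding that needs flow analysis: the tainted value only reaches the query after passing through a helper method, so a tool must track data flow across methods rather than match a local pattern.

```java
// Hypothetical example of why flow analysis matters. A rule-based
// check that only flags string concatenation directly inside an SQL
// call can miss taint that travels through intermediate helpers.
public class TaintFlowExample {

    // "Source": pretend this value came from an HTTP request parameter.
    static String readUserInput() {
        return "1 OR 1=1";
    }

    // Intermediate hop: the taint passes through unchanged.
    static String buildFilter(String id) {
        return "id = " + id;
    }

    // "Sink": the tainted value ends up inside the query string.
    static String buildQuery(String filter) {
        return "SELECT * FROM orders WHERE " + filter;
    }

    public static void main(String[] args) {
        String query = buildQuery(buildFilter(readUserInput()));
        // A flow-based tool (Fortify/Checkmarx style) traces the taint
        // from readUserInput() to buildQuery(); a purely pattern-based
        // rule sees only harmless-looking concatenations in isolation.
        System.out.println(query);
    }
}
```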
I am trying to finalize the design for my use case of a REST application.
It is like an online order application: it will accept the order details, process them,
and once processing is finished, update the status in the database.
During fulfilment, multiple tasks may be invoked. There will be another REST endpoint used to get the status of the order.
So there will be state transitions like below:
Received --> Fulfilling --> Fulfilled
I stumbled upon the spring-statemachine framework and it looks interesting. Considering the above use case,
is spring-statemachine the right choice for it? Also, is there an example project to understand
it in more detail?
Considering the above use case, is spring-statemachine the right choice for it?
Yes, Spring state machine is a good choice for this use-case.
Also, is there an example project to understand it in more detail?
Yes, there are a lot of example projects and in fact, there's one for order shipping/processing:
official order shipping recipe documentation
official order shipping github repo
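As a rough illustration of the model spring-statemachine formalizes (this is plain Java, not the framework's API; all class, state, and event names are invented for this sketch), the Received → Fulfilling → Fulfilled lifecycle boils down to a transition table plus rejection of illegal events:

```java
import java.util.EnumMap;
import java.util.Map;

// Minimal plain-Java sketch of the order lifecycle from the question.
// spring-statemachine provides the same idea (states, events, guarded
// transitions) as a full framework; this only illustrates the model.
public class OrderStateMachine {

    enum State { RECEIVED, FULFILLING, FULFILLED }
    enum Event { START_FULFILLMENT, COMPLETE_FULFILLMENT }

    private static final Map<State, Map<Event, State>> TRANSITIONS =
            new EnumMap<>(State.class);
    static {
        TRANSITIONS.put(State.RECEIVED,
                Map.of(Event.START_FULFILLMENT, State.FULFILLING));
        TRANSITIONS.put(State.FULFILLING,
                Map.of(Event.COMPLETE_FULFILLMENT, State.FULFILLED));
        TRANSITIONS.put(State.FULFILLED, Map.of());
    }

    private State current = State.RECEIVED;

    // Apply an event; illegal transitions are rejected, which is the
    // main safety property a state machine framework gives you.
    public State fire(Event event) {
        State next = TRANSITIONS.get(current).get(event);
        if (next == null) {
            throw new IllegalStateException(
                    event + " not allowed in state " + current);
        }
        current = next;
        return current;
    }

    public State getState() { return current; }
}
```

In spring-statemachine you would declare the same states, events, and transitions through its configurer adapters, and get listeners, guards, and persistence on top; the status endpoint from the question would simply return the current state.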
Is there some way to effectively monitor HTTP performance on a Jenkins master? The Monitoring plugin provides some graphs and tables, but it is really hard to understand how the end user 'feels' while working with the Jenkins web pages. Are there any page freezes? What is the maximum delay? Maybe I don't use the Monitoring plugin correctly, so is there any way to harvest such data from the provided graphs?
The Monitoring plugin is the one that I use. It tells me which pages are taking the longest time to load, hanging, etc. I'm not sure if there is something else out there that can do something like this. Maybe a third-party tool like New Relic, perhaps.
I'm working on integrating performance tests with CI/CD infrastructure. My performance testing tool is JMeter and CI server is Jenkins. Both can do their jobs, but when it comes to integrating performance tests in CI/CD pipeline, things are no longer so trivial.
To have a proper deployment pipeline, the CI server needs to know when a performance test build should be considered passed or failed. Verifying the overall mean of response times is not a good option: completely different SLAs can apply to the different types of transactions executed as part of the same JMX file. Asserting on the mean response time of a particular transaction type is a far better option, but it is still far from a perfect solution. It won't tell us, e.g., whether response times for the same type of transaction are increasing (which can indicate a memory leak) or decreasing (which can be a blessing of server-side caching). For that reason, relying only on mean response times can create false confidence in software quality.
I analysed a couple of tools, including JMeter Maven Analysis Plugin and Jenkins Performance Plugin. None of them seems to offer what I'm looking for.
In the pre-CI era, performance tests were executed late in the development lifecycle and analysed by a human being. I wonder if anyone has come across a tool advanced enough to make it possible for a CI server to reliably determine whether a perf test build should be marked as passed or failed, without a human verifying the results?
In the absence of tools offering what I'm looking for, I've decided to kick off an open source project and create one on my own, in my free time:
https://github.com/automatictester/lightning
It's still in its early days, but the core functionality is there. Now it's a matter of time to extend it with extras.
I am a Jenkins expert but only mildly skilled with JMeter. Can your JMeter results be processed by a script in order to tell when Transaction Type Z passes beyond the acceptable time limit for the run?
It looks like parsing the JMeter results with some additional logic is what you need in order to bubble up an exit(1) (or any non-zero) value.
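As a sketch of that idea (hypothetical code; it assumes the default JTL CSV layout of timeStamp,elapsed,label,… — adjust the field indices to your jmeter.properties), such a script boils down to computing a per-label mean and exiting non-zero when any transaction type breaches its SLA:

```java
import java.util.*;

// Hypothetical sketch: compute per-label mean response times from
// JMeter CSV result lines (timeStamp,elapsed,label,...) and decide
// pass/fail against per-transaction SLAs. Field positions assume the
// default JTL CSV layout; adjust them for your jmeter.properties.
public class JmeterSlaCheck {

    // Returns mean elapsed time (ms) per transaction label.
    static Map<String, Double> meanPerLabel(List<String> csvLines) {
        Map<String, long[]> acc = new HashMap<>(); // label -> {sum, count}
        for (String line : csvLines) {
            String[] f = line.split(",");
            long elapsed = Long.parseLong(f[1]);
            long[] a = acc.computeIfAbsent(f[2], k -> new long[2]);
            a[0] += elapsed;
            a[1]++;
        }
        Map<String, Double> means = new HashMap<>();
        acc.forEach((label, a) -> means.put(label, (double) a[0] / a[1]));
        return means;
    }

    // True if every label's mean is within its SLA (ms).
    static boolean passes(Map<String, Double> means, Map<String, Long> slas) {
        return slas.entrySet().stream()
                .allMatch(e -> means.getOrDefault(e.getKey(), Double.MAX_VALUE)
                        <= e.getValue());
    }

    public static void main(String[] args) {
        // Sample lines standing in for a real JTL file.
        List<String> lines = List.of(
                "1000,120,Login", "1001,180,Login", "1002,900,Search");
        Map<String, Double> means = meanPerLabel(lines);
        Map<String, Long> slas = Map.of("Login", 200L, "Search", 1000L);
        // A non-zero exit code marks the Jenkins build as failed.
        System.exit(passes(means, slas) ? 0 : 1);
    }
}
```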
TL;DR: Basically what I am looking for is a way to get a list of all Sonar rules that have 0 issues raised. I could then move all of those to blocker and protect myself from someone introducing those issues in the future.
My company is using sonar and static analysis to help guide refactoring and development of a sizable legacy codebase (~750K LOC). We have had a lot of success by lowering the severity of most rules and then choosing a smaller set of rules to promote up to blocker or critical as we find real issues in the code. This has kept the number of issues we are trying to address at a time manageable so we can actually feel like we are making progress and not drown in the noise of legacy issues.
In particular, when we have been bitten by a field or QA issue that Sonar could have detected, we turn that rule up to BLOCKER and fix every instance of it. These blockers break the build, and we are now assured that we won't add a new instance of the same issue again. This has worked great and has kept a number of what would have been nasty bugs from slipping through.
The big problem with that methodology is that we need to have encountered an example of each class of mistake at least once in the codebase to learn that it is important and should be made a blocker. Any issues we haven't already encountered will still be at their default severity; I'd like to move all of them up to BLOCKER now so we notice the day they are added.
Edit: Currently we are using 3.7.3 but we are about to upgrade to 5.X.
There are 2 ways to do this:
1- The difficult way is to query the SonarQube database. You have to understand the tables and write an SQL query based on which DB backs your SonarQube. You can find some reference here - OR here
2- I have never tried this method myself, but it should work: you can use the SonarQube Web Service API. There is also a Web Service Java Client. References:
link1, link2, link3
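As an illustration of approach 2 (hypothetical code; it assumes you have already fetched the active rule keys from api/rules/search and per-rule issue counts, e.g. from the rules facet of api/issues/search, using any HTTP client — endpoint names may vary across SonarQube versions, so check the Web API docs of your instance), finding the zero-issue rules is then a simple set filter:

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical sketch: combine data already fetched from the SonarQube
// web services to find active rules with zero open issues. The rule
// keys below are just sample values for illustration.
public class ZeroIssueRules {

    // Rules that are active but never violated: candidates for BLOCKER.
    static Set<String> zeroIssueRules(Set<String> activeRuleKeys,
                                      Map<String, Integer> issuesPerRule) {
        return activeRuleKeys.stream()
                .filter(rule -> issuesPerRule.getOrDefault(rule, 0) == 0)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<String> active = Set.of("squid:S1118", "squid:S106", "squid:S1068");
        Map<String, Integer> counts = Map.of("squid:S106", 42);
        // The two rules with no issues yet could be raised to BLOCKER
        // now, guarding against future violations for free.
        System.out.println(zeroIssueRules(active, counts));
    }
}
```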