SonarQube - ignore third-party JavaScript EXCEPT for the Security section?

Trying SonarQube 6.0.
Hi SonarQube experts,
I've used the sonar.exclusions property in the past, but it's an all-or-nothing kind of deal.
Given the following sections in the SonarQube dashboard:
Reliability
Security
Maintainability
Duplications
Size
Complexity
Documentation
Issues
Is there a way to cross-filter selected directories (such as those containing third-party JavaScript libraries) so they are excluded from everything except, say, Security?
The use case is, when configuring Quality Gates, to not worry about the maintainability of third-party JavaScript libraries, but to very much worry about their vulnerabilities.

Behind this request, I have the feeling that you're looking for a tool able to detect usage of APIs with known MITRE CVE vulnerabilities. If that is the case, then SonarQube won't be of any help in covering this need.
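That said, for the narrower part of the question (keeping third-party directories out of most metrics without dropping them from analysis entirely), SonarQube's analysis parameters let you exclude issues by rule pattern and path instead of excluding whole files. A minimal sonar-project.properties sketch; the paths and rule pattern below are illustrative assumptions, so check which of these keys your SonarQube version actually supports:

    # Option 1: all-or-nothing - drop third-party code from the analysis completely.
    #sonar.exclusions=src/main/webapp/lib/**

    # Option 2: keep the files in the analysis, but ignore selected rules on them
    # via issue exclusions scoped by rule key pattern and path.
    # e1: hypothetical pattern covering the maintainability rules you want muted.
    sonar.issue.ignore.multicriteria=e1
    sonar.issue.ignore.multicriteria.e1.ruleKey=javascript:S1*
    sonar.issue.ignore.multicriteria.e1.resourceKey=src/main/webapp/lib/**

The catch is that the rule key pattern has to enumerate what you want ignored; there is no built-in "everything except Security" switch, which is essentially what the question asks for.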

Related

Can we replace Static Application Security Testing (SAST) tools like Fortify, Checkmarx, and IBM AppScan with SonarQube?

Can we replace SAST tools like Fortify, Checkmarx, and IBM AppScan with SonarQube?
The SonarQube 8.1 documentation (https://docs.sonarqube.org/latest/) says it covers security rules originating from established standards: CWE, SANS Top 25, and OWASP Top 10.
In this area no two tools are the same, so when you run all of those tools on the same code you will get some similar findings, some new ones, and some missing (and maybe false positives), depending on how each tool is implemented. Given that SonarQube is relatively new in this field, I would suggest also using another tool for this specific area. Be aware that achieving a 100% detection rate is extremely difficult, if not impossible.
No, you definitely could not. Coverage alone is not what you should look at with Sonar. You must understand how each tool performs its detections, its rate of false positives/negatives, etc.
Fortify and Checkmarx analyze the data flow inside your code, so they can take into account the controls you apply before a value reaches a sensitive operation. SonarQube is more rule-based than flow-based.
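To make that distinction concrete, here is a toy Java example of the kind of source-to-sink data flow that flow-based tools are built to trace; the class and method names are purely illustrative:

    import java.sql.Connection;
    import java.sql.Statement;

    public class OrderDao {
        private final Connection connection;

        public OrderDao(Connection connection) {
            this.connection = connection;
        }

        // 'customerId' arrives from an HTTP request (the taint source) and flows,
        // unvalidated, into a SQL string executed against the database (the sink).
        public void loadOrders(String customerId) throws Exception {
            String sql = "SELECT * FROM orders WHERE customer_id = '" + customerId + "'";
            try (Statement stmt = connection.createStatement()) {
                stmt.executeQuery(sql);   // SQL injection if customerId is attacker-controlled
            }
        }
    }

A flow-based engine tries to prove whether sanitization happens anywhere on the path from the source to the sink; a purely rule-based check mostly flags the concatenation pattern wherever it sees it.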

Tracking API/event changes between different microservice versions before deployment

I work in DevOps for a fairly large company that is in the process of transitioning to microservices. This is a new area for most of the people involved, and some of the governing requests seem like bad practice to me, but I don't have the expertise to argue otherwise.
The request is to generate a report before deploying that lists any new APIs/events (Kafka is our messaging service) in a microservice.
The path being recommended is for devs to follow a style guide and then scrape the source code during the CI/CD pipeline to generate a report that can be compared to previous reports to identify any new APIs.
This seems backwards and unsustainable, but I've been unable to find another solution that would satisfy their request. I've recommended deploying to dev first and then using a tracing tool to identify any API changes or event subscriptions, but they insist on having the report before deploying.
I'm hoping for any advice on best practices to accomplish this.
Tracing and detecting version changes is definitely over-engineering. What's simpler, as #zenwraight has mentioned, is to version your APIs. While tracing through services to explore the different versions and schemas could be a potential solution, it requires a lot more investment upfront, and if that's not the bread and butter of the company, I would rather use a vendor product that supports something like this.
If discovery is needed, I would recommend publishing internal API docs using a tool like Swagger so that you can search for an API you can consume.
And finally, to support moving to different versions, I would recommend having an API onboarding process for the services, so that teams can notify other teams using specific versions that their services are coming to the end of their lifecycle and that they will need to migrate to newer ones.
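If the services already publish an OpenAPI/Swagger document, the "report before deploying" requirement can be reduced to diffing the spec produced by the current build against the spec of the version that is already deployed. A rough Java sketch, assuming Jackson is on the classpath and that each build drops an OpenAPI JSON file somewhere (the file names are illustrative):

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.io.File;
    import java.util.HashSet;
    import java.util.Set;

    public class ApiDiffReport {

        // Collect "METHOD /path" entries from the "paths" section of an OpenAPI document.
        static Set<String> endpoints(JsonNode spec) {
            Set<String> result = new HashSet<>();
            JsonNode paths = spec.path("paths");
            paths.fieldNames().forEachRemaining(path ->
                paths.get(path).fieldNames().forEachRemaining(method ->
                    result.add(method.toUpperCase() + " " + path)));
            return result;
        }

        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            // Spec of the currently deployed version vs. the spec produced by this build.
            Set<String> deployed = endpoints(mapper.readTree(new File("openapi-deployed.json")));
            Set<String> candidate = endpoints(mapper.readTree(new File("openapi-build.json")));

            candidate.removeAll(deployed);
            candidate.forEach(endpoint -> System.out.println("NEW ENDPOINT: " + endpoint));
        }
    }

The same idea extends to Kafka events if topics and payloads are described in a machine-readable schema (an AsyncAPI file or a schema registry, for example), which keeps the report inside CI without scraping source code against a style guide.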

Pentest - verifying the checklist after checks are done

After pentesting and going through the checklist, how can I reassure my client that these checks were done and the vulnerabilities patched? (Of course, for something like SQLi, demonstrating it is straightforward.)
But is there somewhere to verify this, or something along those lines?
Thanks
For checks that are done, you can provide the reports generated by tools or by manual testing (depending on the vulnerability type) for those specific checks.
For patched vulnerabilities, you will need to re-test the platform and provide the updated reports, generated by tools or by manual testing, whose changed output indicates that the vulnerability is no longer present.
For further reassurance you can also include the steps to reproduce each vulnerability in the report, so that if the client wants to test it themselves they can do so (and be assured that it was fixed).
You need to describe all of the methodologies used, such as OSSTMM, OWASP, and NIST. It is also very important to describe the perimeter tested (web forms, APIs, frameworks, network protocols, etc.).
You can also create a topic for every step tested, using the OWASP Top 10:
Injection
Broken Authentication
Sensitive Data Exposure
XML External Entities (XXE)
Broken Access Control
Security Misconfiguration
Cross-Site Scripting (XSS)
Insecure Deserialization
Using Components with Known Vulnerabilities
Insufficient Logging and Monitoring
This way you ensure that your test was compliant.

Finding rules with 0 instances in Sonar?

TL;DR: Basically, what I am looking for is a way to get a list of all Sonar rules that have 0 issues raised. I could then move all of those to blockers and protect myself against someone introducing such an issue in the future.
My company is using Sonar and static analysis to help guide refactoring and development of a sizable legacy codebase (~750K LOC). We have had a lot of success by lowering the severity of most rules and then choosing a smaller set of rules to promote up to blocker or critical as we find real issues in the code. This has kept the number of issues we are trying to address at any one time manageable, so we actually feel like we are making progress rather than drowning in the noise of legacy issues.
In particular, when we have been bitten by a field or QA issue that Sonar could have detected, we turn that rule up to BLOCKER and fix every instance of it. These blockers break the build, so we are now assured that we won't add a new instance of the same issue again. This has worked great and has kept a number of what would have been nasty bugs from slipping through.
The big problem with that methodology is that we need to have hit an example of each of those classes of mistake at least once in the codebase before we learn that it is important and should be made a blocker. Any issues we haven't already encountered will still be at their default level; I'd like to move all of them up to BLOCKER now so we notice the day they are added.
Edit: Currently we are using 3.7.3, but we are about to upgrade to 5.x.
There are 2 ways to do this:
1- The difficult way is to query the SonarQube database. You have to understand the tables and write a SQL query based on which DB backs your SonarQube instance. You can find some references here OR here.
2- I have never tried your exact approach, but it should work. You can use the Sonar Web Service API. There is also a Web Service Java Client. References:
link1, link2, link3
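For option 2, the overall shape is: list the rules known to the server, ask the issue search endpoint for a per-rule facet, and take the set difference. A rough sketch against a 5.x-style web API using plain Java and Jackson; the endpoint parameters and JSON field names may differ between SonarQube versions, so treat this as an outline rather than a drop-in tool:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.HashSet;
    import java.util.Set;

    public class UnusedRulesFinder {

        static JsonNode get(HttpClient client, ObjectMapper mapper, String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            return mapper.readTree(response.body());
        }

        public static void main(String[] args) throws Exception {
            String base = "http://localhost:9000";   // your SonarQube server
            HttpClient client = HttpClient.newHttpClient();
            ObjectMapper mapper = new ObjectMapper();

            // 1. All rules known to the server (paging and quality-profile filtering omitted).
            Set<String> allRules = new HashSet<>();
            JsonNode rules = get(client, mapper, base + "/api/rules/search?ps=500");
            rules.path("rules").forEach(rule -> allRules.add(rule.path("key").asText()));

            // 2. Rules that currently have issues, via the per-rule facet of the issue search.
            // NOTE: facets are usually truncated to the most frequent values, so a large
            // instance may need paging or per-rule queries instead.
            Set<String> rulesWithIssues = new HashSet<>();
            JsonNode issues = get(client, mapper, base + "/api/issues/search?facets=rules&ps=1");
            issues.path("facets").forEach(facet -> {
                if ("rules".equals(facet.path("property").asText())) {
                    facet.path("values").forEach(v -> rulesWithIssues.add(v.path("val").asText()));
                }
            });

            // 3. Candidates to promote to BLOCKER: rules with no issues anywhere.
            allRules.removeAll(rulesWithIssues);
            allRules.forEach(System.out::println);
        }
    }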

Suitable frameworks for an ERP-like application

I want to create a production management system to be used by a small manufacturing firm. The system will allow documenting the different stages in the manufacturing of equipment. The requirements are as follows:
1. Non-browser-based interface. It needs to be something Swing- or AWT-based. While I understand the convenience of implementing a browser-based solution, the business owner insists on a non-browser interface.
2. Accessed from multiple systems. These systems will perform CRUD operations on the central system (thin client?).
3. The application will not have more than 3 concurrent users.
I need some advice regarding what would be a good path for this kind of application. Currently, I'm thinking of using Griffon with RMI. However, I don't have much development experience. I've read a bit about Apache River (Jini) too. Would it be a good idea to use Griffon with RMI?
Please provide some advice. Thanks.
EDIT: After some reading, I've decided to use more mainstream frameworks, so Griffon is not an option. How about Jini (Apache River) or OSGi (Apache Felix)?
Hmm, how is it that a project which only recently moved out of the incubation phase is considered mainstream versus a project that's been used in production for more than 3 years now? Anyway, Apache River gives you access to Jini technology and nothing more, meaning you can't achieve item #1 on your list with River alone. River may use RMI for accessing remote resources; however, you can also use RMI directly, or try out DRMI, Kryonet, Hessian/Burlap, Spring's HTTP Invoker, Protocol Buffers, Avro/Thrift, REST, SOAP, ZMQ, and many more.
Even if you choose one of these options and/or River, you still have to define the following things:
application structure (file structure and runtime behavior)
build setup
dependency management
testing profiles
packaging
deployment strategies
These things and more are what Griffon brings to the table. As you may have noticed, the framework allows you to build up applications by adding plugins, reducing the amount of time you must spend hunting down dependencies, setting up the bootstrap mechanism, and getting things done. On the subject of remoting technologies, have a look at the different options Griffon has to offer: http://artifacts.griffon-framework.org/tags/plugin/remoting
What's more, you can also combine OpenDolphin (http://open-dolphin.org/dolphin_website/Home.html) with Griffon. There's even an example application in the OpenDolphin repository showing a full client-server application (built with Griffon, Grails, and OpenDolphin): https://github.com/canoo/open-dolphin/tree/master/dolphin-griffon-crud
With what seems to be your current understanding of the problem, I would not recommend OSGi, especially for a small manufacturing firm (possible maintenance issues, depending on the personnel).
The main reason I wouldn't advocate Jini or OSGi in your case is what you said yourself:
However, I don't have much development experience.
Jini (Apache River) is a viable option as long as you fully understand the concepts of the LookupService, service registration, etc. There's a lot of RMI going on here, with possible firewall implications...
OSGi is not difficult, but you may have issues deciding how to structure your application as well as how to interact with services, etc.
Try to stick to the simplest approach you can handle for the implementation (with a flexible design in mind): make it work, then improve it.
There are simple web service options such as Spring Remoting (over HTTP/HTTPS, for example), unless Spring introduces too many concepts and headaches for your app.
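Since plain RMI keeps coming up as the lowest-ceremony option for a 3-user thin-client setup, here is a minimal sketch of the moving parts: a shared remote interface, a server that exports and registers an implementation, and a client lookup (the names and the placeholder data are illustrative, and a real application would add proper error handling and security):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;
    import java.util.List;

    // Shared between client and server (must be on both classpaths).
    interface ProductionService extends Remote {
        List<String> listManufacturingStages(String equipmentId) throws RemoteException;
    }

    // Server side: export the implementation and register it under a well-known name.
    class ProductionServer implements ProductionService {
        @Override
        public List<String> listManufacturingStages(String equipmentId) {
            return List.of("cutting", "assembly", "testing");   // placeholder data
        }

        public static void main(String[] args) throws Exception {
            ProductionService stub =
                    (ProductionService) UnicastRemoteObject.exportObject(new ProductionServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("ProductionService", stub);
            System.out.println("ProductionService bound, waiting for clients...");
        }
    }

    // Client side (for example, called from a Swing action): look up the stub and call it.
    class ProductionClient {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.getRegistry("server-host", 1099);
            ProductionService service = (ProductionService) registry.lookup("ProductionService");
            System.out.println(service.listManufacturingStages("EQ-42"));
        }
    }

This is roughly the plumbing that Griffon's remoting plugins or Spring Remoting would otherwise hide from you, which is why the "simplest approach you can handle" advice above is worth taking seriously.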
