Static code analysis of Dockerfiles? - sonarqube

I was wondering if there is any tool support for analyzing the content of Dockerfiles. Syntax checks of course, but also highlighting references to older packages that need to be updated.
I'm using SonarQube for static code analysis of other code, but if it does not support this (I could not find any information that it does), is there any other tool that does?

Although this question is 2 years old, there are two ways to do static analysis of a Dockerfile:
using FromLatest
using Hadolint
Option 2 is generally preferable, since it can be run as an automated step inside CI/CD pipelines.
Hadolint also provides a way to exclude messages/errors using a ".hadolint.yml" file.
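As an illustration, a minimal config sketch (DL3008 and DL3018 are real Hadolint rule codes, but which rules you ignore depends on your project; note that Hadolint's documentation refers to the file as .hadolint.yaml):

```yaml
# .hadolint.yaml — example rule exclusions
ignored:
  - DL3008   # "Pin versions in apt get install"
  - DL3018   # "Pin versions in apk add"
```

Running hadolint against your Dockerfile will then skip those warnings.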

Related

How to bypass legacy code in sonarQube analysis?

We have legacy code (10 years old) that I want to exclude from SonarQube analysis. Alternatively, SonarQube should scan only the recent changes I made to the legacy code, or new files. How can I achieve this? I found that the Cutoff plugin has been deprecated since SonarQube 4.0, and we are using SonarQube 7.5.
Please help.
SonarScanner doesn't support analyzing only part of the source code (for example, only code newer than a specified date); it always scans everything. If you keep your legacy code in different packages than the new code, you can configure an exclusion filter to ignore the old code: set the sonar.exclusions parameter (a comma-separated list of ignored paths). You can read more about Narrowing the Focus in the official documentation.
Be aware that the proposed solution is not recommended. SonarScanner is able to find many vulnerabilities, which should be fixed in legacy code as well. That can protect your company against material (e.g. monetary) and non-material (reputational) losses. The recommended way is to scan all code and use a Quality Gate to prevent introducing new issues. You can read more about it in Fixing the Water Leak.
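For illustration, a minimal sonar-project.properties sketch (the paths here are hypothetical; adjust the patterns to wherever your legacy code actually lives):

```properties
# sonar-project.properties
sonar.projectKey=my-project
sonar.sources=src
# Comma-separated glob patterns; matching files are excluded from analysis
sonar.exclusions=src/legacy/**/*, **/generated/**
```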

How to configure PythonAnalyzer to look for standard library typings?

I am creating a PythonAnalyzer using the following code:
var interpreterFactory = InterpreterFactoryCreator.CreateAnalysisInterpreterFactory(
    PythonLanguageVersion.V36.ToVersion());
var analyzer = PythonAnalyzer.Create(interpreterFactory);
Later on I also create and analyze a simple python module, that looks like this:
name = input('What is your name?\n')
print('Hi, %s.' % name)
Then I do module.Analysis.GetValuesByIndex("name", 4).
At this point I expected the value to be 'str', because that's what Visual Studio shows when I open the same file in it. However, I get 'object' instead. So it seems that PythonAnalyzer, when constructed as above, lacks some important information about where to look for the standard library and/or its types.
Unfortunately, the documentation on PythonAnalyzer is lacking, so I was hoping the community could help understand how to configure it properly.
Congratulations on getting so far :)
What you're hitting here is the fact that CreateAnalysisInterpreterFactory is really intended for "pure" cases, where you have access to all the code that you're trying to analyze and nothing needs to be looked up. It is mostly used for the unit tests, or as a fallback when no copies of Python are installed. Depending on precisely which version of PTVS you are using, the bare information you're getting is either coming from DefaultDB\v3\python.pyi or CompletionDB\__builtin__.idb, both of which are somewhat lacking (by design).
Assuming you have a copy of Python installed, I would suggest creating an instance of InterpreterConfiguration with all of its details, and passing that to CreateInterpreterFactory (without "Analysis").
If you're on the latest sources (strongly recommended), this may run the interpreter in the background to collect information from it (you can control caching of this info with the DatabasePath and UseExistingCache members of InterpreterFactoryCreationOptions). If you are using the older version still, you'll need to trigger a completion DB regeneration or have one that you've created through VS.
And a final caveat: this part of PTVS is currently under some pretty heavy development at time of writing, so you'll either want to keep updating the version you're working against or stick with a slightly older one. Also feel free to post questions like this on the GitHub site, as while this is technically public API, it's barely documented at all and so the best help will come from the dev team.
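As a rough sketch of the suggestion above, assembled only from the type and member names mentioned in this answer (the exact constructor parameters of InterpreterConfiguration are assumptions, not the verified PTVS API; check the PTVS sources for the current signatures):

```csharp
// Hypothetical sketch — parameter names and values are assumptions.
var config = new InterpreterConfiguration(
    id: "Global|PythonCore|3.6",
    description: "Python 3.6 (64-bit)",
    pythonExePath: @"C:\Python36\python.exe",
    version: PythonLanguageVersion.V36.ToVersion());

// CreateInterpreterFactory (without "Analysis") can run the real interpreter
// in the background to collect standard-library type information.
var factory = InterpreterFactoryCreator.CreateInterpreterFactory(
    config,
    new InterpreterFactoryCreationOptions
    {
        DatabasePath = @"C:\Temp\AnalysisCache", // where collected info is cached
        UseExistingCache = true
    });

var analyzer = PythonAnalyzer.Create(factory);
```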

Sharing set up and tear down methods across multiple selenium projects

So, I have multiple selenium test projects in separate VS solutions and they all have the same Setup() methods which are executed before scenarios are run as well as all having the same TearDown() methods for after. Currently if a change is required for these methods, they have to be updated separately so I was looking in to centralising these methods to be used in all of the test projects/solutions.
I'm relatively new so apologise in advance but does anyone have experience of this with suggestions on approaches I could take? Is it even possible? My tests do not currently run in parallel so is this something I'd need to look in to?
What if you change the code to use [OneTimeSetUp] and [OneTimeTearDown]?
In my opinion, you should create one class that contains the setup and tear-down logic, then have each test class inherit from it, like this: public class HREmployeeList : Setup
Hopefully this helps; either way, check the link below:
http://toolsqa.com/selenium-webdriver/c-sharp/how-to-write-selenium-test-using-nunit-framework/
One thing you could do is create a new project that contains the setup and tear-down functions, then include the compiled DLL in all the other projects. If you need to change setup/teardown, you make the change once, compile a new DLL, and the change is passed on to all the other projects.
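Combining the two suggestions above, a minimal sketch of a shared NUnit base class (class names, the URL, and the timeout are hypothetical; the base class would live in the shared project/DLL):

```csharp
using System;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

// Lives in the shared project; every test project references its DLL.
public abstract class SeleniumTestBase
{
    protected IWebDriver Driver;

    [SetUp]
    public void BaseSetUp()
    {
        Driver = new ChromeDriver();
        Driver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(10);
    }

    [TearDown]
    public void BaseTearDown()
    {
        Driver?.Quit();
    }
}

// In each test project, test classes just inherit the shared behaviour.
public class HREmployeeList : SeleniumTestBase
{
    [Test]
    public void PageLoads()
    {
        Driver.Navigate().GoToUrl("https://example.test/employees");
        Assert.That(Driver.Title, Is.Not.Empty);
    }
}
```

A change to BaseSetUp or BaseTearDown then propagates to every solution that references the shared DLL.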
To keep a long story short: you need to create a meta framework.
Conceptually, meta frameworks provide a method of solving the problem of automating multiple pieces as part of a larger automation strategy. Testers define independent utility classes which can be used generically with any automation tool and can be reused between different automation projects. The framework provides an abstraction layer that allows the separate automation pieces to be executed and have their results reported back in a standardized way.
I have a post on the topic, so feel free to take what you need from there.
Since you've tagged Visual Studio, I'll first share the approach my team used for sharing common functionality across test projects. What you need is a private NuGet server: each team compiles and maintains a NuGet package for the service it provides, for example Selenium code, API calls, etc.
The next solution, probably closer to your case, would be to use git submodules and share the test-harness engine between your projects.
Both approaches benefit from fixture setup patterns like Shared Fixture Construction.

Which antlr4-runtime?

I'm trying to add dependency antlr4-runtime in eclipse. It shows two instances to choose from.
com.tunnelvisionlabs::antlr4-runtime (324566 b)
org.antlr::antlr4-runtime (242694 b)
These files are of different size.
Which one should I use?
The reference runtime which is described in The Definitive ANTLR 4 Reference book with JavaDocs posted at antlr.org is the org.antlr::antlr4-runtime.
The other build is a highly experimental branch which is heavily optimized for use in Tunnel Vision Labs' IDE products. This build deviates from the documented version in many ways, so you may be on your own if you run into problems.

Hudson/Jenkins source code metrics?

Are there any useful plugins for source code metrics for Hudson/Jenkins?
I'm looking for total lines of code, total number of tests, classes, etc. with graphing.
Does anything like this exist?
Are you using Java? If so, Sonar should certainly be your first port of call. It does a lot on its own and also wraps up all the major Java analysis tools.
Out of the box, you'll get metrics on:
Potential Architectural & Design issues
Unit test coverage (uses cobertura)
Lines of code / packages / classes, etc.
Potential bugs
Code duplication
Adherence to code formatting standards
(plus many more)
It allows you to traverse from the high-level analysis through to the source code it relates to. It will be easier if you're using Maven for your build, though.
There is a Hudson plugin, and it's free.
Try CCCC (http://sourceforge.net/projects/cccc/). It does code counting, module counting (classes), etc., and the plugin also graphs it for you (for C and C++).
Incidentally, what language are you looking at?
There's also CLOC (Count Lines of Code), which will tell you how many lines of each language you have, although I can't seem to find a link for it.
You don't specify which language you are using, but Redsolo's awesome blog post Guide to building .NET projects using Hudson shows you how to use FxCop and NUnit on Hudson to give some of what you are looking for. The Violations plugin used also supports Simian, CPD, PMD and PyLint.