I have a setup where Visual Studio 2010 runs test coverage analysis and its output is absorbed by NDepend during an integration build.
A few assemblies contain generated code that needs to be ignored by NDepend.
Is there a way to do this? Preferably for an entire namespace.
Code Query and Rule over LINQ (CQLinq) indeed provides a facility to ignore generated code.
There is a convenient predefined domain named JustMyCode, of type ICodeBaseView. The JustMyCode domain eliminates generated code elements from CQLinq query results. For example, the following query only matches large methods that are not generated by a tool (such as a UI designer):
from m in JustMyCode.Methods where m.NbLinesOfCode > 30 select m
The set of generated code elements is defined by CQLinq queries prefixed with the CQLinq keyword notmycode. For example, the query below matches methods defined in source files whose name ends with ".designer.cs":
notmycode from m in Methods where
m.SourceFileDeclAvailable &&
m.SourceDecls.First().SourceFile.FileName.ToLower().EndsWith(".designer.cs")
select m
The CQLinq query runner executes all notmycode queries before queries relying on JustMyCode, hence the JustMyCode domain is defined once and for all. Obviously, the CQLinq compiler emits an error if a notmycode query relies on the JustMyCode domain.
There are 4 default notmycode queries, easily adaptable to match your needs. Note that there is no default notmycode query for namespaces, but you can create your own:
Discard generated Assemblies from JustMyCode
Discard generated Types from JustMyCode
Discard generated and designer Methods from JustMyCode
Discard generated Fields from JustMyCode
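Since there is no default notmycode query for namespaces, a custom one might look like the following sketch (the namespace name Acme.Generated is a placeholder to adapt to your own code base):

```
// <Name>Discard generated namespaces from JustMyCode</Name>
notmycode
// Discard every namespace whose name marks it as generated,
// so its code elements disappear from JustMyCode-based queries.
from n in Namespaces where
  n.Name.StartsWith("Acme.Generated")
select n
```

Once this query is part of the rule set, every query written against JustMyCode automatically ignores the code in that namespace.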
I found this in the "Quick summary of methods to refactor":
// Here are some ways to avoid taking generated methods into account.
!( NameIs "InitializeComponent()" OR
// NDepend.CQL.GeneratedAttribute is defined in
// the redistributable assembly $NDependInstallDir$\Lib\NDepend.CQL.dll
// You can define your own attribute to mark "Generated".
HasAttribute "OPTIONAL:NDepend.CQL.GeneratedAttribute")
But that approach requires modifying every CQL query to ensure they all ignore the generated code.
Related
I'm creating code for interfaces specified in IBM Rational Rhapsody. Rhapsody implicitly generates include statements for other data types used in my interfaces. But I would like to have more control over the include statements, so I specify them explicitly as text elements in the source artifacts of the component. Therefore I would like to prevent Rhapsody from generating the include statements itself. Is this possible?
If this can be done, it is most likely with Properties. In the feature box, click on properties and filter by 'include' to see some likely candidates. Not all of the properties have descriptions of what exactly they do, so good luck.
EDIT:
I spent some time looking through the properties as well and could not find any that do what you want. It seems likely you cannot do this with the basic version of Rhapsody. IBM licenses an add-on to customize code generation, called Rules Composer (I think); this would almost certainly allow you to customize the includes, but at quite a cost.
There are two other possible approaches. Depending on how you are customizing the include statements you may be able to write a simple shell script, perhaps using sed, and then just run that script to update your code every time Rhapsody generates it.
The other approach would be to use the Rhapsody API to create a plugin/tool that iterates through all the interfaces and changes the source artifacts accordingly. I have not tried this method myself but I know my coworkers have used the API to do similar things.
Finally, I found the properties that make Rhapsody produce the required output: GenerateImplicitDependencies (on several elements) and GenerateDeclarationDependency (on Type elements). Disabling these avoids the generation of implicit include statements.
I have a set of XML schema definition resources (files). These files contain mutual import and include directives. For a specific purpose, users will instantiate element definitions in a particular XSD. I would like to provide them with an excerpt that contains only the XSD resources required for the task. This means I need to trace all imports and includes to other resources recursively, until I have a complete set (a Kleene star, or transitive closure).
I assume that this is implicitly done when I validate the schemata from the entry point. So there might be a callback that lists all dependencies resolved during the process that I can tap into.
The other solution I see is to use DOM and manually parse each schema for the import and include elements. This seems clunky, however.
I think the most convenient way to do this would be with an XSLT stylesheet to which you provide a list of starting points (URIs, or if you need to be careful about chameleon inclusion, namespace-name/URI pairs), and which then fetches the documents and computes the transitive closure, emitting either a list of URIs (or, again, namespace / URI pairs) or a sequence of XSD schema documents.
XQuery could also be used.
And as you suggest, DOM could also be used, with the host programming language of your choice. (I'd do it in XSLT or XQuery, myself, but that's because I do most of my programming in those languages.) Some validators may provide an API for getting a list of the schema documents consulted, or you may be able to extract that information from a validator's representation of the PSVI; APIs to XSD validation are not standardized.
Note that in the general case you need to watch out for and handle xsd:redefine and xsd:override, not just xsd:include and xsd:import.
And of course, if this is a one-shot task and the number of modules is likely to be less than fifty, it may be faster to do it by hand than by writing a program to do it automatically.
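To make the transitive-closure idea concrete, here is a minimal Python sketch (the schema names and the in-memory loader are hypothetical). It follows schemaLocation on xsd:include, xsd:import, xsd:redefine and xsd:override, but deliberately skips relative-URI resolution and the chameleon-include subtleties mentioned above:

```python
import xml.etree.ElementTree as ET

XSD = "http://www.w3.org/2001/XMLSchema"
# xsd:redefine and xsd:override also pull in documents,
# not just xsd:include and xsd:import.
REF_TAGS = {"{%s}%s" % (XSD, t)
            for t in ("include", "import", "redefine", "override")}

def schema_closure(start, load):
    """Transitive closure of schema documents reachable from `start`.
    `load(uri)` must return the schema document text for `uri`."""
    seen, todo = set(), [start]
    while todo:
        uri = todo.pop()
        if uri in seen:      # cycles between schemas are common; skip revisits
            continue
        seen.add(uri)
        for el in ET.fromstring(load(uri)):   # top-level children only
            if el.tag in REF_TAGS:
                loc = el.get("schemaLocation")
                if loc:      # xsd:import may legally omit schemaLocation
                    todo.append(loc)
    return seen

# Toy in-memory "resources"; in practice `load` would read files or URLs.
docs = {
    "a.xsd": '<schema xmlns="%s"><include schemaLocation="b.xsd"/></schema>' % XSD,
    "b.xsd": ('<schema xmlns="%s"><import schemaLocation="c.xsd"/>'
              '<include schemaLocation="a.xsd"/></schema>') % XSD,
    "c.xsd": '<schema xmlns="%s"/>' % XSD,
}
print(sorted(schema_closure("a.xsd", docs.get)))  # ['a.xsd', 'b.xsd', 'c.xsd']
```

Note that `a.xsd` and `b.xsd` include each other; the `seen` set is what keeps the traversal from looping forever.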
Currently I'm working on/in a project that doesn't have code analysis turned on.
What I'd like to do is just run CA against the files that I work with/touch before I check them in, but there are some limitations:
I don't have the option to turn it on for the project. Even if I did...
The project is huge; CA takes an age to run, and the warnings are numerous. Picking out the files I touched from the list would be a needle in a haystack.
Anyone have any ideas?
An idea would be to run code rules through the tool NDepend (Disclaimer: I am one of the developers of the tool).
What I'd like to do is just run CA against the files that I work with/touch
Concerning this first point, NDepend lets you write code rules through LINQ queries, and one facility it offers is to query the code diff between the current code version and a previous version (the baseline). Hence you can write a code rule that focuses only on what has changed since the baseline.
Around 200 default code rules are proposed, for example Avoid making complex methods even more complex. Looking at the LINQ code of this rule, we can see that it first filters to methods whose code was changed (CodeWasChanged()), then detects the ones that were already complex and became more complex. Method complexity is defined here through the popular code metric Cyclomatic Complexity.
// <Name>Avoid making complex methods even more complex (Source CC)</Name>
warnif count > 0
from m in JustMyCode.Methods where
m.CodeWasChanged() // <-----
let oldCC = m.OlderVersion().CyclomaticComplexity
where oldCC > 6 && m.CyclomaticComplexity > oldCC
select new { m,
oldCC ,
newCC = m.CyclomaticComplexity ,
oldLoc = m.OlderVersion().NbLinesOfCode,
newLoc = m.NbLinesOfCode,
}
All default or custom rules can be adapted to be restricted to code that has been refactored or code that has been introduced, since the baseline. The group of code rules Code Quality Regression or API Breaking Changes contains these adapted code rules out-of-the-box.
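For example, a generic size rule could be narrowed to new-or-refactored code by adding the diff predicates; a sketch along these lines (the 30-lines threshold is arbitrary):

```
warnif count > 0
// Only warn about large methods that are new since the baseline
// (WasAdded) or whose code was touched (CodeWasChanged).
from m in JustMyCode.Methods where
  (m.WasAdded() || m.CodeWasChanged()) &&
  m.NbLinesOfCode > 30
select m
```

This way, the existing warning backlog on untouched code stays out of the results.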
Concerning the point "CA takes an age to run, and the warnings are numerous", you can try NDepend now and see that it won't make you wait. It takes a few seconds to analyze a large code base and execute the 200 default rules (which can be customized easily, since they are just C# LINQ queries).
In our API library, we have a number of classes that implement a method ComputeCurrentDefinitionHashCode, which combines the hash codes of each member field with a pseudo-random number that should be unique to that class.
This is based on Paul Hsieh's "SuperFastHash" at http://www.azillionmonkeys.com/qed/hash.html
I'm trying to determine if it's possible to use FxCop to ensure that the randomly generated number we put in each class is not duplicated in any other class.
In other words, can we save information from one class to the next?
Yes, you can construct an FxCop rule that caches information across classes. However, depending on how you include the target number in your classes, this may or may not be a particularly good candidate for an FxCop rule. For example, if it is a literal passed as an argument to a base class constructor, then an FxCop rule might be an OK choice. However, if the source of the number is less "predictable", a unit test approach might be preferable.
What I want to achieve is more or less the inverse of this:
http://www.olegsych.com/2008/07/t4-template-for-generating-sql-view-from-csharp-enumeration/
I have a value group table (enum names) and a values table (enum values), and want to turn those into enums. Both are in SQL Server, and both do happen to be in an .edmx (so there would be quite a few ways to read the values).
Is there something "out there" that already does this (and I didn't find it)? If not, what would be the best way to go about reading the data (SMO, EDMX with dynamic loading, ...)?
I've put some more effort into writing such a template, so it does all of the following:
generates enumeration values with explicit integer values;
uses Visual Studio's namespace naming convention so generated enumerations have project's default namespace with any subfolders appended (just like any code file in Visual Studio);
adds complete enumeration XML documentation by using additional description table column values; if you don't have these never mind;
correctly names the generated file and adds an additional attribute in the code so the generated enum doesn't get scrutinised by code analysis;
multi-word lookup table values are correctly concatenated to pascal-cased equivalents (i.e. Multi word value becomes MultiWordValue);
enumeration values always start with a letter;
all enumeration values consist of only letters and numbers; everything else gets cut out.
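As an illustration, the naming rules in that list (pascal-casing, letters-and-digits only, must start with a letter) could be sketched like this in Python; the Value prefix for digit-leading names is my own assumption, not necessarily what the template does:

```python
import re

def enum_member_name(label):
    """Normalize a lookup-table label into an enum member name:
    keep only letters and digits, pascal-case the words, and make
    sure the result starts with a letter."""
    words = re.findall(r"[A-Za-z0-9]+", label)  # drop punctuation/whitespace
    name = "".join(w[:1].upper() + w[1:].lower() for w in words)
    if not name or not name[0].isalpha():
        name = "Value" + name  # hypothetical prefix for digit-leading names
    return name

print(enum_member_name("Multi word value"))  # MultiWordValue
print(enum_member_name("2nd item"))          # Value2ndItem
```

The same normalization would apply regardless of whether the values are read via SMO, the EDMX, or plain ADO.NET.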
Anyway. Everything is very well documented in this blog post.
Ok, here's how I implemented it:
Use VolatileAssembly from the T4 Toolbox to reference an assembly that...
Implements a T4 Helper class that does all the database work (when using EF, make sure to use a connection string when instantiating the context)
In the .tt, simply call into the T4 helper class to get the data you need and create your class(es)