Custom plugin for Checkmarx - IBM Integration Bus

We are writing a large application using IBM Integration Bus, with ESQL as the major transformation language. We are evaluating Checkmarx for static code analysis and scanning, but Checkmarx does not support ESQL out of the box.
Is it possible to write a custom plugin for Checkmarx to make it able to scan and analyse ESQL code as well? I can't find any online resources on this.

When using Checkmarx, it is quite easy to create your own custom queries and fine-tune the scans for the supported languages (a rough sketch of what such a query looks like follows below).
Since ESQL files are not yet supported by Checkmarx, ESQL code is never parsed, so there is nothing for a custom query to operate on; it is not possible to write a custom plugin for a new language.
You can contact Checkmarx Support and ask whether scanning ESQL is planned for the future.
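For the supported languages a custom query is just a short CxQL snippet (C#-like syntax) built from Checkmarx's base queries. Purely as an illustration, with the base-query names written from memory rather than from the product docs, such a query looks roughly like this:

```csharp
// CxQL sketch: flag interactive inputs that flow into database access.
// Base-query names (Find_Interactive_Inputs, Find_DB_In) and the
// InfluencingOn flow operator are recalled from memory and may differ
// between Checkmarx versions.
CxList inputs = Find_Interactive_Inputs();
CxList dbAccess = Find_DB_In();
result = inputs.InfluencingOn(dbAccess);
```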

The selected answer is not entirely true. Informix ESQL/C files are first preprocessed and intermediate C files are created. This means that Checkmarx's support for the C programming language could be used to cover those files, provided you use the -keep option when generating the intermediate files. The same is true for the 4GL files that Informix uses. The major problem here is that it would be difficult to map a finding in the generated C code back to the original source line, so the results would be hard to consume.

Related

Google.Protobuf vs protobuf-net [duplicate]

I've recently had to look for a C# port of the Protocol Buffers library originally developed by Google. And guess what, I found two projects, each owned by a very well known person here: protobuf-csharp-port, written by Jon Skeet, and protobuf-net, written by Marc Gravell. My question is simple: which one should I choose?
I quite like Marc's solution as it seems to me closer to the C# philosophy (for instance, you can just add attributes to the properties of an existing class) and it looks like it can support .NET built-in types such as System.Guid.
I am sure both of them are really great projects, but what's your opinion?
I agree with Jon's points; if you are coding over multiple environments, then his version gives you a similar API to the other "core" implementations. protobuf-net is much more similar to how most of the .NET serializers are implemented, so is more familiar (IMO) to .NET devs. And as Jon notes - the raw binary output should be identical so you can re-implement with a different API if you need to later.
Some points re protobuf-net that are specific to this implementation (a minimal usage sketch follows this list):
works with existing types (not just generated types from .proto)
works under things like WCF and memcached
can be used to implement ISerializable for existing types
supports inheritance* and serialization callback methods
supports common patterns such as ShouldSerialize[name]
works with existing decorated types (XmlType/XmlElement or DataContract/DataMember) - meaning (for example) that LINQ-to-SQL models serialize out-of-the-box (as long as serialization is enabled in the DBML)
in v2, works for POCO types without any attributes
in v2, works in .NET 1.1 (not sure this is a huge selling feature) and most other frameworks (including monotouch - yay!)
possibly (not yet implemented) v2 might support full-graph* serialization (not just tree serialization)
(*=these features use 100% valid protobuf binary, but which might be hard to consume from other languages)
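For anyone who hasn't seen it, the basic attribute-based usage looks roughly like this (type and member names are invented for the example):

```csharp
using System;
using System.IO;
using ProtoBuf; // protobuf-net (NuGet package "protobuf-net")

[ProtoContract]
public class Person            // an ordinary existing type...
{
    [ProtoMember(1)] public string Name { get; set; }   // ...with field numbers
    [ProtoMember(2)] public int Age { get; set; }        // assigned via attributes
}

public static class Demo
{
    public static void Main()
    {
        // Round-trip the object through a stream.
        using var ms = new MemoryStream();
        Serializer.Serialize(ms, new Person { Name = "Fred", Age = 30 });
        ms.Position = 0;
        Person copy = Serializer.Deserialize<Person>(ms);
        Console.WriteLine($"{copy.Name}, {copy.Age}");   // Fred, 30
    }
}
```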
Are you using other languages in your project as well? If so, my C# port will let you write similar code on all platforms. If not, Marc's port is probably more idiomatic C# to start with. (I've tried to make my code "feel" like normal C#, but the design is clearly based on the Java code to start with, deliberately so that it's familiar to those using Java as well.)
Of course one of the beauties of this is that you can change your mind later and be confident that all your data will still be valid via the other project - they should be absolutely binary compatible (in terms of serialized data), as far as I'm aware.
According to its GitHub project site, protobuf-csharp-port has now been folded into the main Google Protocol Buffers project, so it will be the official .NET implementation of protobuf 3. protobuf-net, however, was last updated in 2013, although there have been some recent commits on GitHub.
I just switched from protobuf-csharp-port to protobuf-net because:
protobuf-net is more ".NET like", i.e. it uses descriptors/attributes on the members to be serialised instead of code generation.
If you want to compile .proto files for protobuf-csharp-port you have to do a two-step process, i.e. compile with protoc to a .protobin and then compile that with ProtoGen. protobuf-net does this in one step.
In my case I want to use Protocol Buffers to replace an XML-based communication model between a .NET client and a J2EE backend. Since I'm already using code generation I'll go for Jon's implementation.
For projects not requiring Java interop I'd choose Marc's implementation, especially since v2 allows working without annotations (see the sketch below).
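For completeness, a rough sketch of that v2 attribute-free route; the RuntimeTypeModel calls are written from memory, so double-check them against the current protobuf-net API:

```csharp
using System.IO;
using ProtoBuf;
using ProtoBuf.Meta;   // RuntimeTypeModel

public class Person    // plain POCO, no protobuf attributes at all
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // Register the type at runtime; false = don't apply the
        // attribute-driven defaults. Members get field numbers in
        // the order they are listed here.
        RuntimeTypeModel.Default.Add(typeof(Person), false)
                        .Add("Name", "Age");

        using var ms = new MemoryStream();
        Serializer.Serialize(ms, new Person { Name = "Fred", Age = 30 });
    }
}
```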

Prevent Protobuffer from renaming Fields (Classes, Members, Enum Items)

I am trying to port a project from Google Protocol Buffers 3.0.0-beta-2 to 3.1.0. After recompiling my .proto file I noticed that I had a number of compilation errors with the project due to protoc enforcing a coding standard that I did not choose and renaming fields accordingly. I do not want to rename e.g. MDData to Mddata or XYServer to Xyserver inside the project since the intended meanings of the abbreviations are now lost and possibly subject to change in further Protocol Buffer releases to come.
I have seen this behaviour in the C# output so far and am not sure whether this is also the case for the generated C++ code.
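To make the effect concrete, this is roughly what happens with one of my messages (simplified, and of course the exact generated code depends on the protoc version):

```csharp
// message MDData { ... }  in the .proto file
//
// protoc 3.0.0-beta-2 kept the casing of the generated C# class:
//     public sealed partial class MDData : pb::IMessage<MDData> { ... }
//
// protoc 3.1.0 "normalises" it instead:
//     public sealed partial class Mddata : pb::IMessage<Mddata> { ... }
//
// so existing call sites like this one no longer compile:
var data = new MDData();   // error CS0246: the type or namespace name 'MDData' could not be found
```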
TL;DR:
Is there a way to disable the automatic code style changes inside Google Protocol Buffers' proto compiler and keep my own naming of fields?
There is no way to enforce this short of writing your own code generator. Only the public API of the stubs is considered stable.
Under the hood, the protoc compiler regenerates the code from scratch each time, so there is no way for it to know the original style of the file. It would need to be passed the originally generated file along with the .proto in order to do this.
That said, if you want to modify the code generator, it is certainly possible.

C unit test frameworks with Sonar

Is CppUnit the only C/C++ unit test framework currently available for use with Sonar?
What would be involved in adding additional C/C++ unit testing frameworks? (e.g. how many lines of code is the CppUnit plugin, how reusable, etc.)
I think you would be better off sending your queries to Sonar's mailing lists: http://www.sonarsource.org/support/support/
See the unit test page: http://docs.codehaus.org/display/SONAR/Unit+Test+Support
From that page:
The C++ Plugin parses reports in an xUnit-compliant format given via sonar.cxx.xunit.reportPath. To use other formats, they first need to be converted using the property sonar.cxx.xunit.xsltURL.
For convenience the following XSL files are provided:
boosttest-1.x-to-junit-1.0.xsl for transforming Boost reports
cpptestunit-1.x-to-junit-1.0.xsl for transforming CppTestUnit reports
cppunit-1.x-to-junit-1.0.xsl for transforming CppUnit reports
So packages that support the xUnit format, like the Google Test framework, should be supported. Otherwise, if they output XML, they should be supportable by changing the XSLT.
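In practice that boils down to two properties in sonar-project.properties, roughly like this (the report path is invented for the example, and whether xsltURL takes a bare file name or a full URL is worth checking in the plugin docs):

```properties
# sonar-project.properties (community C++ plugin)
# where the xUnit-style test reports live
sonar.cxx.xunit.reportPath=build/test-reports/*.xml
# only needed for non-xUnit formats: XSL used to convert them first
sonar.cxx.xunit.xsltURL=cppunit-1.x-to-junit-1.0.xsl
```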

Can the new Phoenix code analysis engine in VS2010 analyze source level or catch preprocessor calls?

I'm hoping there is some way built into VS2010 to have custom rules involving preprocessor usage and source-level style/member ordering.
Does it do source level, or catch preprocessor calls?
No. Like the introspection engine, the Phoenix-based data flow engine analyzes IL, not source code. If you're interested in writing rules that work against source code, StyleCop would be a better candidate tool than FxCop.
If you want to do source code analysis on C# or C++, you might consider our DMS Software Reengineering Toolkit and its C# Front End or C++ Front End.
DMS, using the corresponding front end, parses source text to abstract syntax trees, and then provides a large set of libraries to support the coding of custom analyzers.
In doing the parsing, it retains the preprocessor directives (as well as generics, comments, etc.) as part of the tree, and they can be analyzed just like the rest of the code.

Are there any extensions for either Boost.Test or cppUnit which could provide HTML outputs etc?

I am involved in developing unit-level test cases for our project. There is both managed code and native C++ code. After some study I chose NUnit for the managed code, and I will use either Gallio or FireBenchmarks, an extension that provides HTML output, charts, etc.
Do we have extensions like this for CppUnit or Boost.Test? I have not decided which one to use. If there are none, which of these would be easier to extend to enable such a plugin?
Please give your suggestions on this.
You can configure Boost.Test to generate XML output. The doc says:
This log format is designed for automated test results processing. The test log output XML schema depends on the active log level threshold.
This can be enabled by specifying -output_format=XML on the command line, or by setting the environment variable BOOST_TEST_OUTPUT_FORMAT=XML. The related docs are here.
It is also possible to configure Boost.Test at compile time to produce XML output by default (described here).
In order to generate HTML you either need to implement your own formatter (which is possible, but nicely underdocumented, so please ask on the list) or to transform the XML in a postprocessing step.
