Having previously posted about EclipseLink MOXy @XmlPath support for axes/parent, I'm still working with the @XmlPath annotation, and it appears to me that predicate inequality isn't supported?
#XmlPath("node[#attr != 'a']")
Also valid for me would be to check for the existance of an attribute
#XmlPath("node[#x]")
Or even better, that it doesn't exist
#XmlPath("node[not(#x)]")
Is there a heavy cost to introducing these features? I'm used to having the whole array of XPath features available. MOXy has provided me with some fantastic functionality that I'm very grateful for, but it seems a little inflexible in this regard.
Note: I'm the EclipseLink JAXB (MOXy) lead and a member of the JAXB 2 (JSR-222) expert group.
MOXy currently supports:
#XmlPath("node[#x='foo']")
but not:
#XmlPath("node[#x!='foo']")
#XmlPath("node[#x]")
#XmlPath("node[not(#x)]")
Background
MOXy currently uses the same XPath for both marshalling and unmarshalling. It is 100% clear what each of the unsupported XPaths means in terms of unmarshalling; it is a little less clear in terms of marshalling (but probably not unsolvable).
Action Items
Please enter an enhancement request for this functionality (Specify MOXy as the component). Bugs entered by external users are given priority in our backlog.
https://bugs.eclipse.org/bugs/enter_bug.cgi?product=EclipseLink
For More Information
http://blog.bdoughan.com/2011/03/map-to-element-based-on-attribute-value.html
What is currently the best way to write SonarQube 4.5 checks for:
Bytecode Analysis
Source Analysis
Unfortunately, I could not find an up-to-date web page providing a clear explanation, and I see that existing checks use many deprecated classes and methods, rely on "bridges" that are about to be abandoned, and that checks are regularly removed from the codebase (such as the XPath rule).
I would like to be sure that the checks I'm about to write will be well written and durable.
So...
Should I use BytecodeVisitor to analyse bytecode?
Should I use BaseTreeVisitor to analyse source code?
What is the replacement for org.sonar.api.rules.RuleRepository ?
What is the replacement for org.sonar.api.resources.Java ?
What is the replacement for org.sonar.api.rules.AnnotationRuleParser ?
How can I write XPath-like rules? (BaseTreeVisitor uses SSLR, and if I'm not wrong SonarQube is moving away from SSLR; AbstractXPathCheck is part of the SSLR Squid bridge.)
What else should I know?
In other words I'm a bit lost.
Thank you in advance for your help.
First, thanks for your feedback.
There are in fact many questions in your question (sic):
The way to write custom checks for Java as of today is to use the BaseTreeVisitor. All the other ways are now deprecated, and we are working to be able to remove them (but it is not always straightforward, as some of them require a complete semantic analysis to be removed). What is currently lacking from this API is access to the semantic analysis, to be able to request type information read from bytecode.
You can have a look at this project: https://github.com/SonarSource/sonar-examples/tree/master/plugins/java-custom-rules
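To give an idea of the shape such a check takes, here is a rough sketch modelled on that example project; the rule key, class name, and length threshold are invented for illustration, and the exact issue-reporting method (addIssue in older versions of the Java plugin, reportIssue in newer ones) depends on the plugin version:

import org.sonar.check.Rule;
import org.sonar.plugins.java.api.JavaFileScanner;
import org.sonar.plugins.java.api.JavaFileScannerContext;
import org.sonar.plugins.java.api.tree.BaseTreeVisitor;
import org.sonar.plugins.java.api.tree.MethodTree;

// Hypothetical rule: flag methods with unusually long names.
@Rule(key = "MyLongMethodNameRule")
public class LongMethodNameCheck extends BaseTreeVisitor implements JavaFileScanner {

  private JavaFileScannerContext context;

  @Override
  public void scanFile(JavaFileScannerContext context) {
    this.context = context;
    // Walk the syntax tree of the current file with this visitor.
    scan(context.getTree());
  }

  @Override
  public void visitMethod(MethodTree tree) {
    if (tree.simpleName().name().length() > 40) {
      // addIssue is assumed here; newer plugin versions expose reportIssue instead.
      context.addIssue(tree, this, "Rename this method to something shorter.");
    }
    super.visitMethod(tree);
  }
}

The example project also shows how such checks get registered and packaged as a plugin.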
For all the other questions, please ask them on the mailing list.
(A few small notes, though: BaseTreeVisitor does not use SSLR directly; the Java plugin is not moving away from SSLR as such, but rather from one class, specifically AstNode, in order to work on a syntax tree with a dedicated class for each type of node. The drop of the XPath checks is part of that move away from a non-typed syntax tree.)
I'm looking for a definition of the syntax for the Cypher query language. I tried the docs but they're very vague.
Ideally, I'd like a BNF (or any variant) definition, or one of those "graph" definitions like this or this. Really, anything resembling a formal definition.
What you are looking for will be available in openCypher. Several items will be released as part of the project, one of the first of which is the BNF grammar.
Update 2016-01-30: A first draft of the grammar is now available at https://github.com/opencypher/openCypher/blob/master/grammar.ebnf.
Update 2016-10-17: EBNF and ANTLR grammars, the TCK, railroad diagrams, and a list of community projects are available at http://www.opencypher.org/#resources
Take a look at the recently announced (Oct 2015) openCypher project. It involves releasing the language specification, among other things.
From the announcement:
1. Cypher reference documentation:
A comprehensive user documentation describing use of the Cypher query language with examples and tutorials.
2. Technology compatibility kit (TCK):
The TCK consists of a number of tests that a software supplier would run in order to self-certify support for a given version of Cypher.
3. Reference implementation:
Distributed under the Apache 2.0 license, the reference implementation is a fully functional implementation of key parts of the stack needed to support Cypher inside a data platform or tool. The first planned deliverable is a parser that will take a Cypher statement and parse it into an AST (abstract syntax tree) representation. The reference implementation complements the documentation and tests by providing working implementations of Cypher – which are permissively licensed – and can be used as examples or as a foundation for one’s own implementation.
4. Cypher language specification:
Licensed under a Creative Commons license, the Cypher language specification is a technical expression of the language syntax to enable parsers to auto-generate the query syntax. A full semantic specification is also planned as a part of the openCypher project.
The same announcement also says that the process is open and that it is possible to submit, review and comment on language proposals.
Update!
Neo4j has changed a lot since this answer was written. In 2017 the simple answer is yes, you can download the grammar files from https://www.opencypher.org/
Below is the old answer, which was accurate in 2014
As far as I can tell, the only formal definition is in the code. That's the bad news.
The good news is that the code uses a scala library to do the parsing which makes the code rules look kinda/sorta like BNF. And there's some documentation on how to read it.
Here's a link into a scala object that defines what a query is.
This general package on github looks to me like it contains all of the cypher command implementations, and should have everything you're asking for.
Code in this package is written in scala, and looks like this:
object Query {
  def start(startItems: StartItem*) = new QueryBuilder().startItems(startItems:_*)
  def matches(patterns:Pattern*) = new QueryBuilder().matches(patterns:_*)
  def optionalMatches(patterns:Pattern*) = new QueryBuilder().matches(patterns:_*).makeOptional()
  def updates(cmds:UpdateAction*) = new QueryBuilder().updates(cmds:_*)
  def unique(cmds:UniqueLink*) = new QueryBuilder().startItems(Seq(CreateUniqueStartItem(CreateUniqueAction(cmds:_*))):_*)
  (...)
This matches roughly with the upper-right-hand quadrant of the Cypher refcard. You can sorta see that there can be a start clause, a match clause, and so on. It includes links to other implementation classes (like UpdateAction) which further define the clauses considered update actions.
Make sure to also read How Neo4J Uses Scala's Parser Combinator: Cypher's Internals Part 1 for more information on what's going on here, and the mapping between the scala classes and what we'd normally consider EBNF. This blog post is old (2011) and the specific code examples it gives shouldn't be trusted, but I think it has good general information on how the implementation works, and what to look for if you want to understand the EBNF behind cypher.
Disclaimer: I'm not a scala hardcore, YMMV, IANAL, devs please overrule me if I'm wrong.
(Michael Hunger answered in a comment, so I can't accept his answer. Here's his answer:)
Cypher uses parboiled as its parser; the parboiled rule DSL is pretty easy to read and understand. https://github.com/neo4j/neo4j/blob/d18583d260a957ab1a14bd27d34eb5625df42bc5/community/cypher/cypher-compiler-2.2/src/main/scala/org/neo4j/cypher/internal/compiler/v2_2/parser/Clauses.scala
None of these seem to work any more.
I don't see anything on the opencypher.org site that looks like a grammar to download.
None of the github links from Michael Hunger work.
I'd really like access to SOME resource where I can learn how to construct queries for functions like avg that allegedly take a list expression as an argument, yet barf at every variant I can figure out.
I'm currently working on implementing some MISRA C rules in Sonar. My current rule is to avoid recursion. I started with:
//statement[@tokenValue=ancestor::functionDefinition/functionDeclarator/functionName/@tokenValue]
to avoid using the same function name within a function definition, but of course it is possible to call other functions with the same name but different signatures.
Therefore I have two questions:
Is it possible to find out the method signatures (via a built-in XPath function, etc.)? Then I could compare the signature with the call statement.
Is it possible to extend the plugin, as there are MISRA rules where it might be more efficient to walk the abstract syntax tree of the source code?
Thank you very much for your replies :)
(PS: is there any documentation about the SSLR C toolkit / the built-in XPath rules?)
The C plugin is deprecated. It has been replaced by the C/C++ plugin. See http://www.sonarsource.com/products/plugins/languages/cpp/.
Before implementing a new coding rule, you should consider whether it is specific to your own context or might benefit others. If it might benefit others, you can propose it on the developer mailing list. If the SonarQube team finds your proposed rules interesting, they may be implemented directly in the related language plugin. That means less maintenance for you and a benefit to others. See http://docs.codehaus.org/display/SONAR/Extending+Coding+Rules
I have not found a clear comparison of what is supported with the NHibernate 3.0 LINQ Provider compared to using the QueryOver syntax. From the surface, it seems like two large efforts into two very similar things.
What are the key trade-offs of using each?
LINQ and QueryOver are completely different query methods, added to the ones that already existed in NHibernate 2 (Criteria, HQL, SQL).
QueryOver is meant as a strongly-typed version of Criteria, and supports mostly the same constructs, which are NHibernate-specific.
LINQ is a "standard" query method, which means the client code can work on IQueryable without explicit references to NHibernate. It supports a different set of constructs; it would be hard to say if there are more or less than with QueryOver.
My suggestion is to learn all the supported query methods, as each use case is different: some work better with one, some with another.
I have used both NH LINQ providers (the old NHContrib one for version 2.1 and the new one for NH 3.0) and have also used QueryOver. With all the experience gained during development of quite complex data-driven applications, I would strongly suggest NOT using the existing LINQ provider with NHibernate if you plan to go beyond just basic CRUD operations!
The current LINQ implementation sometimes produces really unreadable and inefficient SQL. Joining several tables in particular quickly becomes a nightmare if you want to optimize database performance.
Despite all these drawbacks, I never encountered incorrect queries.
So if you don't care about performance and are already familiar with LINQ, then go for NH-Linq. Otherwise QueryOver is your reliable and typesafe friend.
LINQ to NHibernate (as of version 3.0) does not support the .HasValue property on Nullable types. One must compare to null in queries.
I started to use NH-Linq because I was already familiar with LINQ to SQL and Entity Framework. But for more complex queries, I have always ended up with QueryOver. Reasons:
It happens that a query with NH-Linq doesn't work as expected. I can't remember exactly, but it doesn't behave correctly with some complex queries. The provider simply seems too young. And, as dlang stated in the previous answer, it produces inefficient SQL.
Once you learn QueryOver, it's easy to call functions and do projections and subqueries; to me it seems easier than with NH-Linq.
A good thing about NH-Linq is that it can be extended, as Fabio Maulo explained here. Something similar is quite possible with QueryOver, but it is not as fancy as with NH-Linq :)
I am currently using a CMS which uses an ORM with its own bespoke query language (i.e. with select/where/orderby like statements). I refer to this mini-language as a DSL, but I might have the terminology wrong.
We are writing controls for this CMS, but I would prefer not to couple the controls to the CMS, because we have some doubts about whether we want to continue with this CMS in the longer term.
We can decouple our controls from the CMS fairly easily, by using our own DAL/abstraction layer or what not.
Then I remembered that most of the CMS controls provide a property (editable at design time) where users can type in a query to control what gets populated in the data source. Nice feature; the question is, how can I abstract it?
It then occurred to me that maybe a DSL framework existed out there that could provide me with a simple query language that could be turned into a LINQ expression at runtime. Thus decoupling me from the CMS' query DSL.
Does such a thing exist? Am I wasting my time? (probably the latter)
Thanks
This isn't going to answer your question fully, but there is an extension for LINQ, called Dynamic LINQ, that allows you to specify predicates for LINQ queries as strings. So if you want to store the conditions in some string-based format, you could probably build your language on top of this. You'd still need to find a way to represent the different clauses (where/orderby/etc.), but for the predicates passed as arguments to these you could use Dynamic LINQ.
Note that Dynamic LINQ allows you to parse the string, but AFAIK it doesn't have any way to turn an existing expression tree back into such a string... so some work would be needed to do that.
(But I'm not sure I fully understand the question, so maybe I'm totally off :-))