Are there any examples / references to see how protobuf data can be validated using json schema?
Apologies if I'm starting off with something too basic...
Protobuf data can be validated using protobuf deserialisers; if data is parsed by the parser that was generated for the message (and is part of the class representing that message), then it's valid data. To generate that parser / class, you'd have started with a protobuf schema and compiled that with protoc.
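For example, here's a minimal sketch in C++ of what that parse-as-validation looks like (the message Telemetry and the file telemetry.proto are hypothetical, just for illustration):

```cpp
#include <string>

#include "telemetry.pb.h"  // hypothetical header, generated by: protoc --cpp_out=. telemetry.proto

// telemetry.proto (hypothetical) might contain:
//   syntax = "proto3";
//   message Telemetry { int32 bearing = 1; }

bool is_valid_telemetry(const std::string& raw_bytes) {
    Telemetry msg;
    // ParseFromString() returns false if the bytes are not a valid encoding
    // of this message; that parse check is the validation protobuf gives you.
    return msg.ParseFromString(raw_bytes);
}
```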
Generally speaking, I'd say that wanting to validate such data against a json schema is possibly not a good idea. Having a json schema for the same data gives you "two versions of the truth", which is generally a bad idea. Which one is right; the .proto schema, or the json schema? If I edit one, have I accurately edited the other?
JSON Can Do More Than Protobuf
I can see why you may want to check such data against a json schema. In a json schema you can define things like value and size constraints that cannot be expressed in a protobuf schema. For example, a message field "bearing" might have a valid range of 0 to 359 in the application. There is no way to express such a constraint in a protobuf schema, but if it is expressed in a json schema used to validate json data, the validator will object if "bearing" is set to 412.
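As a sketch, that "bearing" constraint could look like this in a json schema (the property name is just the example above):

```json
{
  "type": "object",
  "properties": {
    "bearing": { "type": "integer", "minimum": 0, "maximum": 359 }
  },
  "required": ["bearing"]
}
```

A validator given {"bearing": 412} against this schema would reject it.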
So, why not generate code from the json schema? I have tried (some time ago - I'm out of date) code generators for languages like C# using json schema as input, but found the result unsatisfactory (the code generators I tried didn't want to implement all the things in my json schema, e.g. unions). Things may have got a lot better since then.
Is There a Better Solution?
If this is indeed the kind of thing you need to do, then it's likely that choosing protobuf is not ideal for the purpose (due to the lack of constraints in protobuf schema). The question then is, what are the alternatives?
In my experience, if you want to stick to the concept of starting with a schema and generating code, the best I've ever used is ASN.1 (where "best" assumes you're willing to pay for good commercial ASN.1 tools from companies like Objective Systems or Nokalva - I've been a customer of both).
These days, ASN.1 can even serialise to json (or xml in several flavours, or other text and packed / unpacked binary data formats). The ASN.1 schema language does have constraints on sizes of lists and/or values of fields. There is an official translation between ASN.1 schema and XML schema (XSD), with the better ASN.1 tools able to do that translation. There may now be a defined translation between ASN.1 and json schema too (I don't know), plus tools to do that.
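For illustration, here's roughly what such constraints look like in ASN.1 (a sketch, not from any real module; Waypoint is assumed to be defined elsewhere):

```
Track ::= SEQUENCE {
    bearing    INTEGER (0..359),                     -- value constraint
    waypoints  SEQUENCE (SIZE (1..16)) OF Waypoint   -- size constraint
}
```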
The point of that is, with translation tools, one can then say that the ASN.1 schema and XSD (or json) schema are "one single truth" - one being automatically generated from the other which was hand written.
A Good Halfway House?
I notice (from a quick search) that there are various git* projects purporting to translate between protobuf and json schema, which, if satisfactory, means that your json and protocol buffer schemas can be automatically translated between one another (which means that my 2nd paragraph above is junk!).
Unless something has happened recently, those json / protobuf schema translations are going to be limited or disappointing. ASN.1, XSD and json schema are broadly similar in terms of what their syntaxes allow to be expressed (including size and value constraints), so translation between them doesn't necessarily lose "information". However, the syntax of protobuf schema is a lot more limited than that of json schema, so a translation from json schema to protobuf might lose the very information that you want.
The good news though would be that the protobuf schema would still be a "form of the truth" having been translated from the json schema. If you were using protobuf to generate json data instead of protobuf binary format data, the "original form of the truth" (the json schema) can be used to validate the protobuf generated json, with constraints on value and size still intact. That would be a good result!
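A sketch of that flow in C++, using protobuf's own JSON support (Telemetry is the same hypothetical message as in the earlier sketch):

```cpp
#include <string>

#include <google/protobuf/util/json_util.h>

#include "telemetry.pb.h"  // hypothetical generated header

// Serialise the message to JSON rather than to protobuf binary format.
// The resulting JSON can then be run through any json-schema validator
// against the hand-written json schema, value and size constraints intact.
std::string to_json(const Telemetry& msg) {
    std::string json;
    if (!google::protobuf::util::MessageToJsonString(msg, &json).ok()) {
        json.clear();  // conversion failed; return an empty string
    }
    return json;
}
```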
Good luck!
I need to write an XML parser using Boost Property Tree which can replace an existing MSXML DOM parser. Basically my code should return the list of child nodes, the number of child nodes, etc. Can this be achieved using Property Tree? E.g. GetfirstChild(), selectNodes(), Getlength(), etc.
I saw a lot of APIs related to Boost Property Tree, but the documentation seems bare-minimum and confusing. As of now, I am able to parse the entire XML using BOOST_FOREACH, but the path to each node is hard-coded, which will not serve my purpose.
boost::property_tree can be used to parse XML, and it's a tree, so you can use it as an XML DOM substitute. But the library is not intended to be a fully fledged XML parser and it's not compliant with the XML standard. For instance, it can successfully parse non-well-formed XML input, and it doesn't support some XML features. So it's your choice - if you want a simple interface to simple XML configuration, then yes, you should use boost::property_tree.
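For instance, a minimal sketch (assuming an input file config.xml whose document element is <root>):

```cpp
#include <iostream>
#include <string>

#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/xml_parser.hpp>

int main() {
    boost::property_tree::ptree pt;
    boost::property_tree::read_xml("config.xml", pt);

    // Roughly selectNodes("/root/*") + Getlength():
    const boost::property_tree::ptree& root = pt.get_child("root");
    std::cout << "child count: " << root.size() << "\n";

    // Roughly GetfirstChild():
    if (!root.empty()) {
        std::cout << "first child: " << root.front().first << "\n";
    }

    // Walk all children without hard-coding a path to each node.
    for (const auto& child : root) {
        std::cout << child.first << " = "
                  << child.second.get_value<std::string>() << "\n";
    }
}
```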
What's a good way to transform XML into Ruby code? I've got a GraphML file containing information about a graph structure. I want to instantiate a graph from that with Ruby objects.
Currently I use XPath to do this in a procedural way. I know there's also a way to do it with XSLT, in a more declarative way.
Do you know other ways? What would you suggest, any experience?
I don't quite understand why you would want to transform the GraphML data into Ruby code, rather than using Ruby to parse the GraphML data into Ruby object instances?
I made this example as an exercise: https://github.com/endymion/GraphML-parsing-exercise
It uses Nokogiri to parse the XML, then XPath to select nodes, then it iterates through the nodes, instantiating Ruby object instances: https://github.com/endymion/GraphML-parsing-exercise/blob/master/parse.rb
Is that roughly what you're looking for?
I keep running into a certain kind of data structure, and wonder if there is a name for it. It maps very closely to JSON, but not exactly. The rules are:
- It is composed entirely of maps, arrays, and primitives.
- It is hierarchical. Maps contain name/value pairs, where a value can be another map, an array, or a primitive. Arrays contain values under the same rules.
- The top level is always a map.
- The primitives are strings, integers, floats, booleans, and possibly dates.
- Sometimes the map is just an unordered hash, and sometimes the order of the name/value pairs matters.
This is a really, really useful structure. You can use it to represent documents, database records, various messages, http requests, lots of stuff. I've run into it in Freemarker (as the 'data model'), Mongo, and anything that uses JSON.
It's not really JSON, because that's a file format, not a specification for a particular data structure. It's not an "object", because object trees can point to other things, like streams and functions. It's not a DOM.
What is it?
Around the office, we've started to call it a "garg", for "generalized argument".
"It's not really JSON, because that's a file format, not a specification for a particular data structure."
It might not be JSON (since the spec also includes syntax rules), but your rules describe the same data structure as JSON does.
I don't think it's useful to name this structure. When you are talking about data, just call it data. When you need to interchange data you need a data-interchange format. Now JSON proves to be one damn good one.
JSON isn't just a file format. JSON is also a data structure.
From JSON.org:

JSON is built on two structures:

- A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
- An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.

These are universal data structures.
It is a generic data storage structure that carries around hierarchical data. I don't have a generic name for it, but if I were to implement such a beast in, say, C++, I'd probably call the abstract base class a Variant, and name the concrete types by their names: Integer, Array, Map, etc. I'd chuck them in a namespace that would relate to where I'd use them - or maybe I'd prefix the types themselves. I've seen such structures used as well, but I don't know if there is a name that I'd recognize. A DataStore, Environment, StorageBin, or anything that is generic and implies storage of data would do.
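A minimal sketch of that shape in C++, using boost::variant's support for recursive types (the names Variant, Array, and Map are just the ones suggested above):

```cpp
#include <map>
#include <string>
#include <vector>

#include <boost/variant.hpp>

// One node of the hierarchy: a primitive, an array of nodes,
// or a map of name/value pairs whose values are nodes.
typedef boost::make_recursive_variant<
    bool, long, double, std::string,
    std::vector<boost::recursive_variant_>,
    std::map<std::string, boost::recursive_variant_>
>::type Variant;

typedef std::vector<Variant> Array;
typedef std::map<std::string, Variant> Map;

int main() {
    Map doc;  // the top level is always a map
    doc["name"] = std::string("example");
    doc["count"] = 3L;

    Array tags;
    tags.push_back(std::string("a"));
    tags.push_back(std::string("b"));
    doc["tags"] = tags;
}
```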
I don't see myself calling such a class hierarchy JSON, though. I would provide a JsonSerializer or some such to map this data to JSON, if I needed it.
It sounds like you're describing an associative array, with optional ordering.
That's what JSON represents, except that (I believe) JSON doesn't impose an ordering requirement. Naturally, many other representations also describe associative arrays, which is why JSON is a popular text serialization.
Update 1: JSON isn't properly an associative array. It is a description of object properties. Because it is very often construed as an associative array, many people make the same mistake I did. In fact, "object notation" is the proper name for it - surprise, surprise. :) In addition, JSON isn't a file format - it's a text serialization or markup language, which is different from a file format.
The structure is a tree with different kinds of values stored at its leaves.
In Boost, a similar structure is called Property Tree.
How would I create a list of elements in VB.NET, save it to a .dat file, and have Ruby re-create that list (as an array) with those elements (they will be strings, booleans, and integers)?
You can do it, but you'd need to find some representation for it. The easiest is probably JSON, so you would:

- make the data structure in VB,
- write it to JSON as a file,
- read the JSON file using Ruby.
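The intermediate JSON file could then be as simple as this (hypothetical contents - a mixed array of strings, booleans, and integers):

```json
["first", true, 42, "second", false, 7]
```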
Here's a JSON serializer for .Net:
A .dat file is just a binary blob, is it not? If there's a particular format you use, you could easily translate that to equivalent Ruby code, just as long as the knowledge is duplicated on both ends - though that leads to a violation of the DRY principle. JSON might be a good intermediate representation (as noted by Charlie Martin) because it's a plain text format, and you can always add compression.