I'm receiving a JSON array over WebSockets, and I have a buffer that is built up as data arrives. Currently I'm assuming that I'll receive the complete JSON array, with nested JSON objects and arrays, but I know from experience that these strings often arrive broken up. So I need to check for a valid JSON array before I try to process the text. Obviously, due to the nested arrays, I can't just check for the presence of an opening and a closing square bracket.
I'm using libcereal to parse the JSON, but I can't see anything in the documentation to suggest that it can do this. What would people suggest as the simplest approach? A C++11 regex for JSON?
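(A note on the regex idea: a regular expression can't balance arbitrarily nested brackets, but a simple depth counter over the buffer can tell you when a complete top-level array might have arrived. Below is a minimal sketch of that idea, written in Go for brevity; the logic ports directly to C++11. It tracks nesting depth while skipping string literals and escape sequences, and only answers "are the brackets balanced?", leaving real validation to the JSON parser.)

package main

import "fmt"

// completeJSONArray reports whether buf contains a balanced set of
// brackets/braces, ignoring any that appear inside string literals.
// It does not validate the JSON; it only signals "safe to try parsing".
func completeJSONArray(buf string) bool {
    depth, inString, escaped := 0, false, false
    for _, c := range buf {
        switch {
        case escaped:
            escaped = false // the character after a backslash is literal
        case inString:
            if c == '\\' {
                escaped = true
            } else if c == '"' {
                inString = false
            }
        case c == '"':
            inString = true
        case c == '[' || c == '{':
            depth++
        case c == ']' || c == '}':
            depth--
        }
    }
    return len(buf) > 0 && depth == 0
}

func main() {
    fmt.Println(completeJSONArray(`[{"a": "[1,2"}]`)) // true: "[" inside a string is ignored
    fmt.Println(completeJSONArray(`[{"a": 1},`))      // false: still mid-array
}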
Parsing large amounts of XML can result in many duplicate strings, such as with element names.
Is there a way to instruct Nokogiri to freeze all strings it returns in the parsed document? This might enable the runtime to coalesce all uses of a given piece of text into a single object in memory, right?
I'm intending to rewrite, in Go, a game server I already have working in PHP-cli.
As I cannot touch the client, I have to use the on-wire protocol as-is. This is a pure binary format, with multiple fields per packet. I use a modified version of PHP's pack/unpack commands to convert to and from this. A packet is generally in the form:
header:
unpack('nitem/ncmdNum/NdataLen', $buf);
data:
any of several dozen subsequent unpack strings, as identified by cmdNum. e.g.:
a32first_name/a32second_name/a32third_name/nfirst_itemid/nfirst_flags/nfirst_level/nunused/Nfirst_perm_flags/Ncvsize/a{$cvsize}contentsVector
nsuccess/ndummyfielda/a*xerror_msg/a*xtext/NcurrentVersion/NtimeLimit/a*xlimitValue/a*xlimit
[Where a{$cvsize} means a fixed-length string whose length is given by the named (usually immediately preceding) value, and a* means a zero-terminated variable-length string.]
The current PHP implementation unpacks this and calls the function that deals with command 'cmdNum', passing an associative array containing the unpacked data. That function in turn calls the sending code with a similar array for the return values.
Whilst I'm sure I could map these to structures, reading from the input (and writing back to it) wouldn't simply be a matter of dropping the buffer over the struct. Plus, most packet types are used only once, in a function dedicated to dealing with that message, so coding up several dozen structures, and the code to load each field individually, seems like a lot of work.
Is there any method or package that I can use as the basis for dealing with this sort of thing? My searching for "php unpack in go" only seems to return results based on people unpacking a single numerical value, which is obviously easy enough to replace with encoding/binary!
The unpack/pack strings are auto-generated by some other PHP code based on a specification grabbed from the client, so I could change that to create a different format fairly easily if there is something I can use. I'd normally have no issue writing my own functions for this sort of thing, but being totally new to Go, it might be too much to take on right off the bat.
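(Not an existing package, but as a sketch of how far encoding/binary alone gets you: a fixed-size header like nitem/ncmdNum/NdataLen maps directly onto a struct, since PHP's 'n' is a big-endian uint16 and 'N' a big-endian uint32. The variable-length fields, a* and a{$cvsize}, would still need manual reads afterwards. Struct and field names below are illustrative.)

package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

// header mirrors the unpack string 'nitem/ncmdNum/NdataLen'.
type header struct {
    Item    uint16 // 'n': big-endian unsigned short
    CmdNum  uint16 // 'n'
    DataLen uint32 // 'N': big-endian unsigned long
}

func main() {
    buf := []byte{0x00, 0x01, 0x00, 0x2a, 0x00, 0x00, 0x00, 0x10} // example header bytes
    var h header
    // binary.Read fills the struct fields in declaration order, so a
    // fixed-layout header drops straight onto the struct.
    if err := binary.Read(bytes.NewReader(buf), binary.BigEndian, &h); err != nil {
        panic(err)
    }
    fmt.Printf("item=%d cmdNum=%d dataLen=%d\n", h.Item, h.CmdNum, h.DataLen)
}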
The following question is more complex than it may first seem.
Assume that I've got an arbitrary JSON object, one that may contain any amount of data including other nested JSON objects. What I want is a cryptographic hash/digest of the JSON data, without regard to the actual JSON formatting itself (e.g., ignoring newlines and spacing differences between the JSON tokens).
The last part is a requirement, as the JSON will be generated/read by a variety of (de)serializers on a number of different platforms. I know of at least one JSON library for Java that completely removes formatting when reading data during deserialization. As such, it would break the hash.
The arbitrary-data clause above also complicates things, as it prevents me from taking known fields in a given order and concatenating them prior to hashing (think roughly of how Java's non-cryptographic hashCode() method works).
Lastly, hashing the entire JSON String as a chunk of bytes (prior to deserialization) is not desirable either, since there are fields in the JSON that should be ignored when computing the hash.
I'm not sure there is a good solution to this problem, but I welcome any approaches or thoughts =)
The problem is a common one when computing hashes for any data format where flexibility is allowed. To solve this, you need to canonicalize the representation.
For example, the OAuth 1.0a protocol, which is used by Twitter and other services for authentication, requires a secure hash of the request message. To compute the hash, OAuth 1.0a has you build a canonical "signature base string" first: the request parameters are percent-encoded, sorted alphabetically by name, and concatenated into a single string in a precisely specified way. The signature or hash is computed on the result of that canonicalization.
XML DSIG works the same way - you need to canonicalize the XML before signing it. There is a W3C standard covering this, because it's such a fundamental requirement for signing. Some people call it c14n.
I don't know of a canonicalization standard for JSON. It's worth researching.
If there isn't one, you can certainly establish a convention for your particular application's usage. A reasonable start (sketched in code after this list) might be:
lexicographically sort the properties by name
double quotes used on all names
double quotes used on all string values
no space, or one space, between names and the colon, and between the colon and the value
no spaces between values and the following comma
all other white space collapsed to either a single space or nothing - choose one
exclude any properties you don't want to sign (one example is the property that holds the signature itself)
sign the result, with your chosen algorithm
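As a rough illustration, here is a minimal Go sketch of that convention. It leans on the fact that encoding/json marshals map keys in sorted order with no insignificant whitespace, and it excludes the signature property before hashing ("nichols-hmac" is the illustrative name suggested just below). Note that round-tripping numbers through float64 can change how they are rendered, which a real implementation would need to pin down.

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/base64"
    "encoding/json"
    "fmt"
)

// canonicalHash re-serializes the JSON in a canonical form (sorted
// keys, minimal whitespace) and returns a base64 HMAC of the result.
func canonicalHash(raw, key []byte) (string, error) {
    var obj map[string]interface{}
    if err := json.Unmarshal(raw, &obj); err != nil {
        return "", err
    }
    delete(obj, "nichols-hmac") // exclude the signature property itself
    canon, err := json.Marshal(obj) // sorted keys, no extra whitespace
    if err != nil {
        return "", err
    }
    mac := hmac.New(sha256.New, key)
    mac.Write(canon)
    return base64.StdEncoding.EncodeToString(mac.Sum(nil)), nil
}

func main() {
    // Two different spellings of the same object produce the same hash.
    a, _ := canonicalHash([]byte(`{"b": 1, "a": "x"}`), []byte("secret"))
    b, _ := canonicalHash([]byte("{ \"a\":\"x\",\n  \"b\":1 }"), []byte("secret"))
    fmt.Println(a == b) // true
}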
You may also want to think about how to pass that signature in the JSON object - possibly establish a well-known property name, like "nichols-hmac" or something, that gets the base64 encoded version of the hash. This property would have to be explicitly excluded by the hashing algorithm. Then, any receiver of the JSON would be able to check the hash.
The canonicalized representation does not need to be the representation you pass around in the application. It only needs to be easily produced given an arbitrary JSON object.
Instead of inventing your own JSON normalization/canonicalization, you may want to use bencode. Semantically it's the same as JSON (a composition of numbers, strings, lists, and dicts), but with the property of unambiguous encoding that is necessary for cryptographic hashing.
bencode is used as the torrent file format; every BitTorrent client contains an implementation.
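For a feel of why bencode is unambiguous, here is a toy Go encoder for the JSON-like subset. Dictionary keys are always emitted in sorted order and string lengths are explicit, so there is exactly one encoding per value:

package main

import (
    "fmt"
    "sort"
    "strings"
)

// bencode handles ints, strings, lists, and dicts - the same shapes
// JSON has (floats are the notable omission in real bencode).
func bencode(v interface{}) string {
    var b strings.Builder
    switch x := v.(type) {
    case int:
        fmt.Fprintf(&b, "i%de", x)
    case string:
        fmt.Fprintf(&b, "%d:%s", len(x), x)
    case []interface{}:
        b.WriteString("l")
        for _, e := range x {
            b.WriteString(bencode(e))
        }
        b.WriteString("e")
    case map[string]interface{}:
        keys := make([]string, 0, len(x))
        for k := range x {
            keys = append(keys, k)
        }
        sort.Strings(keys) // sorted keys are what make dicts canonical
        b.WriteString("d")
        for _, k := range keys {
            b.WriteString(bencode(k) + bencode(x[k]))
        }
        b.WriteString("e")
    }
    return b.String()
}

func main() {
    fmt.Println(bencode(map[string]interface{}{"b": 2, "a": "x"}))
    // prints: d1:a1:x1:bi2ee - hash this instead of the JSON text
}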
This is the same issue that causes problems with S/MIME signatures and XML signatures: there are multiple equivalent representations of the data to be signed.
For example in JSON:
{ "Name1": "Value1", "Name2": "Value2" }
vs.
{
"Name1": "Value\u0031",
"Name2": "Value\u0032"
}
Or depending on your application, this may even be equivalent:
{
"Name1": "Value\u0031",
"Name2": "Value\u0032",
"Optional": null
}
Canonicalization could solve that problem, but it's a problem you don't need at all.
The easy solution, if you have control over the specification, is to wrap the object in some sort of container to protect it from being transformed into an "equivalent" but different representation.
I.e. avoid the problem by not signing the "logical" object but signing a particular serialized representation of it instead.
For example, JSON Objects -> UTF-8 Text -> Bytes. Sign the bytes as bytes, then transmit them as bytes e.g. by base64 encoding. Since you are signing the bytes, differences like whitespace are part of what is signed.
Instead of trying to do this:
{
"JSONContent": { "Name1": "Value1", "Name2": "Value2" },
"Signature": "asdflkajsdrliuejadceaageaetge="
}
Just do this:
{
"Base64JSONContent": "eyAgIk5hbWUxIjogIlZhbHVlMSIsICJOYW1lMiI6ICJWYWx1ZTIiIH0s",
"Signature": "asdflkajsdrliuejadceaageaetge="
}
I.e. don't sign the JSON, sign the bytes of the encoded JSON.
Yes, it means the signature is no longer transparent.
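A minimal Go sketch of this wrap-and-sign approach, with an illustrative shared key. The bytes that are base64-encoded are exactly the bytes that are HMACed, so no canonicalization is needed:

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "encoding/base64"
    "encoding/json"
    "fmt"
)

func main() {
    key := []byte("shared-secret") // illustrative key
    payload := []byte(`{ "Name1": "Value1", "Name2": "Value2" }`)

    mac := hmac.New(sha256.New, key)
    mac.Write(payload) // whitespace and escaping are part of what is signed

    envelope, _ := json.Marshal(map[string]string{
        "Base64JSONContent": base64.StdEncoding.EncodeToString(payload),
        "Signature":         base64.StdEncoding.EncodeToString(mac.Sum(nil)),
    })
    fmt.Println(string(envelope))
}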
JSON-LD can do normalization.
You will have to define your context.
RFC 7638: JSON Web Key (JWK) Thumbprint includes a type of canonicalization. Although RFC 7638 expects a limited set of members, the same calculation can be applied to any set of members.
https://www.rfc-editor.org/rfc/rfc7638#section-3
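As a sketch of the RFC 7638 calculation in Go: for an RSA key the required members are "e", "kty", and "n", assembled in lexicographic order with no insignificant whitespace, then hashed as UTF-8. The key values below are placeholders, not a real key:

package main

import (
    "crypto/sha256"
    "encoding/base64"
    "fmt"
)

func main() {
    // Required RSA JWK members in lexicographic order: e, kty, n.
    input := fmt.Sprintf(`{"e":%q,"kty":%q,"n":%q}`, "AQAB", "RSA", "not-a-real-modulus")
    sum := sha256.Sum256([]byte(input))
    // The thumbprint is the base64url encoding of the SHA-256 digest.
    fmt.Println(base64.RawURLEncoding.EncodeToString(sum[:]))
}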
What would be ideal is if JavaScript itself defined a formal hashing process for JavaScript Objects.
Yet we do have RFC 8785, the JSON Canonicalization Scheme (JCS), which can hopefully be implemented in most JSON libraries and, in particular, added to the JSON objects of popular JavaScript runtimes. With that canonicalization done, it is just a matter of applying your preferred hashing algorithm.
If JCS is available in browsers and other tools and libraries, it becomes reasonable to expect most JSON on the wire to be in this common canonicalized form. Consistent application and verification of standards like this can go some way toward pushing back against trivial security threats by low-skilled actors.
I would hash all fields in a given order (alphabetically, for example). Why does arbitrary data make a difference? You can just iterate over the properties (à la reflection).
Alternatively, I would look into converting the raw JSON string into some well-defined canonical form (removing all superfluous formatting) - and hashing that.
We encountered a simple issue with hashing JSON-encoded payloads.
In our case we use the following methodology (a verification sketch follows the list):
Convert the data into a JSON object;
Encode the JSON payload in base64;
Message-digest (HMAC) the generated base64 payload;
Transmit the base64 payload.
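A sketch of the receiving side under this scheme, in Go with an illustrative key: HMAC the received base64 text exactly as transmitted and compare digests in constant time:

package main

import (
    "crypto/hmac"
    "crypto/sha256"
    "fmt"
)

// verify digests the base64 payload as-is and compares signatures
// using a constant-time comparison.
func verify(b64Payload string, sig, key []byte) bool {
    mac := hmac.New(sha256.New, key)
    mac.Write([]byte(b64Payload))
    return hmac.Equal(mac.Sum(nil), sig)
}

func main() {
    key := []byte("shared-secret")
    payload := "eyJhIjoxfQ==" // base64 of {"a":1}, illustrative

    mac := hmac.New(sha256.New, key)
    mac.Write([]byte(payload))
    fmt.Println(verify(payload, mac.Sum(nil), key)) // true
}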
Advantages of using this solution:
Base64 will produce the same output for a given payload.
Since the resulting signature is derived directly from the base64-encoded payload, and since the base64 payload is what is exchanged between the endpoints, we can be certain that the signature and payload stay consistent.
This solution solves problems that arise from differences in the encoding of special characters.
Disadvantages
The encoding/decoding of the payload may add overhead.
Base64-encoded data is about 33% larger than the original payload.
I keep running into a certain kind of data structure, and wonder if there is a name for it. It maps very closely to JSON, but not exactly. The rules are:
It is composed entirely of maps, arrays, and primitives.
It is hierarchical. Maps contain name/value pairs, where a value can be another map, an array, or a primitive. Arrays contain values under the same rules.
The top level is always a map.
The primitives are strings, integers, floats, booleans, and possibly dates.
Sometimes the map is just an unordered hash, and sometimes the order of the name/value pairs matters.
This is a really, really useful structure. You can use it to represent documents, database records, various messages, http requests, lots of stuff. I've run into it in Freemarker (as the 'data model'), Mongo, and anything that uses JSON.
It's not really JSON, because that's a file format, not a specification for a particular data structure. It's not an "object", because object trees can point to other things, like streams and functions. It's not a DOM.
What is it?
Around the office, we've started to call it a "garg", for "generalized argument".
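For what it's worth, the rules above translate almost directly into a recursive type. A minimal Go sketch (names illustrative):

package main

import "fmt"

// Value is any of: map[string]Value, []Value, string, int64, float64,
// bool, or a date type - expressed loosely via the empty interface.
type Value = interface{}

func main() {
    doc := map[string]Value{ // the top level is always a map
        "name":   "example",
        "count":  int64(3),
        "tags":   []Value{"a", "b"},
        "nested": map[string]Value{"ok": true},
    }
    fmt.Println(doc)
}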
It's not really JSON, because that's a file format, not a specification for a particular data structure.
It might not be JSON (since the specs include syntax rules), but your structure definition defines the same data structure as JSON does.
I don't think it's useful to name this structure. When you are talking about data, just call it data. When you need to interchange data you need a data-interchange format. Now JSON proves to be one damn good one.
JSON isn't just a file format. JSON is also a data structure.
From JSON.org
JSON is built on two structures:
A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.
These are universal data structures.
It is a generic data storage structure that carries around hierarchical data. I don't have a generic name for it, but if I were to implement such a beast in, say, C++, I'd probably call the abstract base class a Variant, and name the concrete types by their names: Integer, Array, Map, etc. I'd chuck them in a namespace that would relate to where I'd use them - or maybe I'd prefix the types themselves. I've seen such structures used as well, but I don't know if there is a name that I'd recognize. A DataStore, Environment, StorageBin, or anything that is generic and implies storage of data would do.
I don't see myself calling such a class hierarchy JSON, though. I would provide a JsonSerializer or some such to map this data to JSON, if I needed it.
It sounds like you're describing an associative array, with optional ordering.
That's what JSON represents, except that (I believe) JSON doesn't impose an ordering requirement. Naturally, many other representations also describe associative arrays, which is why JSON is a popular text serialization.
Update 1: JSON isn't properly an associative array. It is a description of object properties. Because it is very often construed as an associative array, many people make the same mistake I did. In fact, "object notation" is the proper name for it - surprise, surprise. :) In addition, JSON isn't a file format - it's a text serialization, which is different from a file format.
The structure is a tree with different kinds of values stored at its leaves.
In Boost, a similar structure is called Property Tree.
I am reading some data from an XML web service with Ruby, something like this:
<phrases>
<phrase language="en_US">¡I'm highly annoyed with character references!</phrase>
</phrases>
I'm parsing the XML and grabbing an array of phrases. As you can see, the phrase text contains some XML character entity references. I'd like to replace them with the actual characters being referenced. This is simple enough with the numeric references, but nasty with the named XML and HTML ones. I'd like to avoid having a big hash in my code that holds the character for each XML or HTML character reference, like the one at http://www.java2s.com/Code/Java/XML/Resolvesanentityreferenceorcharacterreferencetoitsvalue.htm
Surely there's a library for this out there, right?
Update
Yes, there is a library out there, and it's called HTMLEntities:
: jmglov@laurana; sudo gem install htmlentities
Successfully installed htmlentities-4.2.4
: jmglov@laurana; irb
irb(main):001:0> require 'htmlentities'
=> []
irb(main):002:0> HTMLEntities.new.decode "&iexcl;I'm highly annoyed with character references!"
=> "¡I'm highly annoyed with character references!"
REXML can do it, though it won't handle "&iexcl;" or "&nbsp;". The list of predefined XML entities (aside from numeric character references) is actually quite small. See http://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references
Given this input XML:
<phrases>
<phrase language="en_US">"I'm highly annoyed with character references!©</phrase>
</phrases>
you can parse the XML and the embedded entities like this (for example):
require 'rexml/document'
doc = REXML::Document.new(File.read('/tmp/foo.xml'))
phrase = REXML::XPath.first(doc, '//phrases/phrase')
text = phrase.first # Type is REXML::Text
puts(text.value)
Obviously, that example assumes that the XML is in file /tmp/foo.xml. You can just as easily pass a string of XML. On my Mac and Ubuntu systems, running it produces:
$ ruby /tmp/foo.rb
"I'm highly annoyed with character references!©
This isn't an attempt to provide a solution, it's to relate some of my own experiences dealing with XML from the wild. I was using Perl at first, then later using Ruby, and the experiences are something you can encounter easily if you grab enough XML or RDF/RSS/Atom feeds.
I've often seen XML CDATA contain HTML, both encoded and unencoded. The encoded HTML was probably the result of someone doing things the right way, via some API or library to generate XML. The unencoded HTML was probably someone using a script to wrap the HTML with tags, resulting in invalid XML, but I had to deal with it anyway.
I've also seen XML CDATA containing HTML that had been encoded multiple times, requiring me to unencode everything, even after the XML engine had done its thing. Sometimes during an intermediate pass I'd suddenly have non-UTF8 characters in the string along with encoded ones, as a result of someone appending comments or joining multiple HTML streams from different character sets. For whatever reason, it was really ugly and caused XML parsing to break or emit a lot of warnings. I'd have to loop over the content, decoding and checking whether the previous pass matched the current decoding pass, and bailing if nothing had changed. There was no guarantee I'd have a string in a valid character set at that point, though, so I'd have to tell iconv to convert it to UTF-8 and throw away characters that wouldn't convert cleanly.
Nokogiri can decode the content of a node various ways, by creative use of the to_xml and to_html methods. You can also look at the HTMLEntities gem, Loofah, and others to go after the CDATA contents. Loofah is nice because it's designed to whitelist/blacklist tags you might encounter.
The XML spec is supposed to protect us from such shenanigans, but, as one of my co-workers used to tell me, "We can make it fool-proof, but not damn-fool-proof". People are SO inventive and the specs mean nothing to someone who didn't bother to read them or doesn't care.