I am working on a project where we are using protocol buffers to create and parse some of our messages (protobuf-net). This is so elegant that I would like to use the same deserialization machinery to parse other messages coming from external, non-protobuf sources. Is this possible?
I would imagine it could be possible to declare all of the .proto fields as fixed-size types (i.e. not varints). The question is then whether you could replace the protobuf headers with other magic numbers, or with whatever header the third-party protocol uses.
If this is a bit confusing, an example may shed some light:
Let's say you buy a fancy toaster that exposes an ethernet port. It supports a proprietary but well-documented protocol. Can you burn heart-shaped patterns on your toast using protobuf?
At the moment, no: the library is tied to the protobuf wire specification; it does not have support for non-protobuf data.
In a way, it is a bit like asking: "can XmlSerializer read/write json?". It isn't something that is on my list of things to look at, to be honest.
Related
I am working on an ECG module which outputs data as bytes. There is a protocol document explaining how the packets coming out of the module are structured.
I want to decode that data.
I am not sure whether protocol buffers will help with this or not. Are there other methods that would be helpful for decoding this data and implementing the protocol in Python?
Protocol buffers only works with its own encoding format.
For decoding a manufacturer-specific custom binary format, I would suggest Python's built-in struct module. Here is a basic tutorial which is easy to get started with.
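For example, if the protocol document specifies a fixed packet layout, struct.unpack can turn the raw bytes into Python values in a few lines. The layout below (sync word, sequence counter, four 16-bit samples, checksum) is made up purely for illustration; substitute the format string that matches your module's documentation.

    import struct

    # Hypothetical packet layout, for illustration only: a 2-byte sync word,
    # a 1-byte sequence counter, four 16-bit little-endian ECG samples,
    # and a 1-byte checksum. Replace with the layout from the protocol document.
    PACKET_FORMAT = "<HB4hB"                      # little-endian: uint16, uint8, 4x int16, uint8
    PACKET_SIZE = struct.calcsize(PACKET_FORMAT)  # 12 bytes

    def decode_packet(raw: bytes):
        """Unpack one fixed-size packet into named fields."""
        sync, seq, s1, s2, s3, s4, checksum = struct.unpack(PACKET_FORMAT, raw)
        return {"sync": sync, "seq": seq, "samples": [s1, s2, s3, s4], "checksum": checksum}

    # Example: decode a buffer as it might arrive from the module.
    buffer = struct.pack(PACKET_FORMAT, 0xAA55, 1, 100, 102, 98, 101, 0x3C)
    print(decode_packet(buffer))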
If the manufacturer-specific format is text-based instead, you can either use basic string manipulation to split it into tokens, or use one of the parsing libraries available for Python.
I have an NFC card which supports multiple technologies (for example NfcA, MifareClassic, IsoDep). I want to understand which technology has been used to write to the tag.
Are these technologies independent? I mean, can I use MifareClassic to write some data and then use NfcA (or IsoDep) to read that data?
Or does each of these technologies have its own memory?
I have been reading a lot about this subject over the last few days but have not found a good reference.
I also did some tests myself. I wrote an Android app to write an NdefMessage to a tag, and I found the corresponding data bytes when I used the MifareClassic APIs to dump the memory.
I took a look at the code inside the MifareClassic library and found that all of the relevant functions (e.g. readBlock(), writeBlock()) create a byte array and pass it to transceive(). The Android documentation mentions that calling MifareClassic.transceive is the same as calling NfcA.transceive,
which is a bit ironic, since the NfcA documentation states that NfcA and MifareClassic are not the same and use different transmission protocols.
Another thing I realized was that NDEF is not a protocol by itself; it is a standard format for storing data. Apparently the Ndef class has different implementations for different tags. On a MifareClassic tag you can only use the Ndef class to write data if the tag is using the default keys; otherwise you will not be able to write to the tag.
I am new to web development, and I've seen many sites preaching the benefits of using protocol buffers -- for example: https://codeclimate.com/blog/choose-protocol-buffers/.
I'm not sure if some of the benefits apply to my use case:
Having a unified schema out of the .proto file: if I validate my data in both the front end and the back end, which I should do anyway, a unified schema is already enforced explicitly. I don't see any added benefit from protocol buffers in this regard.
Auto-generating the setters and getters from the .proto file: this looks like a nice selling point, but I wouldn't need any setters and getters if I didn't use protocol buffers in the first place. I found them really cumbersome to work with:
They remove capitalization, which alters the original variable names
They are unnatural to work with. For example, in C++ I would want to work with just a plain old data structure, but instead I have to do something like ptr_message->shouldBeStruct1().shouldBeStructArray(20).shouldBeInt();
Easy language interoperability: I really doubt it is good practice to design my data-consuming code so that it works on a protobuf message rather than on a struct. So I would need to parse the protobuf into a plain data struct first.
The only potential benefit I see is the reduced data size when transmitting on the wire. But does this really justify the overhead of additional middleware to work with protocol buffers? What am I missing?
What are the pros and cons of protocol buffer (protobuf) over GSON?
In what situations is protobuf more appropriate than GSON?
I am sorry for a very generic question.
Both json (via the gson library) and protobuf are portable between platforms; but
protobuf is smaller (bandwidth) and cheaper (CPU) to read/write
json is human readable / editable (protobuf is binary; hard to parse without library support)
protobuf fragments are trivial to merge: just concatenate them (sketched below)
json is easily passed to web page clients
the main java version of protobuf needs contract-definition (.proto) and code-generation; gson seems to allow arbitrary pojo usage (there are protobuf implementations that work on such objects, but not for java afaik)
If performance is key: protobuf
For use with a web page (JavaScript), or human readable: json (perhaps via gson)
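To illustrate the "just concatenate" point, here is a rough Python sketch (the property comes from the wire format itself, so it applies to the Java runtime as well). The Reading message and readings_pb2 module are hypothetical stand-ins for whatever protoc generates from your own .proto:

    # Hypothetical: assumes a message compiled with protoc from a .proto like
    #
    #   message Reading {
    #     optional int32 device_id = 1;
    #     repeated int32 samples   = 2;
    #   }
    #
    # and imported as readings_pb2. All names here are illustrative only.
    from readings_pb2 import Reading

    a = Reading(device_id=7, samples=[1, 2])
    b = Reading(samples=[3, 4])

    # Concatenating the two encoded fragments and parsing the result merges them:
    # singular fields take the last value seen, repeated fields are appended.
    merged = Reading()
    merged.ParseFromString(a.SerializeToString() + b.SerializeToString())
    assert list(merged.samples) == [1, 2, 3, 4]
    assert merged.device_id == 7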
If you want efficiency and cross-platform compatibility, you should send raw messages between applications containing the information that is necessary and nothing more, nothing less.
Serialising classes via Java's own mechanisms, gson, protobufs or whatever, creates data that contains not only the information you wish to send, but also information about the logical structures/hierarchies of the data structures that have been used to represent the data inside your application.
This makes those classes and data mappings dual-purpose: one, to represent the data internally in the application, and two, to be transmitted to another application. Those two roles can conflict, and there is an onus on the developer to remember that the classes, collections and data layout he is working with at any time will also be serialised.
Does Google protocol buffers support STL vectors, maps and boost shared pointers? I have some objects that make heavy use of STL containers like maps and vectors, and also boost::shared_ptr. I want to use Google protocol buffers to serialize these objects across the network to different machines.
I want to know whether Google protobuf supports these containers. Also, would it be better if I used Apache Thrift instead? I only need to serialize/deserialize data and don't need the network transport that Apache Thrift offers. Also, Apache Thrift not having proper documentation puts me off.
Protocol buffers directly handles an intentionally small number of constructs; vectors map nicely to the "repeated" element type, but how this is presented in C++ is via "add" methods - you don't (AFAIK) just hand it a vector. See "Repeated Embedded Message Fields" here for more info.
Re maps: there is no inbuilt mechanism for that, but a key/value pair is easily represented in .proto (typically key = 1, value = 2) and then handled via "repeated".
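As a minimal sketch of that key/value-pair pattern, shown with the Python generated API for brevity (the C++ generated code exposes the same shape through add_... methods on the repeated field); the Dictionary/Entry messages and the dictionary_pb2 module are hypothetical, standing in for whatever you compile from your own .proto:

    # Hypothetical .proto, compiled with protoc to dictionary_pb2:
    #
    #   message Entry      { optional string key = 1; optional int32 value = 2; }
    #   message Dictionary { repeated Entry entries = 1; }
    #
    from dictionary_pb2 import Dictionary

    def dict_to_message(d):
        """Flatten a native map into repeated key/value entries."""
        msg = Dictionary()
        for k, v in d.items():
            entry = msg.entries.add()   # the "add" method mentioned above
            entry.key = k
            entry.value = v
        return msg

    def message_to_dict(msg):
        """Rebuild the map on the receiving side."""
        return {e.key: e.value for e in msg.entries}

    payload = dict_to_message({"alpha": 1, "beta": 2}).SerializeToString()
    restored = Dictionary()
    restored.ParseFromString(payload)
    print(message_to_dict(restored))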
A shared_ptr itself would seem to have little meaning in a serialized file, but the object it points to can presumably be handled as a message.
Note that in the Google C++ version the DTO layer is generated, so you may need to map between the generated types and any existing object model. This is usually pretty trivial.
For some languages/platforms there are protobuf variants that work against existing object models.
(sorry, I can't comment on thrift - I'm not familiar with it)