gson vs protocol buffer

What are the pros and cons of protocol buffer (protobuf) over GSON?
In what situations is protobuf more appropriate than GSON?
I am sorry for a very generic question.

Both json (via the gson library) and protobuf are portable between platforms; but
protobuf is smaller (bandwidth) and cheaper (CPU) to read/write
json is human readable / editable (protobuf is binary; hard to parse without library support)
protobuf is trivial to merge fragments - just concatenate
json is easily passed to web page clients
the main java version of protobuf needs contract-definition (.proto) and code-generation; gson seems to allow arbitrary pojo usage (there are protobuf implementations that work on such objects, but not for java afaik)
If performance is key: protobuf
For use with a web page (JavaScript), or human readable: json (perhaps via gson)
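For a rough feel of the difference, here is a minimal Java sketch (class and field names are made up for illustration; the protobuf Person class is assumed to be generated by protoc from a .proto contract):

    import com.google.gson.Gson;

    public class SerializationComparison {

        // Plain POJO that Gson can serialize without any contract file.
        static class PersonPojo {
            String name;
            int age;
            PersonPojo(String name, int age) { this.name = name; this.age = age; }
        }

        public static void main(String[] args) throws Exception {
            // Gson: arbitrary POJO -> human-readable JSON text and back.
            Gson gson = new Gson();
            String json = gson.toJson(new PersonPojo("Alice", 30));
            PersonPojo fromJson = gson.fromJson(json, PersonPojo.class);

            // Protobuf: 'Person' is assumed to be generated by protoc from
            // a contract such as: message Person { string name = 1; int32 age = 2; }
            Person msg = Person.newBuilder().setName("Alice").setAge(30).build();
            byte[] bytes = msg.toByteArray();           // compact binary encoding
            Person fromBytes = Person.parseFrom(bytes);
        }
    }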

If you want efficiency and cross-platform portability, you should send raw messages between applications containing the information that is necessary and nothing more.
Serialising classes via Java's own mechanisms, gson, protobufs or whatever creates data that contains not only the information you wish to send, but also information about the logical structures/hierarchies of the data structures that have been used to represent the data inside your application.
This makes those classes and data mappings dual purpose: one, to represent the data internally in the application, and two, to be transmitted to another application. Those two roles can conflict, and there is an onus on the developer to remember that the classes, collections and data layout he is working with at any time will also be serialised.
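To make the dual-purpose point concrete, here is a minimal Java sketch (class names made up): the internal class drags implementation details along with it, while a deliberately small transfer class carries only what the other application needs.

    // Internal representation: carries implementation details and helper state.
    class AccountInternal {
        long id;
        String owner;
        java.util.Map<String, Object> sessionCache;  // internal-only state
        java.util.List<String> auditTrail;           // internal-only state
    }

    // Transfer object: only the information the receiving application needs.
    class AccountMessage {
        long id;
        String owner;
    }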

Related

protocol buffers in web application architecture -- when are they not worth the trouble?

I am new to web development, and I've seen many sites preaching the benefits of using protocol buffers -- for example: https://codeclimate.com/blog/choose-protocol-buffers/.
I'm not sure if some of the benefits apply to my use case:
Having a unified schema out of the .proto file: If I validate my data in the front and back-end, which I should do anyway, a unified schema is enforced explicitly. I don't see any added benefit in this regard from using protocol buffers.
Auto-generating the setters and getters from the .proto file: This looks like a nice selling point, but I wouldn't need any setters and getters if I didn't use protocol buffers in the first place. I found them really cumbersome to work with:
They remove capitalization, which alters the original variable names
They are unnatural to work with. For example, in C++ I would want to work with just a plain old data structure, but instead I have to write something like ptr_message->shouldBeStruct1().shouldBeStructArray(20).shouldBeInt();
Easy Language Interoperability: I really doubt it is good practice to design my data consuming code so that it works for a protobuf message rather than a struct. So, I would need to parse the protobuf into a plain data struct first.
The only potential benefit I see is the reduced data size when transmitting on the wire. But, does this really justify the overhead of additional middleware to work with protocol buffers? What am I missing?
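To illustrate the extra mapping step in question, a sketch in Java (message, field and class names are hypothetical; UserProto stands for a class generated from a .proto file):

    // Plain domain class that application code actually wants to work with.
    class UserData {
        String name;
        int age;
    }

    // Extra glue layer: convert the generated protobuf message into the plain class.
    class UserMapper {
        static UserData fromProto(UserProto msg) {
            UserData u = new UserData();
            u.name = msg.getName();
            u.age = msg.getAge();
            return u;
        }
    }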

Hadoop Text class

I want to ask why the Hadoop Framework, which implements the MapReduce distributed programming paradigm, uses a Text class to store a String when Java already has Strings implemented for us to use? It seems unnecessarily redundant (lol).
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Text.html
They have implemented their own class Text for String, LongWritable for Long, IntWritable for Integer.
The purpose behind adding these classes is to define their own basic types for optimized network serialization. These are found in the org.apache.hadoop.io package.
These types produce a compact serialized form that makes the best use of network bandwidth. Hadoop is meant to process big data, so network bandwidth is the most precious resource and they want to use it as effectively as possible. In addition, these classes have lower serialization and deserialization overhead compared to Java's native types.
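As a small illustration (a sketch, not an official example): a Text instance is mutable and serialises itself as length-prefixed UTF-8 bytes, with nothing else in the stream.

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import org.apache.hadoop.io.Text;

    public class TextDemo {
        public static void main(String[] args) throws Exception {
            Text text = new Text("hello");
            text.set("hello again");         // mutable: the same object is reused

            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buffer);
            text.write(out);                 // vint length + UTF-8 bytes, nothing else

            System.out.println(text.getLength());  // bytes in the UTF-8 encoding
            System.out.println(text.toString());   // convert back to a java.lang.String
        }
    }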
Redundant???
Let me shed some light. When we talk about distributed systems, efficient serialization/deserialization plays a vital role. It appears in two quite distinct areas of distributed data processing:
IPC
Persistent Storage
To be specific to Hadoop, IPC between nodes is implemented using RPCs. The RPC protocol uses serialization to render the message into a binary stream to be sent to the remote node, which then deserializes the binary stream into the original message. So, it is very important to have a solid Serialization/Deserialization framework in order to store and process huge amounts of data efficiently. In general, it is desirable that an RPC serialization format is:
Compact
Fast
Extensible
Interoperable
Hadoop uses its own types because developers wanted the storage format to be compact (to make efficient use of storage space), fast (so the overhead in reading or writing terabytes of data is minimal), extensible (so we can transparently read data written in an older format), and interoperable (so we can read or write persistent data using different languages).
A few points to remember before concluding that having dedicated MapReduce types is redundant:
Hadoop’s Writable-based serialization framework provides a more efficient and customized serialization and representation of the data for MapReduce programs than Java’s general-purpose native serialization framework.
As opposed to Java’s serialization, Hadoop’s Writable framework does not write the type name with each object, expecting all the clients of the serialized data to be aware of the types used in the serialized data. Omitting the type names makes the serialization process faster and results in compact, randomly accessible serialized data formats that can be easily interpreted by non-Java clients.
Hadoop’s Writable-based serialization also has the ability to reduce the object-creation overhead by reusing the Writable objects, which is not possible with Java’s native serialization framework (see the sketch below).
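A small sketch of both points using the built-in IntWritable (not from the original answer): the stream carries only the raw values, with no type names or headers, and the same object instance is reused for writing and reading.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import org.apache.hadoop.io.IntWritable;

    public class WritableReuseDemo {
        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buffer);

            IntWritable value = new IntWritable();
            value.set(42);
            value.write(out);   // exactly 4 bytes, no class name or stream header
            value.set(43);
            value.write(out);   // same object reused for the next record

            DataInputStream in =
                    new DataInputStream(new ByteArrayInputStream(buffer.toByteArray()));
            value.readFields(in);   // deserializes into the existing instance: 42
            value.readFields(in);   // reused again: 43
        }
    }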
HTH
Why can't I use the basic String or Integer classes?
Integer and String implement Java's standard Serializable interface. The problem is that MapReduce does not serialize/deserialize values using this standard interface, but rather its own interface, which is called Writable.
The key and value classes have to be serializable by the framework and hence need to implement
the Writable interface. Additionally, the key classes have to implement the WritableComparable
interface to facilitate sorting by the framework.
Here is the link to MapReduce Tutorial
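For illustration, a minimal custom key type (a made-up example, not from the tutorial) that satisfies both requirements might look like this:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableComparable;

    // A composite key usable as a MapReduce key: serializable via Writable
    // and sortable via Comparable.
    public class WordYearKey implements WritableComparable<WordYearKey> {
        private final Text word = new Text();
        private final IntWritable year = new IntWritable();

        public void set(String w, int y) { word.set(w); year.set(y); }

        @Override
        public void write(DataOutput out) throws IOException {
            word.write(out);
            year.write(out);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            word.readFields(in);
            year.readFields(in);
        }

        @Override
        public int compareTo(WordYearKey other) {
            int cmp = word.compareTo(other.word);
            return cmp != 0 ? cmp : year.compareTo(other.year);
        }
    }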

Technology for database access system

I am currently designing a system which should allow access to a database. Assumptions are as follows:
The database should have an access layer. The access layer should provide objects that represent database tables. (This would be done using some ORM framework.)
Clients which want to get data from the database should get objects from the access layer first, and then get data using those objects.
Clients could use Python, Java or C++.
The access layer is based on Java.
There won't be too many clients, but they will be operating on large amounts of data.
The question which is hard for me is what technology should be used for passing objects between the access layer and clients. I am considering ZeroC Ice, Apache Thrift or Google Protocol Buffers.
Does anyone have opinion which one is worth using?
This is my research for Protocol Buffers:
Advantages:
simple to use and easy to start
well documented
highly optimized
defining object data structure in java-like language
automatically generating implementation of setters and getters and build methods for Python, Java and C++
open-source bindings for other languages
objects can be extended without affecting old versions of an application
there are many open-source RpcChannel and RpcController implementations (not tested)
Disadvantages:
need to implement object transfer yourself (see the sketch after this list)
object structures have to be defined before use, so we can't add fields on the fly (Update: there are possibilities to do that, see the comments)
if there is a need to read one of an object's fields, we have to parse the whole file (in contrast, in XML we could ignore chosen tags)
if we want to use RPC to invoke object methods, we need to define services and deliver RpcChannel and RpcController implementations
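As a sketch of what "implementing the object transfer" amounts to (the DataRow message, host and port are hypothetical; the generated class would come from protoc), a message can be sent over a plain socket with length-delimited framing:

    import java.net.Socket;

    // 'DataRow' is a hypothetical message class generated by protoc.
    public class ProtoTransferSketch {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 9000)) {
                DataRow row = DataRow.newBuilder().setId(1).setName("example").build();
                // Length-delimited framing so the receiver knows where each message ends.
                row.writeDelimitedTo(socket.getOutputStream());

                DataRow echoed = DataRow.parseDelimitedFrom(socket.getInputStream());
            }
        }
    }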
This is my research for Apache Thrift:
Advantages:
provides a compiler that generates source code for the supported languages (classes and everything else that is important)
allows defining optional fields in structures (when we do not set a value on a field, the size of the transferred data is lower)
allows marking some methods as "one way" (they return nothing, and after invocation the client does not wait for an answer from the server about completion of the request)
supports serialization (and deserialization) of collections (maps, lists, sets), objects, primitives, constants, enumerations and exceptions
most problems and errors have already been solved and explained somewhere
provides different methods of serialization (TBinaryProtocol, ...) and different ways of exchanging data (TBufferedTransport, TZlibTransport, ...) - see the client sketch after this list
the compiler produces classes (structures) for each language that we can extend by adding new methods
it is possible to add fields to the protocol (on the server as well as the client) and remove others - old code and new code can interoperate properly (subject to some update rules)
enable asynchronous calls
easy to use
Disadvantages:
the documentation contains some errors, so sometimes it is really hard to find the source of a problem
problems are not always well tagged (when we look for solutions on the Internet)
does not support overloading of service methods
tutorials cover only simple examples of thrift usage
hard to start
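For reference, the client-side wiring in Java follows the pattern below (the DataService name and port are made up; the DataService.Client class would be generated by the thrift compiler):

    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.protocol.TProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class ThriftClientSketch {
        public static void main(String[] args) throws Exception {
            TTransport transport = new TSocket("localhost", 9090);
            transport.open();
            TProtocol protocol = new TBinaryProtocol(transport);  // serialization method

            // 'DataService.Client' is assumed to be generated from a .thrift file.
            DataService.Client client = new DataService.Client(protocol);
            // client.fetchRows(...);   // call methods defined in the service

            transport.close();
        }
    }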
ICE ZeroC:
It is better than Protocol Buffers in that I wouldn't need to implement object passing myself via e.g. sockets. Ice also provides ServantLocators, which can manage connections.
The question is: is Ice much slower and less efficient than PB?

Google protocol buffers and stl vectors, maps and boost shared pointers

Does google protocol buffers support stl vectors, maps and boost shared pointers? I have some objects that make heavy use of stl containers like maps, vectors and also boost::shared_ptr. I want to use google protocol buffers to serialize these objects across the network to different machines.
I want to know whether google protobuf supports these containers. Also, if I use apache thrift instead, will it be better? I only need to serialize/deserialize data and don't need the network transport that apache thrift offers. Also, apache thrift not having proper documentation puts me off.
Protocol buffers directly handles an intentionally small number of constructs; vectors map nicely to the "repeated" element type, but how this is presented in C++ is via "add" methods - you don't (AFAIK) just hand it a vector. See "Repeated Embedded Message Fields" here for more info.
Re maps; there is no inbuilt mechanism for that, but a key/value pair is easily represented in .proto (typically key = 1, value = 2) and then handled via "repeated".
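For illustration, in the Java generated API (the C++ one is analogous) a repeated field and a repeated key/value pair are used roughly like this; Snapshot and Entry are hypothetical messages generated from a .proto file:

    // Assumed .proto (for illustration only):
    //   message Entry    { string key = 1; int32 value = 2; }
    //   message Snapshot { repeated int32 samples = 1; repeated Entry entries = 2; }

    Snapshot.Builder builder = Snapshot.newBuilder();

    // A vector/list maps onto a repeated field via add* methods.
    builder.addSamples(10);
    builder.addSamples(20);

    // A map is represented as repeated key/value messages.
    builder.addEntries(Entry.newBuilder().setKey("alpha").setValue(1));
    builder.addEntries(Entry.newBuilder().setKey("beta").setValue(2));

    Snapshot snapshot = builder.build();
    int first = snapshot.getSamples(0);      // indexed access
    int count = snapshot.getEntriesCount();  // size of the repeated field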
A shared_ptr itself would seem to have little meaning in a serialized file. But the object it points to may (presumably) be handled as a message.
Note that in the google C++ version the DTO layer is generated, so you may need to map between them and any existing object model. This is usually pretty trivial.
For some languages/platforms there are protobuf variants that work against existing object models.
(sorry, I can't comment on thrift - I'm not familiar with it)

what is a data serialization system?

According to the Apache Avro project, "Avro is a serialization system". By saying data serialization system, does it mean that Avro is a product or an API?
Also, I am not quite sure what a data serialization system is. For now, my understanding is that it is a protocol that defines how a data object is passed over the network. Can anyone explain it in an intuitive way that is easier to understand for people with a limited distributed computing background?
Thanks in advance!
So when Hadoop was being written by Doug Cutting, he decided that the standard Java method of serializing Java objects using Java Object Serialization (Java Serialization) didn't meet his requirements for Hadoop. Namely, these requirements were:
Serialize the data into a compact binary format.
Be fast, both in performance and in how quickly it allowed data to be transferred.
Be interoperable so that other languages can plug into Hadoop more easily.
As he described Java Serialization:
It looked big and hairy and I thought we needed something lean and mean
Instead of using Java Serialization they wrote their own serialization framework. The main perceived problem with Java Serialization was that it writes the classname of each object being serialized to the stream, with each subsequent instance of that class containing a 5-byte reference to the first, instead of the classname.
As well as reducing the effective bandwidth of the stream, this causes problems with random access and with sorting of records in a serialized stream. Thus Hadoop serialization doesn't write the classname or the required references, and makes the assumption that the client knows the expected type.
Java Serialization also creates a new object for each one that is deserialized. Hadoop Writables, which implement Hadoop Serialization, can be reused, which helps to improve the performance of MapReduce, which serializes and deserializes billions of records.
Avro fits into Hadoop in that it approaches serialization in a different manner. The client and server exchange a schema which describes the datastream. This helps make it fast and compact and, importantly, makes it easier to mix languages together.
So Avro defines a serialization format, a protocol for clients and servers to communicate these serial streams and a way to compactly persist data in files.
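A small sketch of that last point using Avro's Java generic API (the record schema here is an ad-hoc example): the schema describes the data, and the binary encoding carries only the field values.

    import java.io.ByteArrayOutputStream;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.DatumWriter;
    import org.apache.avro.io.EncoderFactory;

    public class AvroSketch {
        public static void main(String[] args) throws Exception {
            // The schema describes the datastream; client and server share it.
            String schemaJson = "{\"type\":\"record\",\"name\":\"User\","
                    + "\"fields\":[{\"name\":\"name\",\"type\":\"string\"},"
                    + "{\"name\":\"age\",\"type\":\"int\"}]}";
            Schema schema = new Schema.Parser().parse(schemaJson);

            GenericRecord user = new GenericData.Record(schema);
            user.put("name", "Alice");
            user.put("age", 30);

            // The binary encoding contains only the field values;
            // the schema carries the names and types.
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            writer.write(user, encoder);
            encoder.flush();
        }
    }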
I hope this helps. I thought a bit of Hadoop history would help in understanding why Avro is a subproject of Hadoop and what it's meant to help with.
If you have to store information such as a hierarchy or data structure implementation details in a file of limited size and pass that information over a network, you use data serialization. It is close to understanding the XML or JSON format. The benefit is that the information which is translated into any serialization format can be deserialized to regenerate the classes, objects and data structures, whatever was serialized.
actual implementation --> serialization --> .xml or .json or .avro --> deserialization --> implementation in original form
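A minimal round trip of that flow in Java, using Gson for the .json step (class name made up):

    import com.google.gson.Gson;

    public class RoundTripSketch {
        static class Point {       // "actual implementation"
            int x;
            int y;
        }

        public static void main(String[] args) {
            Gson gson = new Gson();

            Point original = new Point();
            original.x = 3;
            original.y = 4;

            String json = gson.toJson(original);                // serialization -> .json text
            Point restored = gson.fromJson(json, Point.class);  // deserialization -> original form
        }
    }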
Here is the link to the list of serialization formats. Comment if you want further information! :)

Resources