Protocol Buffers is faster than FlatBuffers at deserialization, but it wastes a lot of time to hand-write code that converts a Protocol Buffers serialized binary string into a FlatBuffers serialized binary string. Are there any tools that can help do this? The FlatBuffers IDL can be the same as the Protocol Buffers IDL.
Related
I need to convert data from ASCII to EBCDIC in an Informatica Transformation. I have attempted to use CONVERT_BASE in an expression using string datatypes for the currency data, but received a non-fatal error.
I've also googled a fair amount and have been unable to find a solution.
Has anyone encountered and been successful in a situation like this?
In Complex Data Exchange, you do not require a transformer to convert ASCII to EBCDIC format.
To change the encoding from ASCII to EBCDIC format, do the following:
Launch ContentMaster Studio.
Go to Project > Properties > Encoding.
Change the output encoding to EBCDIC-37 and the byte order to BigEndian.
In case you need to transfer a flat file from a mainframe (EBCDIC) to Linux (ASCII) and preserve packed-decimal / COMP-3 fields (i.e. not unpack the COMP-3 fields): you can use a combination of PWX for VSAM/Sequential on the mainframe source and PWX for Flat Files on the Linux machine for the target.
Create appropriate datamaps for the source, and for the target.
On the source side, use datamap functions to create a new field for each of the packed fields, as an unpacked value.
In the mapping, bring in the unpacked value ports, not the packed ones, as numerics.
In the datamap for the target, create only the packed fields.
In the mapping, map the (unpacked) numerics to the packed numeric fields.
PWX should handle the conversions for you.
Note that this includes operations on packed fields, so some signs may get converted from F to C.
I have to use XML to send UTF-16 data (e.g. 0x123, 0x145). I am new to XML, and we are using the libxml2 library. I am able to add string data (e.g. 123456) and read that child node's data back, but I am stuck on how to send unsigned short int data through XML. I am using the GCC compiler and the libxml2 library on an Ubuntu machine.
XML only supports textual data, not binary data. This means you have to serialize numbers as strings when writing XML, for example in decimal or hexadecimal notation.
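The asker is working in C with libxml2, but the principle is language-independent: format the number as text on the way in, parse the text back into a number on the way out. A minimal sketch using Ruby's stdlib REXML (Ruby to match the code elsewhere on this page; the element names are made up):

    require 'rexml/document'

    doc = REXML::Document.new
    root = doc.add_element('samples')
    [0x123, 0x145].each do |v|                        # the 16-bit values from the question
      root.add_element('value').text = format('0x%04X', v)
    end
    doc.write($stdout)  # <samples><value>0x0123</value><value>0x0145</value></samples>

    # Reading back: parse the text content and convert it to a number again.
    parsed = REXML::Document.new(doc.to_s)
    nums = REXML::XPath.match(parsed, '//value').map { |e| e.text.to_i(16) }

In C the same idea is snprintf into a buffer, then hand that buffer to libxml2 as the node's text.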
I want to store a large amount of data in protobuf format, where each record includes a timestamp field, and I want to retrieve the data based on the timestamp value.
Thanks.
Protobuf is a sequential-access format. There's no way to jump into the middle of a message looking for data; you have to parse through the whole thing.
Some options:
Devise a framing format that allows you to break up your datastore into many small chunks, each of which is a separate protobuf message (a minimal sketch follows this list). This is a fairly large project.
Use SQLite or even an actual database.
Use a random-access-friendly format like Cap'n Proto instead. (Disclosure: I'm the author of Cap'n Proto, and also of Protobufs v2 (Google's open source release).)
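For the first option, here is a minimal Ruby sketch of such a framing format, under assumptions of mine: each record is an already-serialized protobuf string, ts is a timestamp you extract from the record before writing, and the 4-byte big-endian length prefix plus the in-memory index are illustrative choices, not part of protobuf:

    # Append one serialized protobuf message, remembering (timestamp, offset).
    def append_record(io, index, ts, bytes)
      index << [ts, io.pos]
      io.write([bytes.bytesize].pack('N'))  # 4-byte big-endian length prefix
      io.write(bytes)
    end

    # Random access: seek to a recorded offset and read one message back.
    def read_record_at(io, offset)
      io.seek(offset)
      len = io.read(4).unpack1('N')
      io.read(len)
    end

Keep the index sorted by timestamp; a lookup is then index.bsearch { |t, _| t >= wanted_ts } followed by read_record_at, instead of a parse through the whole file.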
For instance: we need a third-party lib to parse a file and get its metadata. But the method decodes all the metadata as UTF-8; even if the metadata is in another encoding, it returns us a UTF-8-encoded string. And the lib doesn't provide any method that returns the raw string data for us to decode correctly. Now, we know the original encoding of the file's metadata is, for example, GBK. Is there a way to convert the UTF-8-encoded string back to GBK?
No, there isn't: decoding something as UTF-8 that isn't UTF-8 is lossy. By the time you get the string from the lib, you have already lost information and can't reconstruct the original GBK data. Change how the lib works, or change the file metadata to UTF-8.
Yes. You should learn about Ruby 1.9's force_encoding and encode methods on the String class. I recommend converting everything to actually be UTF-8 as soon as possible, before manipulating it in Ruby.
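A minimal sketch of the difference between the two methods, assuming you can still get at the raw, undamaged bytes (the byte string below is "你好" in GBK; if the lib has already mangled the data as the other answer describes, no relabeling will bring it back):

    raw = "\xC4\xE3\xBA\xC3"         # raw GBK bytes, carrying the wrong label
    s = raw.force_encoding('GBK')    # relabel only: same bytes, now tagged GBK
    utf8 = s.encode('UTF-8')         # transcode: new bytes, valid UTF-8 "你好"

force_encoding changes only the encoding tag on the existing bytes; encode actually converts the bytes from one encoding to another.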
I am using the thrift ruby gem and am doing the following:
serializer = Thrift::Serializer.new
binary_string = serializer.serialize(my_thrift_obj)
and I am storing this binary_string in a file, but I noticed there is no compression at all. Is there any way I can compress my_thrift_obj while serializing?
Also, is there a way to serialize arbitrary ruby hashes to thrift objects?
I got the following reply from Thrift author Mark Slee:
The compact protocol doesn't do compression; the word "compact" refers to the way it encodes structure and type metadata. Thrift is intended for strongly-typed structured data serialization, not compression. A file is already serialized - it sounds like what you really want is to compress serialized data. Would recommend using zlib or gzip for that.
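Following that advice, a minimal sketch of compressing the serialized bytes with Ruby's zlib before writing them to the file (my_thrift_obj is from the question; MyThriftStruct stands in for whatever generated struct class it actually is):

    require 'thrift'
    require 'zlib'

    serializer = Thrift::Serializer.new
    compressed = Zlib::Deflate.deflate(serializer.serialize(my_thrift_obj))
    File.binwrite('my_obj.bin.z', compressed)

    # Reading back: inflate first, then deserialize into an empty struct.
    bytes = Zlib::Inflate.inflate(File.binread('my_obj.bin.z'))
    obj = Thrift::Deserializer.new.deserialize(MyThriftStruct.new, bytes)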