Protocol Buffers - Best practice for repeated boolean values - protocol-buffers

I need to transfer some data over a relatively slow (down to only 1 Kb/s) connection. I have read that the encoding of Google's protocol buffers is efficient.
That's true for most of my data, but not for boolean values, especially in a repeated field.
The problem is that I have to transfer, besides other data, a specified number (15) of boolean values every 50 milliseconds. Protobuf encodes each boolean value as one byte for the field tag and one byte for the boolean value (0x00 or 0x01), which results in 30 bytes of data for 15 boolean values.
So I am now searching for a better way of encoding this. Has anybody had this problem already? What would be the best practice to reach an efficient encoding in this situation?
My idea was to use a numeric data type (uint32) and encode the data manually, one bit of the integer per bool. Any feedback on this idea?

In Protobuf, your best bet is to use an integer bitfield. If you have more than 64 bits, use a bytes field (and pack the bits manually).
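The bitfield idea from the question can be sketched like this (plain Python, independent of any protobuf library; the helper names are illustrative):

```python
def pack_bools(flags):
    """Pack a list of booleans into an integer bitfield (bit i = flags[i])."""
    value = 0
    for i, flag in enumerate(flags):
        if flag:
            value |= 1 << i
    return value

def unpack_bools(value, count):
    """Recover `count` booleans from the bitfield."""
    return [bool((value >> i) & 1) for i in range(count)]

flags = [True, False, True] + [False] * 12   # 15 flags
packed = pack_bools(flags)                   # small enough for a uint32 field
assert unpack_bools(packed, 15) == flags
```

Stored in a single uint32 field, 15 flags then cost one field tag plus a varint of at most 3 bytes, instead of 30 bytes as separate bool fields.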
Note that Cap'n Proto will pack boolean values (in both structs and lists) as individual bits, and so may be worth looking at.
However, if you are extremely bandwidth-constrained, it may be best to develop your own custom protocol. Most of these serialization frameworks trade a little bit of space for ease of use (especially when it comes to dealing with version skew), but in your case it may be more important to focus solely on size. A custom message format that just contains some bits should be easy enough to maintain and can be packed as tightly as you want.
(Disclosure: I am the author of Cap'n Proto, as well as most of Google's open source Protobuf code.)

Related

How to add a signature to protobuf messages?

Is there a common way to sign protobuf messages? What I can imagine is to add a data field and a signature field in a message, use SerializeToArray (in C++) or ToByteArray (in C#) to get the raw bytes, then use MD5 or SHA-256 etc. to calculate the hash value, and assign that hash value to the field 'sign'. But I don't know whether the raw bytes differ between languages, or between proto2 and proto3.
The approach you describe is fine for integrity-validation purposes, as long as your hashing algorithm is strong enough. If it is for anything stronger than an integrity checksum, you should use a true cryptographic signature (keyed, e.g. an HMAC, or based on public/private keys), as otherwise anyone can "sign" their own arbitrary payload, defeating the point.
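A minimal sketch of the keyed variant, assuming a shared secret and using Python's standard hmac module; the payload bytes below merely stand in for SerializeToArray/ToByteArray output:

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # hypothetical key; manage real keys properly

def sign(payload: bytes) -> bytes:
    """HMAC rather than a bare hash: without the key, an attacker cannot
    recompute a valid digest over a tampered payload."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, signature: bytes) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(sign(payload), signature)

payload = b"\x08\x2a"  # stand-in for a serialized protobuf message
sig = sign(payload)
assert verify(payload, sig)
assert not verify(payload + b"\x00", sig)  # any byte change breaks it
```

Note this only works if both sides serialize byte-identical payloads, which leads directly to the determinism caveats below.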
You also seem to be asking about determinism. The raw bytes in protobuf are not entirely deterministic. There are multiple valid ways of representing the same payload in protobuf, including:
reordering fields (numerical order is a "should", not a "must")
including or omitting zeros (different between proto2 and proto3)
packed vs sequential "repeated" encoding
the reality that "map" is usually backed by some platform-specific built-in map/dictionary type, which commonly does not define an order, so in theory it can vary every time
not really an issue in reality, but in theory you can encode a varint with an arbitrary length (up to 10 bytes) simply by including unnecessary groups of zero bytes; it is similar to saying, in text (JSON, etc.), that 42, 042, 0042 and 0000000042 all represent the same integer; nobody does that, but it would be valid
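To see why padded varints decode to the same value, here is a small hand-rolled decoder sketch (not the official protobuf library; each byte carries 7 payload bits plus a continuation bit):

```python
def decode_varint(buf: bytes):
    """Decode a protobuf base-128 varint; returns (value, bytes_consumed)."""
    value = shift = 0
    for i, b in enumerate(buf):
        value |= (b & 0x7F) << shift   # low 7 bits are payload
        if not b & 0x80:               # high bit clear: last byte
            return value, i + 1
        shift += 7
    raise ValueError("truncated varint")

# Canonical one-byte encoding of 42 ...
assert decode_varint(b"\x2a") == (42, 1)
# ... and a padded three-byte encoding of the same value: extra all-zero
# payload groups with the continuation bit set (0xAA = 0x2A | 0x80).
assert decode_varint(b"\xaa\x80\x00") == (42, 3)
```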

Why do people sometimes convert numbers or strings to bytes?

Sometimes I encounter questions about converting something to bytes. Are there situations where it is vitally important to convert to bytes, and what would I convert something to bytes for?
In most languages the most common string functions come as part of the language or in a pre-made library/include/import, often employing object code to take advantage of processor-based string instructions. However, sometimes you need to do something with a string that isn't natively supported by the language. So, since the 8-bit days, people have viewed strings as arrays of 7- or 8-bit characters, each fitting within a byte, and used conventions like ASCII to determine which byte value represents which character.
While standard languages often have functions like string.replaceChar(OFFSET, 'a'), this methodology can be painstakingly slow, because each call to the replaceChar method incurs processing overhead which may be greater than the processing that actually needs to be done.
There is also the simplicity factor when designing your own string algorithms but like I said, most of the common algorithms come prebuilt in modern languages. (stringCompare, trimString, reverseString, etc).
Suppose you want to perform an operation on a string which doesn't come as standard.
For instance, suppose you want to add two numbers which are represented as decimal digits in strings, and these numbers are wider than the 64-bit word size of the processor. The RSA encryption/decryption behind the SSL browser padlocks employs numbers which don't fit into the word size of a desktop computer, but nonetheless the programs on a desktop which deal with RSA certificates and keys must be able to process this data, which is actually strings.
There are many and varied reasons you would want to deal with a string as an array of bytes, but each of these reasons is fairly specialised.
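The big-number case can be illustrated with a small sketch that adds two decimal strings digit by digit, exactly as on paper (Python is used purely for illustration here; its native big integers would normally make this unnecessary):

```python
def add_decimal_strings(a: str, b: str) -> str:
    """Add two non-negative base-10 numbers held as strings, working
    from the rightmost digit and propagating the carry."""
    result, carry = [], 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        da = ord(a[i]) - ord('0') if i >= 0 else 0
        db = ord(b[j]) - ord('0') if j >= 0 else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(chr(ord('0') + digit))
        i -= 1
        j -= 1
    return ''.join(reversed(result))

# 2**64 - 1 is the largest 64-bit value; the sum no longer fits in a word:
print(add_decimal_strings("18446744073709551615", "1"))
# → 18446744073709551616
```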

When to use fixed value protobuf type? Or under what scenarios?

I want to transfer a serialized protobuf message over TCP and I've tried to use the first field to indicate the total length of the serialized message.
I know that the int32 will change the length after encoding. So, maybe a fixed32 is a good choice.
But at the end of the Encoding chapter, I found that I can't depend on that even if I use a fixed32 with field_num #1, because the Field Order section says that the order may change.
My question is when do I use fixed value types? Are there any example scenarios?
"My question is when do I use fixed value types?"
When it comes to serializing values, there's always a tradeoff. If we look at the Protobuf-documentation, we see we have a few options when it comes to 32-bit integers:
int32: Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint32 instead.
uint32: Uses variable-length encoding.
sint32: Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s.
fixed32: Always four bytes. More efficient than uint32 if values are often greater than 2^28.
sfixed32: Always four bytes.
int32 is a variable-length data type. Any information that is not specified in the type itself needs to be expressed somehow: to deserialize a variable-length number, we need to know where it ends, so each varint byte spends one bit as a continuation flag and carries only 7 bits of payload. The resulting message may be smaller because of this, but may also be larger. (A negative int32 is sign-extended, which is why sint32's ZigZag encoding exists for frequently negative fields.)
Say we have a lot of integers between 0 and 127 to encode. It is cheaper to send each as a single varint byte than as a full 32-bit (4-byte) integer. On the other hand, a value of 2^28 or more no longer fits in four 7-bit groups, so the varint needs 5 bytes. In this case it is more efficient to use a fixed32: we simply know a fixed32 is 4 bytes, and don't need to spend continuation bits saying so.
And if we look at fixed32 it actually mentions that the tradeoff point is around 2^28 (for unsigned integers).
So some types are good [as in, more efficient in terms of storage space] for large values, some for small values, some for positive/negative values. It all depends on what the actual values represent.
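The crossover point can be checked with a small size-calculation sketch that mirrors the 7-payload-bits-per-byte varint rule (hand-rolled, not the protobuf library itself):

```python
def varint_size(value: int) -> int:
    """Number of bytes protobuf needs for an unsigned varint payload."""
    size = 1
    while value >= 0x80:   # another 7-bit group required
        value >>= 7
        size += 1
    return size

# Each varint byte holds 7 payload bits, so 2**28 is the tipping point:
assert varint_size(2**28 - 1) == 4   # still no bigger than fixed32
assert varint_size(2**28) == 5       # now fixed32's flat 4 bytes win
assert varint_size(127) == 1         # small values stay much cheaper
```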
"Are there any example scenarios?"
32-bit hashes (e.g. CRC-32), IPv4 addresses/masks. A predictable message size can also be relevant in itself.

Most efficient barcode to store a GUID

I have a system that I'm working on at the moment that requires users to log into the system, and the client wants to use a barcode scanner and cards to keep prices down. (Yes, a username and password would be cheaper, but she wants a card-type solution, so she gets one.)
All my data uses GUIDs as key fields, so I'd like to store the GUID directly on the card in the barcode. While it's simple enough to encode it as Code 39 (3 of 9), that's not going to be the most efficient use of space.
Is there a best practice or most efficient method for storing GUIDs in a barcode? I'd have assumed that, since the data has a consistent length and depth, there would be a standard, but I can't find one. It would be easy enough to invent my own - a control character at either end and binary data between - but I'd like something that standard readers will know how to interpret.
Any help gratefully received.
There are no open standards for special-purpose data compaction with generic linear barcodes such as Code 39 and Code 128. Most ISO/IEC-standardised 2D barcodes do support a special-purpose data encoding mechanism called Extended Channel Interpretation (ECI) which allows you to specify that data conforms to a certain application standard or encoding regime, for example ECI 298765 for IPv4 address compaction [*]. Unfortunately GUID compaction isn't amongst those that have been registered and even if it were you would nevertheless need to handle this within your application as reader support would be lacking.
That leaves you with having to pre-encode (and subsequently decode) the GUID into a format that can be handled efficiently by some ubiquitous barcode symbology.
An efficient way to store a GUID would be to convert it to a 40-digit[†] decimal representation and store the result in a Code 128 barcode using double-density numeric compression ("Mode C").
For example, consider the GUID:
cd171f7c-560d-4a62-8d65-16b87419a58c
Expressed as a hexadecimal number:
0xCD171F7C560D4A628D6516B87419A58C
Converted to 40 decimal digits:
0272611800569275698104677545117639878028
Encoded within a Code 128 barcode (barcode image omitted).
Your application would of course need to recognise this input as a decimal-encoded GUID and reverse the above process. But I doubt that a significantly more efficient approach exists that doesn't require you to transform the data into an unusual radix and then deal with the complexities of handling ASCII control characters at scan time.
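The hex-to-decimal conversion and its reverse can be sketched with Python's standard uuid module (this covers only the pre-/post-processing around the barcode; generating the Code 128 symbol itself is out of scope):

```python
import uuid

def guid_to_digits(guid: str) -> str:
    """GUID -> 40 decimal digits, zero-padded on the left.
    40 digits always suffice, since 2**128 - 1 has only 39 digits."""
    return str(uuid.UUID(guid).int).zfill(40)

def digits_to_guid(digits: str) -> str:
    """40 decimal digits -> canonical lowercase hyphenated GUID."""
    return str(uuid.UUID(int=int(digits)))

guid = "cd171f7c-560d-4a62-8d65-16b87419a58c"  # example GUID from the answer
digits = guid_to_digits(guid)
assert len(digits) == 40
assert digits_to_guid(digits) == guid  # lossless round trip
```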
[*] The register of assigned ECI codes is available from the AIM store as "ECI Part 3: Register".
[†] Whilst it is possible to store the entire GUID range within 39 digits, a 39-digit Mode C Code 128 symbol is in fact longer than a 40-digit symbol.

What is a good way to deal with byte alignment and endianess when packing a struct?

My current design involves communication between an embedded system and a PC, and I am always troubled by the struct design.
The two systems have different endianness that I need to deal with. However, I find that I cannot just do a simple byte-order swap every 4 bytes to solve the problem. It turns out to depend on the struct.
For example, a struct like this:

    struct {
        uint16_t a;
        uint32_t b;
    };
would result in padding between a and b. Eventually, the endian swap has to be specific to a and b because of the padding bytes. But it looks ugly, because I need to change the endian-swap logic every time I change the struct contents.
What is a good strategy for arranging elements in a struct when padding comes in? Should we try to rearrange the elements so that there are only padding bytes at the end of the struct?
Thanks.
I'm afraid you'll need to do some more platform-neutral serialization, since different architectures have different alignment requirements. I don't think there is a safe and generic way to grab a chunk of memory, send it to another architecture, place it at some address there and read the correct data back. Just convert and send the elements one by one: push the values into a buffer that has no padding, so you know exactly what is where. Plus, you get to decide which side does the conversions (typically the PC has more resources for that). As a bonus you can checksum/sign the communication to catch errors/tampering.
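The buffer approach can be sketched with Python's standard struct module, which lets you pin both the byte order and the absence of padding in one format string (the field layout mirrors the struct in the question; in C you would do the equivalent with explicit per-field serialization):

```python
import struct

# '<' = little-endian, no padding: the wire layout no longer depends on
# either machine's native byte order or alignment rules.
WIRE_FORMAT = "<HI"   # uint16_t a, uint32_t b

def pack_message(a: int, b: int) -> bytes:
    return struct.pack(WIRE_FORMAT, a, b)

def unpack_message(data: bytes):
    return struct.unpack(WIRE_FORMAT, data)

wire = pack_message(0x1234, 0xDEADBEEF)
assert len(wire) == 6          # 2 + 4 bytes: no padding between fields
assert unpack_message(wire) == (0x1234, 0xDEADBEEF)
```

With the native format ("@HI") the same struct would typically occupy 8 bytes because of the alignment padding the question describes; the explicit format sidesteps that entirely.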
BTW, AFAIK, while the compiler keeps the order of the variables intact, it can in theory put additional padding between them (e.g. for performance reasons), so it's not just an architecture-related thing.
