What is the offset size in FlatBuffers? - data-structures

The major part of our data is strings with possible substring duplication (e.g. domains: "some.thing.com" and "thing.com"). We'd like to reuse the substrings to reduce file size and memory consumption with FlatBuffers, so I'm planning to use [string], since I can then reference existing substrings: e.g. "thing.com" will be just a string created with let substr_offset = builder.create_string("thing.com"), and "some.thing.com" will be stored as [builder.create_string("some."), substr_offset].
However, it seems referencing has its own costs, so there is probably no benefit to referencing if the string is too short (shorter than the offset variable size). Is that correct? Is the offset type just usize? What are better alternatives for prefix/postfix string representations with FlatBuffers?
PS. Also, what is the cost of a string array instead of just a string? Is it just one more offset?

Both strings and vectors are addressed over a 32-bit offset to them, and also have a 32-bit size field prefixed. So:
"some.thing.com" 14 chars + 1 terminator + 4 size bytes == 19.
Or:
"thing.com" 9 chars + 1 terminator + 4 size bytes == 14.
"some." 5 chars + 1 terminator + 4 size bytes == 10.
vector of 2 strings: 2x4 bytes of offsets + 4 size bytes = 12.
total: 36
Of those 36 bytes, 14 are shared, leaving 22 bytes of unique data, which is larger than the original 19. So the shared string needs to be 13 bytes or longer for this technique to be worth it, assuming it is shared often.
For details: https://google.github.io/flatbuffers/flatbuffers_internals.html
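As a back-of-the-envelope check of the arithmetic above, here is a small Python sketch (it only counts string payload, terminator, length prefix and vector offsets, and ignores FlatBuffers alignment padding and table overhead):
def string_cost(s):
    # payload bytes + 1 NUL terminator + 4-byte length prefix
    return len(s) + 1 + 4

inline = string_cost("some.thing.com")                    # 19
split = string_cost("some.") + string_cost("thing.com")   # 10 + 14
split += 2 * 4 + 4                                         # vector of 2 offsets + its length prefix = 12
print(inline, split)  # 19 vs 36, of which the 14 bytes of "thing.com" can be shared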

The offset seems to be uint32 (4 bytes, not usize) according to the doc.

Related

hpack encoding integer significance

After reading this, https://httpwg.org/specs/rfc7541.html#integer.representation
I am confused about quite a few things, although I seem to have the overall gist of the idea.
For one, what are the 'prefixes' exactly / what is their purpose?
For two:
C.1.1. Example 1: Encoding 10 Using a 5-Bit Prefix
The value 10 is to be encoded with a 5-bit prefix.
10 is less than 31 (2^5 - 1) and is represented using the 5-bit prefix.
0 1 2 3 4 5 6 7
+---+---+---+---+---+---+---+---+
| X | X | X | 0 | 1 | 0 | 1 | 0 | 10 stored on 5 bits
+---+---+---+---+---+---+---+---+
What are the leading Xs? What is the starting 0 for?
>>> bin(10)
'0b1010'
>>>
Typing this in the python IDE, you see almost the same output... Why does it differ?
This is when the number fits within the number of prefix bits though, making it seemingly simple.
C.1.2. Example 2: Encoding 1337 Using a 5-Bit Prefix
The value I=1337 is to be encoded with a 5-bit prefix.
1337 is greater than 31 (2^5 - 1).
The 5-bit prefix is filled with its max value (31).
I = 1337 - (2^5 - 1) = 1306.
I (1306) is greater than or equal to 128, so the while loop body executes:
I % 128 == 26
26 + 128 == 154
154 is encoded in 8 bits as: 10011010
I is set to 10 (1306 / 128 == 10)
I is no longer greater than or equal to 128, so the while loop terminates.
I, now 10, is encoded in 8 bits as: 00001010.
The process ends.
0 1 2 3 4 5 6 7
+---+---+---+---+---+---+---+---+
| X | X | X | 1 | 1 | 1 | 1 | 1 | Prefix = 31, I = 1306
| 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1306>=128, encode(154), I=1306/128
| 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 10<128, encode(10), done
+---+---+---+---+---+---+---+---+
The octet-like diagram shows three different numbers being produced... Since the numbers are produced throughout the loop, how do you replicate this octet-like diagram within an integer? What is the actual final result: the diagram, or "I" being 10, i.e. 00001010?
def f(a, b):
    if a < 2**b - 1:
        print(a)
    else:
        c = 2**b - 1
        remain = a - c
        print(c)
        if remain >= 128:
            while 1:
                e = remain % 128
                g = e + 128
                remain = remain / 128
                if remain >= 128:
                    continue
                else:
                    print(remain)
                    c += int(remain)
                    print(c)
                    break
As I'm trying to figure this out, I wrote a quick Python implementation of it. It seems that I am left with a few useless variables, one being g, which in the documentation is the 26 + 128 == 154.
Lastly, where does 128 come from? I can't find any relation between the numbers besides the fact 2 raised to the 7th power is 128, but why is that significant? Is this because the first bit is reserved as a continuation flag? and an octet contains 8 bits so 8 - 1 = 7?
For one, what are the 'prefixes' exactly / what is their purpose?
Integers are used in a few places in HPACK messages, and often they have leading bits that cannot be used for the actual integer. Those unavailable leading bits are what the Xs represent. For the purposes of this calculation it doesn't matter what those Xs are: they could be 000, or 111, or 010, etc. Also, there will not always be 3 Xs - that is just an example. There could be only one leading X, or two, or four, etc.
For example, to look up a previously decoded HPACK header, we use 6.1. Indexed Header Field Representation, which starts with a leading 1, followed by the table index value. That leading 1 is the X in the previous example. We have 7 bits left (instead of only 5 bits in the original example in your question). If the table index value is 126 or less we can represent it directly in those 7 bits. If it's 127 or greater then we need to do some extra work (we'll come back to this).
If it's a new value we want to add to the table (to reuse in future requests), but we already have that header name in the table (so it's just a new value for that name we want as a new entry) then we use 6.2.1. Literal Header Field with Incremental Indexing. This has 2 bits at the beginning (01 - which are the Xs), and we only have 6-bits this time to represent the index of the name we want to reuse. So in this case there are two Xs.
So don't worry about there being 3 Xs - that's just an example. In the above examples there was one X (as first bit had to be 1), and two Xs (as first two bits had to be 01) respectively. The Integer Representation section is telling you how to handle any prefixed integer, whether prefixed by 1, 2, 3... etc unusable "X" bits.
What are the leading Xs? What is the starting 0 for?
The leading Xs are discussed above. The starting 0 is just because, in this example, we have 5 bits to represent the integer and only need 4 bits, so we pad it with a 0. If the value to encode was 20 it would be 10100. If the value was 40, we couldn't fit it in 5 bits, so we need to do something else.
Typing this in the python IDE, you see almost the same output... Why does it differ?
Python uses 0b to show it's a binary number. It doesn't bother showing any leading zeros. So 0b1010 is the same as 0b01010 and also the same as 0b00001010.
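If you want Python to show those leading zeros, you can pad the binary representation explicitly, for example:
>>> bin(10)
'0b1010'
>>> format(10, '08b')   # padded to a full octet
'00001010'
>>> format(10, '05b')   # padded to the 5-bit prefix width
'01010'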
This is when the number fits within the number of prefix bits though, making it seemingly simple.
Exactly. If you need more than the number of bits you have, you don't have space for it. You can't just use more bits as HPACK will not know whether you are intending to use more bits (so should look at next byte) or if it's just a straight number (so only look at this one byte). It needs a signal to know that. That signal is using all 1s.
So to encode 40 in 5 bits, we need to use 11111 to say "it's not big enough", overflow to next byte. 11111 in binary is 31, so we know it's bigger than that, so we'll not waste that, and instead use it, and subtract it from the 40 to give 9 left to encode in the next byte. A new additional byte gives us 8 new bits to play with (well actually only 7 as we'll soon discover, as the first bit is used to signal a further overflow). This is enough so we can use 00001001 to encode our 9. So our complex number is represented in two bytes: XXX11111 and 00001001.
If we want to encode a value bigger than can fit in the first prefixed byte, AND the left-over amount is bigger than the 127 that would fit into the available 7 bits of the second byte, then we can't use this two-byte overflow mechanism. Instead we use another "overflow, overflow" mechanism using three bytes:
For this "overflow, overflow" mechanism, we set the first byte bits to 1s as usual for an overflow (XXX11111) and then set the first bit of the second byte to 1. This leaves 7 bits available to encode the value, plus the next 8 bits in the third byte we're going to have to use (actually only 7 bits of the third byte, because again it uses the first bit to indicate another overflow).
There are various ways they could have gone about this using the second and third bytes. What they decided to do was encode the remainder as two numbers: the mod-128 part and the 128 multiplier.
1337 = 31 + (128 * 10) + 26
So that means the first byte is set to 31 as per the previous example, the second byte is set to 26 (which is 11010) plus a leading 1 to show we're using the overflow-overflow method (so 10011010), and the third byte is set to 10 (or 00001010).
So 1337 is encoded in three bytes: XXX11111 10011010 00001010 (with X set to whatever those bits need to be).
Using the mod-128 and multiplier is quite efficient, and means this large number (and in fact any number up to the prefix maximum plus 16,383) can be represented in three bytes; 16,383 is, not coincidentally, the maximum integer that can be represented in 7 + 7 = 14 bits. But it does take a bit of getting your head around!
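As a quick sanity check of the worked example, decoding those three bytes in Python (masking off the continuation bit of each extra byte and weighting them by powers of 128) gives the original value back:
prefix = 0b11111                     # 31, the filled 5-bit prefix
b1, b2 = 0b10011010, 0b00001010      # the two continuation bytes
value = prefix + (b1 & 0x7F) + (b2 & 0x7F) * 128
print(value)                         # 1337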
If it's bigger than 16,383 then we need to do another round of overflow in a similar manner.
All this seems horrendously complex but is actually relatively simple, and efficient, to code up. Computers can do this pretty easily and quickly.
It seems that I am left with a few useless variables, one being g
You are not printing this value (g); you only print the left-over value in the else branch. You need to print both.
which in the documentation is the 26 + 128 == 154.
Lastly, where does 128 come from? I can't find any relation between the numbers besides the fact 2 raised to the 7th power is 128, but why is that significant? Is this because the first bit is reserved as a continuation flag? and an octet contains 8 bits so 8 - 1 = 7?
Exactly, it's because the first bit (value 128) needs to be set as per explanation above, to show we are continuing/overflowing into needing a third byte.
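For reference, here is a minimal Python sketch of the whole prefix-integer encoding described in RFC 7541 section 5.1; it returns the octet values and assumes the X bits are zero:
def encode_int(value, prefix_bits):
    # Encode value using an N-bit prefix, per RFC 7541 5.1 (X bits assumed to be 0).
    max_prefix = 2**prefix_bits - 1
    if value < max_prefix:
        return [value]                    # fits in the prefix
    octets = [max_prefix]                 # fill the prefix with 1s
    value -= max_prefix
    while value >= 128:
        octets.append(value % 128 + 128)  # low 7 bits, continuation flag set
        value //= 128
    octets.append(value)                  # last octet, continuation flag clear
    return octets

print(encode_int(10, 5))    # [10]
print(encode_int(1337, 5))  # [31, 154, 10]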

Direct mapped cache example

I am really confused about the topic of direct mapped caches. I've been looking around for an example with a good explanation, and it's making me more confused than ever.
For example: I have
2048 byte memory
64 byte big cache
8 byte cache lines
With a direct mapped cache, how do I determine the 'line', 'tag' and 'byte offset'?
I believe that the total number of address bits is 11 because 2048 = 2^11
2048/64 = 2^5 = 32 blocks (0 to 31) (5 bits needed) (tag)
64/8 = 8 = 2^3 = 3 bits for the index
8-byte cache lines = 2^3, which means I need 3 bits for the byte offset
So the address would look like this: 5 bits for the tag, 3 for the index and 3 for the byte offset.
Do i have this figured out correctly?
Do I have this figured out correctly? Yes.
Explanation
1) Main memory size is 2048 bytes = 2^11. So you need 11 bits to address a byte (if your word size is 1 byte) [word = smallest individual unit that will be accessed with the address]
2) You can calculate the number of tag bits in direct mapping from (main memory size / cache size). But I will explain a little more about tag bits.
Here the size of a cache line (which is always the same as the size of a main memory block) is 8 bytes, which is 2^3 bytes. So you need 3 bits to represent a byte within a cache line. Now you have 8 bits (11 - 3) remaining in the address.
Now the total number of lines present in the cache is (cache size / line size) = 2^6 / 2^3 = 2^3.
So you have 3 bits to represent the line in which your required byte is present.
The number of remaining bits is now 5 (8 - 3).
These 5 bits can be used to represent a tag. :)
3) 3 bits for the index. If you were labelling the number of bits needed to represent a line as the index, then yes, you are right.
4) 3 bits will be used to access a byte within a cache line (8 = 2^3).
So,
11-bit total address length = 5 tag bits + 3 bits to represent a line + 3 bits to represent a byte (word) within a line
Hope there is no confusion now.
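Here is the same arithmetic as a small Python sketch, so other sizes can be plugged in (assuming all sizes are powers of two and the cache is direct mapped):
from math import log2

def address_split(mem_bytes, cache_bytes, line_bytes):
    offset_bits = int(log2(line_bytes))                  # byte within a line
    index_bits = int(log2(cache_bytes // line_bytes))    # which line
    tag_bits = int(log2(mem_bytes)) - index_bits - offset_bits
    return tag_bits, index_bits, offset_bits

print(address_split(2048, 64, 8))   # (5, 3, 3)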

Calculating the total data+overhead of a set associative cache

This is a question from a Computer Architecture exam and I don't understand how to get to the correct answer.
Here is the question:
This question deals with main and cache memory only.
Address size: 32 bits
Block size: 128 items
Item size: 8 bits
Cache Layout: 6 way set associative
Cache Size: 192 KB (data only)
Write policy: Write Back
What is the total number of cache bits?
In order to get the number of tag bits, I find that 7 bits of the address are used for byte offset (0-127) and 8 bits are used for the block number (0-250) (250 = 192000/128/6), therefore 17 bits of the address are left for the tag.
To find the total number of bits in the cache, I would take (valid bit + tag size + bits per block) * number of blocks per set * number of sets = (1 + 17 + 1024) * 250 * 6 = 1,536,000. This is not the correct answer though.
The correct answer is 1,602,048 total bits in the cache and part of the answer is that there are 17 tag bits. After trying to reverse engineer the answer, I found that 1,602,048 = 1043 * 256 * 6 but I don't know if that is relevant to the solution because I don't know why those numbers would be used.
I'd appreciate it if someone could explain what I did wrong in my calculation to get a different answer.
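For comparison, here is a Python sketch of the same calculation under two assumptions the question leaves implicit: that 192 KB means 192 * 1024 bytes, and that the write-back policy adds a dirty bit per block alongside the valid bit. With those assumptions the reverse-engineered 1043 * 256 * 6 falls out directly:
address_bits = 32
block_items = 128              # items per block
item_bits = 8                  # 1 byte per item
ways = 6
cache_bytes = 192 * 1024       # assuming KB = 1024 bytes

blocks = cache_bytes // block_items                  # 1536 blocks of data
sets = blocks // ways                                # 256 sets
offset_bits = block_items.bit_length() - 1           # 7
index_bits = sets.bit_length() - 1                   # 8
tag_bits = address_bits - index_bits - offset_bits   # 17

bits_per_block = block_items * item_bits + tag_bits + 1 + 1  # data + tag + valid + dirty = 1043
print(bits_per_block * sets * ways)                  # 1602048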

Questions about websocket framing

According to the RFC 6455 specification on WebSockets,
the data frame structure is as follows:
frame-fin ; 1 bit in length
frame-rsv1 ; 1 bit in length
frame-rsv2 ; 1 bit in length
frame-rsv3 ; 1 bit in length
frame-opcode ; 4 bits in length
frame-masked ; 1 bit in length
frame-payload-length ; either 7, 7+16,
; or 7+64 bits in
; length
[ frame-masking-key ] ; 32 bits in length
frame-payload-data ; n*8 bits in
; length, where
; n >= 0
So the minimum length of a byte array to hold a frame would be 224 bytes (56 bits)? As I read on the internet, to represent a bit in a byte array we need 4 bytes (1000).
How do I mask data? And what data should I mask? Only the frame-payload-data, or the whole frame except the masking key?
The frame-masking-key field is only present when the frame is masked, which is only done for frames sent by a client to a server. And the frame-payload-data is optional; a frame may be empty, containing no data. Therefore the minimum length of a frame in the client-to-server direction is (1+1+1+1+4+1+7+32)=48 bits or 6 bytes, and the minimum length of a frame in the server-to-client direction is (1+1+1+1+4+1+7)=16 bits or 2 bytes.
Those would be frames that carry no payload. Obviously frames that carry payload data will require additional space.
As I read on internet to represent a bit in byte array we need 4 bytes
(1000).
Umm, no, each byte holds 8 bits. It might be convenient within a program to use larger data units to represent bit values, but that is completely independent of the format that is used in the actual frame.
How do I mask data? And what data should I mask? Only frame-payload-data
or all the frame except the mask key?
You mask by XOR-ing the frame-masking-key over the frame-payload-data. This is described in section 5.3 of RFC 6455.
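A minimal Python sketch of that masking step (the 4-byte key repeats over the payload; applying the same XOR again unmasks it):
def mask_payload(payload: bytes, masking_key: bytes) -> bytes:
    # RFC 6455 section 5.3: octet i of the payload is XOR-ed with octet (i mod 4) of the key
    return bytes(b ^ masking_key[i % 4] for i, b in enumerate(payload))

key = bytes([0x12, 0x34, 0x56, 0x78])         # example 32-bit masking key
masked = mask_payload(b"Hello", key)
assert mask_payload(masked, key) == b"Hello"  # XOR-ing again restores the original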

How to use urandom to generate all possible base64 characters?

In one base64 digit you can store up to 6 bits (2**6 == 64).
Which means that you could fit 3 bytes in 4 base64 digits.
64**4 == 2**24
That's why:
0x000000 == 'AAAA'
0xFFFFFF == '////'
This means that a random string of 3 bytes is equivalent to a base64 string of 4 characters.
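A quick check of that 3-bytes-to-4-characters mapping with Python's standard library:
>>> import base64
>>> base64.b64encode(bytes.fromhex('000000'))
b'AAAA'
>>> base64.b64encode(bytes.fromhex('ffffff'))
b'////'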
However, if I am converting a number of bytes which is not a multiple of 3 into a base64 string, I will not be able to generate all the combinations of the base64 string.
Let's take an example:
If I want a random 7-character base64 string, I would need to generate 42 random bits (64**7 == 2**42).
If I use urandom to get 5 random bytes I will get only 40 bits (5*8), and if I ask for 6 I will get 48 bits (6*8).
Can I ask for 6 bytes and use a mask to cut it down to 5, or will that break my random distribution?
One solution:
hex(0x123456789012 & 0xFFFFFFFFFF)
'0x3456789012'
Another one:
hex(0x123456789012 >> 8)
'0x1234567890'
What do you think?
A base64 string 7 characters long
is the encoding of a 5-byte input (40 bits: no less, no more).
40 % 6 = 4
base64 needs to add 2 more bits, and then, with 42 bits, 42 % 6 = 0,
so the encoding is possible; but, beware:
" If I want a random 7 characters base64 string, I would need to
generate 42 random bits (64**7 == 2**42). "
The 2 additional bits are not random; they are constant (zero, in fact).
The cardinality of your key space doesn't change: it is 2**40 = 1099511627776, not 64**7 == 2**42.
64**7 == 2**42 is the cardinality of the key space containing all possible length-7 combinations of the 64 characters; but with the last two bits fixed (to zero in this case, though the value doesn't matter) you don't have all possible combinations.
6 random bytes (48 bits), or even 42 random bits, would increase your original key space; you should use 5 random bytes (40 bits) and feed those to base64.
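To illustrate the point, a short Python sketch using os.urandom and the standard base64 module (the stripped '=' is padding; the sampling loop is only there to show the restricted alphabet of the last character):
import os, base64

raw = os.urandom(5)                            # 40 random bits
encoded = base64.b64encode(raw).rstrip(b"=")   # 7 base64 characters
print(encoded, len(encoded))

# The 7th character carries only 4 random bits plus 2 zero padding bits,
# so it can take at most 16 of the 64 possible values.
last_chars = {base64.b64encode(os.urandom(5))[6:7] for _ in range(5000)}
print(len(last_chars))                         # at most 16, never 64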
