golang tabwriter printing junk and not formatting properly

I'm trying to print a help message using tabwriter:
func printHelp() {
	writer := tabwriter.NewWriter(os.Stdout, 100, 8, 1, ' ', 0)
	fmt.Println(writer, "uni outputs Unicode information for characters.")
	fmt.Println(writer, "USAGE:")
	fmt.Println(writer, " uni <input>\tOutputs Unicode info for the input.\t")
	fmt.Println(writer, "\tProcesses U+xxxx sequences into the appropriate characters and treats control characters as control.\t")
	fmt.Println(writer, " uni raw <input>\tOutputs Unicode info on the raw unprocessed input.\t")
	fmt.Println(writer, " uni decomp <input>\tOutputs the input with all characters decomposed.\t")
	fmt.Println(writer, " uni comp <input>\tOutputs the input with all characters composed.\t")
	fmt.Println(writer, " uni short <input>\tOutputs basic info about the input, one character per line.\t")
	fmt.Println(writer, " uni rawshort <input>\tOutputs basic info about the raw unprocessed input, one character per line.\t")
	fmt.Println(writer, " uni help\tPrints this help message.\t")
	writer.Flush()
}
However, the output contains a significant amount of internal junk that shouldn't be printed, and the columns aren't aligned correctly.
Here's what I get:
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} uni outputs Unicode information for characters.
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} USAGE:
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} uni <input> Outputs Unicode info for the input.
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} Processes U+xxxx sequences into the appropriate characters and treats control characters as control.
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} uni raw <input> Outputs Unicode info on the raw unporocessed input.
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} uni decomp <input Outputs the input with all characters decomposed.
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} uni comp <input> Outputs the input with all characters composed.
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} uni short <input> Outputs basic info about the input, one character per line.
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} uni rawshort <input> Outputs basic info about the raw unprocessed input, one character per line.
&{0xc000006018 100 8 1 [32 32 32 32 32 32 32 32] 0 [] 0 {0 0 false} 0 [[]] []} uni help Prints this help message.
And here's roughly what I want:
uni outputs Unicode information for characters.
USAGE:
uni <input> Outputs Unicode info for the input.
Processes U+xxxx sequences into the appropriate characters and treats control characters as control.
uni raw <input> Outputs Unicode info on the raw unprocessed input.
uni decomp <input Outputs the input with all characters decomposed.
uni comp <input> Outputs the input with all characters composed.
uni short <input> Outputs basic info about the input, one character per line.
uni rawshort <input> Outputs basic info about the raw unprocessed input, one character per line.
uni help Prints this help message.
I've tried changing the minimum cell width to all sorts of values, from 0 to several hundred; it changes the alignment, but never to the point of working correctly. I've also tried tab vs. space separators and right alignment vs. the default (left) alignment, with no difference.

To write to an io.Writer use fmt.Fprintln (or fmt.Fprintf) instead of fmt.Println:
// fmt.Println(writer, "uni outputs Unicode information for characters.")
fmt.Fprintln(writer, "uni outputs Unicode information for characters.")
fmt.Println writes to os.Stdout and does its best to render each argument it is given, which is why you are seeing the tabwriter's internal state printed via reflection; nothing ever goes through the writer's column logic.

Related

trying to understand how checksum is calculated

I am looking at this page and I am not sure how the author is calculating the checksum. I would contact the author directly, but I don't have his email address (it's not listed on GitHub).
This is a simple example of a packet with no variables. The author calculates the checksum to be 120 (I assume this is hex, as all his other values are in hex). The sum of all the bytes is 0xBA hex, or 186 in base 10. His notes say "Checksum Low Bit, This bit is checksum of 1-5 bits (MOD 256, if necessary)", but I don't follow what he is saying and I can't figure out how to get to his answer.
Get Version / Return Name
Byte 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Request 16 2 80 20 2 120 16 3
Byte  Sample hex (B10)  Definition
====  ================  =============================
1 0x16 (22) Preamble 1
2 0x02 (2) Preamble 2
3 0x80 (128) Destination = Chlorinator
4 0x20 (32) Command = Get Name
5 0x02 (2) Not sure. Intellitouch uses 2. Aquarite uses 0. Any of them seem to work.
6 120 Checksum Low Bit, This bit is checksum of 1-5 bits (MOD 256, if necessary)
7 0x16 (22) Post-amble 1
8 0x3 (3) Post-amble 2
Any suggestions would be most appreciated!
Turns out that the commenters were 100% correct: the numbers were expressed in decimal, not hex as I assumed.
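A quick way to confirm this: summing bytes 1-5 of the request as decimal values mod 256 reproduces the author's checksum of 120. A small Go sketch:

```go
package main

import "fmt"

// checksum adds the given byte values and reduces mod 256, matching the
// note "checksum of 1-5 bits (MOD 256, if necessary)" from the protocol
// page. With the packet values read as decimal, 16+2+80+20+2 = 120.
func checksum(values []int) int {
	sum := 0
	for _, v := range values {
		sum += v
	}
	return sum % 256
}

func main() {
	fmt.Println(checksum([]int{16, 2, 80, 20, 2})) // 120
}
```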

What does << stand for in Ruby with an integer?

What is the use of <<? I understand that on an array it is used for push, but I am not clear what its purpose is in the following code, where it is being used on an integer.
def array_pack(a)
a.reverse.reduce(0) { |x, b| (x << 8) + b }
end
array_pack([24, 85, 0]) # => 21784
For example, if I write 8 << 8, it gives me 2048, so is it converting to bytes? What exactly is its purpose?
It is a Bitwise LEFT shift operator.
Definition:
The LEFT SHIFT operator << shifts each bit of a number to the left by n positions.
Example:
If you do 7 << 2 = 28
7 in Base 2: 0000 0111
128 64 32 16 8 4 2 1
----------------------
7: 0 0 0 0 0 1 1 1
Now shift each bit to the left by 2 positions
128 64 32 16 8 4 2 1
----------------------
28: 0 0 0 1 1 1 0 0
Why?
Bitwise operators are widely used in low-level programming on embedded systems, for example to apply a mask (in this case, to an integer).
Benefits
See this SO answer: link
View Source for more details: link
As the documentation says, Integer#<< returns the integer shifted left n positions, or right if n is negative. In your scenario it shifts 8 positions to the left.
Here is how it works:
8.to_s(2) => "1000"
Now let's shift "1000" 8 positions to the left
(8 << 8).to_s(2) => "100000000000"
If you count the 0s above, you will see it added 8 of them after "1000".
Now, let's see how it returns 2048
"100000000000".to_i(2) => 2048
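The same reduce can be written in Go (used here only for consistency with the first question; the logic mirrors the Ruby exactly): each step shifts the accumulator left one byte before adding the next value, so the array is interpreted as a little-endian integer.

```go
package main

import "fmt"

// arrayPack mirrors a.reverse.reduce(0) { |x, b| (x << 8) + b }:
// iterate from the last element, shifting the accumulator left 8 bits
// (one byte) and adding each value.
func arrayPack(a []int) int {
	x := 0
	for i := len(a) - 1; i >= 0; i-- {
		x = (x << 8) + a[i]
	}
	return x
}

func main() {
	fmt.Println(arrayPack([]int{24, 85, 0})) // 21784: 0*65536 + 85*256 + 24
	fmt.Println(8 << 8)                      // 2048: binary 1000 becomes 100000000000
}
```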

Why is Huffman-encoded text bigger than the actual text?

I am trying to understand how Huffman coding works. It is supposed to compress data so it takes less memory than the actual text, but when I encode, for example,
"Text to be encoded"
which has 18 characters the result I get is
"100100110100101110101011111000001110011011110010101100011"
Am I supposed to divide the number of result bits by 8, since a character has 8 bits?
You should compare using the same units (bits, as in the output after compression, or characters, as in the text before), e.g.
before: "Text to be encoded" == 18 * 8 bits = 144 bits
== 18 * 7 bits = 126 bits (in case of 7-bit characters)
after: 100100110100101110101011111000001110011011110010101100011 = 57 bits
so you have 144 (or 126) bits before and 57 bits after the compression. Or
before: "Text to be encoded" == 18 characters
after: 10010011
01001011
10101011
11100000
11100110
11110010
10110001
00000001 /* the last chunk is padded */ == 8 characters
so you have 18 ASCII characters before and only 8 one-byte characters after the compression. If characters are supposed to be 7-bit (the 0..127 range of the ASCII table), we have 9 characters after the compression:
after: 1001001 'I'
1010010 'R'
1110101 'u'
0111110 '>'
0000111 '\x07'
0011011 '\x1B'
1100101 'e'
0110001 '1'
0000001 '\x01'
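The bookkeeping above can be checked mechanically. A small Go sketch (Go chosen only for consistency with the earlier examples) that measures both sides in bits:

```go
package main

import "fmt"

// bitCounts compares the size of the text before and after Huffman
// coding, using the example from the question. Both sides must be
// measured in the same unit (bits) for the comparison to be fair.
func bitCounts(text, encoded string) (before, after int) {
	before = len(text) * 8 // 8 bits per ASCII character
	after = len(encoded)   // each '0'/'1' becomes one real bit once packed
	return
}

func main() {
	before, after := bitCounts("Text to be encoded",
		"100100110100101110101011111000001110011011110010101100011")
	fmt.Println(before, "->", after, "bits")   // 144 -> 57 bits
	fmt.Println((after+7)/8, "bytes packed")   // 8 bytes, the last one padded
}
```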

Converting bits in hexadecimal to bytes

I am trying to understand
256 bits in hexadecimal is 32 bytes, or 64 characters in the range 0-9 or A-F
How can a 32 bytes string be 64 characters in the range 0-9 or A-F?
What does 32 bytes mean?
I would assume that bits mean a digit 0 or 1, so 256 bits would be 256 digits of either 0 or 1.
I know that 1 byte equals 8 bits, so are 32 bytes 32 digits of either 0, 1, 2, 3, 4, 5, 6, or 7 (i.e. 8 different values)?
I do know a little about different bases (e.g. that binary has 0 and 1, decimal has 0-9, hexadecimal has 0-9 and A-F, etc.), but I still fail to understand why 256 bits in hexadecimal can be 32 bytes or 64 characters.
I know it's quite basic in computer science, so I have to read up on this, but can you give a brief explanation?
A single hexadecimal character represents 4 bits.
1 = 0001
2 = 0010
3 = 0011
4 = 0100
5 = 0101
6 = 0110
7 = 0111
8 = 1000
9 = 1001
A = 1010
B = 1011
C = 1100
D = 1101
E = 1110
F = 1111
Two hexadecimal characters can represent a byte (8 bits).
How can a 32 bytes string be 64 characters in the range 0-9 or A-F?
Keep in mind that the hexadecimal representation is an EXTERNAL depiction of the bit settings. If a byte contains 01001010, we can say that it is 4A in hex. The characters 4A are not stored in the byte. It's like in mathematics, where we use the depictions "e" and "π" to represent numbers.
What does 32 bytes mean?
1 Byte = 8 bits. 32 bytes = 256 bits.

Recover sector in Mifare Classic 1k with overwritten permission bits

I have mistakenly overwritten sector 1 block 7 of one of my Mifare classic 1k tags. It was meant for testing and the 16 byte data that I wrote on block 7 is shown below:
0xaa 0xaa 0xaa 0xaa
0xbb 0xbb 0xbb 0xbb
0xcc 0xcc 0xcc 0xcc
0xdd 0xdd 0xdd 0xdd
If I am not mistaken, by doing so, my access keys and permission bits have become the following:
Key-A: 0xaa 0xaa 0xaa 0xaa 0xbb 0xbb
Key-B: 0xcc 0xcc 0xdd 0xdd 0xdd 0xdd
Permission Bits: --> 0xbb 0xbb 0xcc
I have tried to use Key-A and Key-B as shown above to read/write block 7 in sector 1. But I am no longer able to access (no read or write) any block in sector 1 anymore.
I know the keys to all other sectors (e.g. sector 0 and sectors 2-15) and able to access them.
Considering the situation, I would like to know if there is any way to reset sector 1 or block 7 to regain my access. Many thanks.
Update:
I have confirmed that both Key-A and Key-B as shown above are correct and I can authenticate to the card with both of them. Also, as per the Mifare Classic specification (screenshot), my access bits are as follows:
Byte 6 = 0xbb = 0b10111011
--------------------------
C2_3 C2_2 C2_1 C2_0 C1_3 C1_2 C1_1 C1_0
1 0 1 1 1 0 1 1
Byte 7 = 0xbb = 0b10111011
--------------------------
C1_3 C1_2 C1_1 C1_0 C3_3 C3_2 C3_1 C3_0
1 0 1 1 1 0 1 1
Now, considering the specification/screenshot, the C1_3, C2_3 and C3_3 bits enable read/write access to the sector trailer. In my case, for block 7 (the trailer for sector 1) they are all set to 1. Should I not have write access to this block then?
Once the Access Control bits are not configured correctly (for example, bits that are supposed to be each other's complement are not complementary, like in your case), the sector cannot be accessed anymore at all.
The Mifare Classic specification you linked says:
Remark: With each memory access the internal logic verifies the format
of the access conditions. If it detects a format violation the whole
sector is irreversibly blocked.
Your access bytes do not pass this format check. In the following table, ~ means inverted:
Byte 6
--------------------------
~C2_3 ~C2_2 ~C2_1 ~C2_0 ~C1_3 ~C1_2 ~C1_1 ~C1_0
1 0 1 1 1 0 1 1
Byte 7
--------------------------
C1_3 C1_2 C1_1 C1_0 ~C3_3 ~C3_2 ~C3_1 ~C3_0
1 0 1 1 1 0 1 1
Byte 8
--------------------------
C3_3 C3_2 C3_1 C3_0 C2_3 C2_2 C2_1 C2_0
1 1 0 0 1 1 0 0
So, for instance, C2_3 = 1 and ~C2_3 = 1. They are not complementary, so the format check fails and the sector is irreversibly blocked.
In the same document there is a table (table 7) that shows that key A can always be used. Maybe this is the reason you can still authenticate.
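The complement rule can be expressed as a small validity check. This Go sketch (the function name and layout are mine, not from the spec) extracts the stored nibbles of bytes 6-8 and verifies each C nibble against its inverted partner:

```go
package main

import "fmt"

// accessBitsValid checks the Mifare Classic sector-trailer rule: bytes
// 6-8 store the access bits C1..C3 twice, once inverted, and every
// stored nibble must be the exact complement of its partner, or the
// card's internal logic treats the sector as irreversibly blocked.
func accessBitsValid(b6, b7, b8 byte) bool {
	c1, notC1 := b7>>4, b6&0x0F // C1 in byte 7 high, ~C1 in byte 6 low
	c2, notC2 := b8&0x0F, b6>>4 // C2 in byte 8 low, ~C2 in byte 6 high
	c3, notC3 := b8>>4, b7&0x0F // C3 in byte 8 high, ~C3 in byte 7 low
	return c1 == ^notC1&0x0F && c2 == ^notC2&0x0F && c3 == ^notC3&0x0F
}

func main() {
	fmt.Println(accessBitsValid(0xFF, 0x07, 0x80)) // true: transport configuration
	fmt.Println(accessBitsValid(0xBB, 0xBB, 0xCC)) // false: the overwritten trailer
}
```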
