Why strings are stored in the following way in a PE file - compilation

I opened a .exe file and I found a string "Premium" was stored in the following way
50 00 72 00 65 00 6D 00 69 00 75 00 6D 00
I just don't know why "00" is appended to each character and what its purpose is.
Thanks,

It's probably a UTF-16 encoding of a Unicode string. Here's an example using Python:
>>> u"Premium".encode("utf16")
'\xff\xfeP\x00r\x00e\x00m\x00i\x00u\x00m\x00'
#        ^    ^    ^    ^    ^    ^    ^
After the byte marker to indicate endianness, you can see alternating letters and null bytes.
\xff\xfe is the byte order mark (BOM); it indicates that the low-order byte of each 16-bit value comes first. (If the high-order byte came first, the mark would be \xfe\xff; there's nothing particularly meaningful about which marker means which.)
Each character is then encoded as a 16-bit value. For many characters, the UTF-16 encoding is simply the straightforward unsigned 16-bit representation of the Unicode code point. In particular, ASCII characters use a null byte as the high-order byte and the ASCII value as the low-order byte.
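As a quick check, here is a minimal Python 3 sketch (the interactive session above is Python 2) that reproduces the exact bytes from the question by using the "utf-16-le" codec, i.e. little-endian UTF-16 with no byte order mark:
data = "Premium".encode("utf-16-le")        # UTF-16, little-endian, no BOM
print(" ".join(f"{b:02X}" for b in data))   # 50 00 72 00 65 00 6D 00 69 00 75 00 6D 00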

Related

character encoding - how utf-8 handles characters

So, I know that when we type characters, each character maps to a number in a character set, and that number is then transformed into a binary format so a computer can understand it. The way that number is transformed into binary (how many bits get allocated) depends on the character encoding.
So, if I type L, it represents 76. Then 76 gets transformed into a 1-byte binary format because of, let's say, UTF-8.
Now, I've read the following somewhere:
The Devanagari character क, with code point 2325 (which is 915 in
hexadecimal notation), will be represented by two bytes when using the
UTF-16 encoding (09 15), three bytes with UTF-8 (E0 A4 95), or four
bytes with UTF-32 (00 00 09 15).
So, as you can see, it says three bytes with UTF-8 (E0 A4 95). How are E0 A4 95 the bytes? I am asking because I have no idea where E0 A4 95 came from... Why do we need this? If we know that the code point is 2325, and we know that UTF-8 will need 3 bytes to transform 2325 into binary, why do we need E0 A4 95 and what is it?
E0 A4 95 is the 3-byte UTF-8 encoding of U+0915. In binary:
E   0      A   4      9   5      (hex)
11100000   10100100   10010101   (binary)
1110xxxx   10xxxxxx   10xxxxxx   (3-byte UTF-8 encoding pattern)
    0000     100100     010101   (data bits)
    00001001   00010101          (regrouped data bits to 8-bit bytes)
       0   9      1   5          (hex)
         U+0915                  (Unicode code point)
The first byte's binary pattern 1110xxxx is a lead byte: it indicates a 3-byte encoding and carries 4 bits of data. Continuation bytes start with 10xxxxxx and each carry 6 more bits of data, and a lead byte announcing a 3-byte encoding is followed by exactly two of them.
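As an illustration (my own sketch, not part of the original answer), the same bit surgery in Python for any code point in the 3-byte range:
def utf8_3byte(cp):
    # Pack a code point from the 3-byte range (U+0800..U+FFFF, surrogates excluded)
    # into the 1110xxxx 10xxxxxx 10xxxxxx pattern.
    assert 0x0800 <= cp <= 0xFFFF
    b1 = 0xE0 | (cp >> 12)            # lead byte: 1110 + top 4 data bits
    b2 = 0x80 | ((cp >> 6) & 0x3F)    # continuation: 10 + middle 6 data bits
    b3 = 0x80 | (cp & 0x3F)           # continuation: 10 + low 6 data bits
    return bytes([b1, b2, b3])

print(utf8_3byte(0x0915))                       # b'\xe0\xa4\x95'
print(utf8_3byte(0x0915) == "\u0915".encode())  # True: Python's own UTF-8 encoder agrees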
For more information, read the Wikipedia article on UTF-8 and the standard itself, RFC 3629.

Hexdump: Convert between bytes and two-byte decimal

When I use hexdump on a file with no options, I get rows of hexadecimal bytes:
cf fa ed fe 07 00 00 01 03 00 00 80 02 00 00 00
When I use hexdump -d on the same file, the same data is shown in something called two-byte decimal groupings:
64207 65261 00007 00256 00003 32768 00002 00000
So what I'm trying to figure out here is how to convert between these two encodings. cf and fa in decimal are 207 and 250 respectively. How do those numbers get combined to make 64207?
Bonus question: What is the advantage of using these groupings? The octal display uses 16 groupings of three digits, why not use the same thing with the decimal display?
As commented by @georg:
0xfa * 256 + 0xcf == 0xfacf == 64207
The conversion works exactly like this.
So, if you see man hexdump:
-d, --two-bytes-decimal
Two-byte decimal display. Display the input offset in hexadecimal, followed by eight
space-separated, five-column, zero-filled, two-byte units of input data, in unsigned
decimal, per line.
So, for example:
00000f0 64207 65261 00007 00256 00003 32768 00002 00000
Here, 00000f0 is a hexadecimal offset.
It is followed by two-byte units of input data, e.g. 64207 in decimal (the first 16 bits, i.e. the first two bytes of the file).
The conversion (in your case):
cf fa ----> a two-byte unit (the byte ordering depends on your architecture).
fa * 256 + cf = facf ----> after re-ordering ----> 0xfacf
And the decimal value of 0xfacf is 64207.
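Here is a small Python sketch of the same arithmetic: it reads each pair of bytes as an unsigned little-endian 16-bit integer and prints five-digit, zero-filled decimals the way hexdump -d does (little-endian matches x86; strictly speaking, hexdump uses the host byte order):
import struct

# The 16 bytes from the question's plain hexdump output.
data = bytes.fromhex("cf fa ed fe 07 00 00 01 03 00 00 80 02 00 00 00")

# Eight unsigned little-endian 16-bit units, e.g. cf fa -> 0xfa * 256 + 0xcf = 64207.
units = struct.unpack("<8H", data)
print(" ".join(f"{u:05d}" for u in units))
# 64207 65261 00007 00256 00003 32768 00002 00000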
Bonus question: It is a convention to display octal numbers using three digits per byte (a byte is at most 377 in octal), unlike hex and decimal, so the octal display uses a triplet for each byte while the decimal display groups two bytes into five-digit units.

utf8 second byte lower bound in golang

I was recently going through the Go source code for UTF-8 decoding.
Apparently, when decoding UTF-8 bytes, if the first byte has the value 224
(0xE0) it maps to an accept range of [0xA0; 0xBF].
https://github.com/golang/go/blob/master/src/unicode/utf8/utf8.go#L81
https://github.com/golang/go/blob/master/src/unicode/utf8/utf8.go#L94
If I understand the UTF-8 spec (https://www.rfc-editor.org/rfc/rfc3629) correctly, every continuation byte has a minimum value of 0x80, i.e. 1000 0000. Why is the minimum value of the byte following an opening 0xE0 higher, i.e. 0xA0 instead of 0x80?
The reason is to prevent so-called overlong sequences. Quoting the RFC:
Implementations of the decoding algorithm above MUST protect against
decoding invalid sequences. For instance, a naive implementation may
decode the overlong UTF-8 sequence C0 80 into the character U+0000,
or the surrogate pair ED A1 8C ED BE B4 into U+233B4. Decoding
invalid sequences may have security consequences or cause other
problems.
[...]
A particularly subtle form of this attack can be carried out against
a parser which performs security-critical validity checks against the
UTF-8 encoded form of its input, but interprets certain illegal octet
sequences as characters. For example, a parser might prohibit the
NUL character when encoded as the single-octet sequence 00, but
erroneously allow the illegal two-octet sequence C0 80 and interpret
it as a NUL character. Another example might be a parser which
prohibits the octet sequence 2F 2E 2E 2F ("/../"), yet permits the
illegal octet sequence 2F C0 AE 2E 2F. This last exploit has
actually been used in a widespread virus attacking Web servers in
2001; thus, the security threat is very real.
Also note the syntax rules in section 4, which explicitly allow only bytes A0-BF after E0:
UTF8-2      = %xC2-DF UTF8-tail
UTF8-3      = %xE0 %xA0-BF UTF8-tail / %xE1-EC 2( UTF8-tail ) /
              %xED %x80-9F UTF8-tail / %xEE-EF 2( UTF8-tail )
UTF8-4      = %xF0 %x90-BF 2( UTF8-tail ) / %xF1-F3 3( UTF8-tail ) /
              %xF4 %x80-8F 2( UTF8-tail )
If the first byte of a UTF-8 sequence is 0xe0, that means it is a 3-byte sequence encoding a Unicode code point (because 0xe0 = 1110 0000 in binary).
Wikipedia: UTF-8:
Number     Bits for     First        Last         Byte 1     Byte 2     Byte 3
of bytes   code point   code point   code point
------------------------------------------------------------------------------
3          16           U+0800       U+FFFF       1110xxxx   10xxxxxx   10xxxxxx
The first Unicode code point that is encoded using a 3-byte UTF-8 sequence is U+0800, so the code point is 0x0800, in binary: 0000 1000 0000 0000
If you try to insert these bits into the ones marked with x:
1110xxxx 10xxxxxx 10xxxxxx
11100000 10100000 10000000
As you can see, the second byte is 1010 0000, which is 0xa0. So for a valid UTF-8 sequence that starts with 0xe0, the second byte cannot be lower than 0xa0 (because even the lowest Unicode code point whose UTF-8 encoded sequence starts with 0xe0 has a second byte of 0xa0).
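For illustration only (a Python sketch, not the Go code the question links to): a strict decoder accepts e0 a0 80 but rejects the overlong sequences the RFC warns about.
# e0 a0 80 is the shortest valid 3-byte sequence; it decodes to U+0800.
print(hex(ord(bytes([0xE0, 0xA0, 0x80]).decode("utf-8"))))    # 0x800

# e0 80 80 (overlong U+0000) and c0 80 (the RFC's two-byte overlong NUL) must be rejected.
for bad in (bytes([0xE0, 0x80, 0x80]), bytes([0xC0, 0x80])):
    try:
        bad.decode("utf-8")
    except UnicodeDecodeError as e:
        print(bad.hex(), "rejected:", e.reason)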

Understanding .bmp file

I have a .bmp file
I sort of do understand and sort of do not understand. I understand that the first 14 bytes are my Bitmapfileheader. I furthermore do understand that my Bitmapinfoheader contains information about the bitmap as well and is about 40 bytes large (in version 3).
What I do not understand is, how the information is stored in there.
I have this image:
Why is all the color information stored as "FF"? I know that the "00" bytes are "junk bytes". What I do not understand is why everything is "FF"?!
Furthermore, I do not understand what type of "encoding" that is. 42 4D equals "BM". What is that? How can I translate what I see there into colors or letters or numbers?!
What I can read in this case:
Bitmapfileheader:
First 2 bytes: BM if it is a .bmp file: 42 4D = BM (however it is that 42 4D transforms to BM).
Next 4 Bytes: Size of the bitmap. BA 01 00 00. Don't know what size that should be.
Next 4 Bytes: Something reserved.
Next 4 Bytes: Offset (did not quite understand that)
Bitmapinfoheader
Next 4 Bytes: Size of the bitmapinfoheader. 6C 00 00 00 here.
Next 4 Bytes: Width of the .bmp. 0A 00 00 00. I know that that must be 10px since I created that file.
Next 4 Bytes: Height of the .bmp. 0A 00 00 00. I know that that must be 10px since I created that file.
Next 2 Bytes: Something from another file format.
Next 2 Bytes: Color depth. 18 00 00 00. I thought that could only be 1, 2, 4, 8, 16, 24, or 32?
The first 2 bytes of information that you see "42 4D" are what we call the magic numbers. They are the signature of the file, 42 4d is the hex notation of 01000010 01001101 in binary.
Every file format has one: .jpg, .gif, you get it.
Here is an image that illustrates a complete 54-byte BMP header (24-bit BMP):
[image: BMP header layout]
The total size of the BMP is calculated as the size of the header + BMP.width * BMP.height * 3 (1 byte for red, 1 byte for green, 1 byte for blue, in the case of 8 bits of information per channel) + padding (if it exists).
The junk bytes you refer to are the padding; they are needed when the size of each scanline (row) is not a multiple of 4 bytes.
White in hex notation is ffffff: two hex digits each for red, green and blue.
In decimal notation each channel has the value 255, because 2^8 (8 bits) - 1 = 255.
Hope this clears things up a bit (pun not intended) for you.
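If it helps, here is a rough Python sketch (standard struct module only; "test.bmp" is a placeholder path, and the assumed layout is the usual BITMAPFILEHEADER followed by a BITMAPINFOHEADER-style info block) that pulls out the fields walked through in the question:
import struct

with open("test.bmp", "rb") as f:    # placeholder path
    header = f.read(30)

# BITMAPFILEHEADER: 2-byte magic, 4-byte file size, two 2-byte reserved fields,
# 4-byte offset to the pixel data; all multi-byte fields are little-endian.
magic, file_size, _r1, _r2, pixel_offset = struct.unpack_from("<2sIHHI", header, 0)

# Start of the info header (the first fields are the same in v3 and later versions):
# header size, width, height, color planes, bits per pixel.
info_size, width, height, planes, bpp = struct.unpack_from("<IiiHH", header, 14)

print(magic)                    # b'BM' (0x42 0x4D)
print(file_size, pixel_offset)
print(width, height, bpp)       # e.g. 10 10 24 for the file in the question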

Padding the message in SHA256

I am trying to understand SHA256. On the Wikipedia page it says:
append the bit '1' to the message
append k bits '0', where k is the minimum number >= 0 such that the resulting message
length (modulo 512 in bits) is 448.
append length of message (without the '1' bit or padding), in bits, as 64-bit big-endian
integer
(this will make the entire post-processed length a multiple of 512 bits)
So if my message is 01100001 01100010 01100011 I would first add a 1 to get
01100001 01100010 01100011 1
Then you would fill in 0s so that the total length is 448 mod 512:
01100001 01100010 01100011 10000000 0000 ... 0000
(So in this example, one would add 448 - 25 = 423 '0' bits.)
My question is: What does the last part mean? I would like to see an example.
It means the message length, written as a 64-bit integer with the most significant byte first. So if the message length is 37113, that's 90 f9 in hex; two bytes. There are two basic(*) ways to represent this as a 64-bit integer,
00 00 00 00 00 00 90 f9 # big endian
and
f9 90 00 00 00 00 00 00 # little endian
The former convention follows the way numbers are usually written out in decimal: one hundred and two is written 102, with the most significant part (the "big end") being written first, the least significant ("little end") last. The reason that this is specified explicitly is that both conventions are used in practice; internet protocols use big endian, Intel-compatible processors use little endian, so if they were decimal machines, they'd write one hundred and two as 201.
(*) Actually there are 8! = 40320 ways to represent a 64-bit integer if 8-bit bytes are the smallest units to be permuted, but two are in actual use.
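A short Python sketch of the whole padding rule (my own illustration, not from the answer): append the 0x80 byte (the single '1' bit followed by seven '0' bits), add zero bytes until the length is 56 mod 64 bytes (448 mod 512 bits), then append the original length in bits as a 64-bit big-endian integer.
def sha256_pad(message):
    bit_len = len(message) * 8
    padded = message + b"\x80"                      # the '1' bit plus seven '0' bits
    padded += b"\x00" * ((56 - len(padded)) % 64)   # zero bytes up to 448 mod 512 bits
    padded += bit_len.to_bytes(8, "big")            # 64-bit big-endian message length
    return padded

p = sha256_pad(b"abc")    # the question's 01100001 01100010 01100011
print(len(p) * 8)         # 512: the padded message is exactly one block
print(p[:4].hex(), "...", p[-8:].hex())
# 61626380 ... 0000000000000018   (0x18 = 24, the original length in bits)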
