Decrypting REG_NONE value in Registry - windows

http://i.stack.imgur.com/xaP9s.jpg
Referring to the screenshot above (I'm not able to attach the screenshot here directly):
I want to convert the Filesize value, which is stored in hex, into a human-readable string.
The actual decimal value is 5.85 MB, but when I convert it I don't get 5.85.
Can anyone suggest how to convert these values?
I have a set of these hex values and want to convert them all into a human-readable format.

Each pair of hexadecimal digits represents one byte, and the least significant bytes are placed on the left:
0x00 -> 0
0xbb -> 187
0x5d -> 93
0*256^0 + 187*256^1 + 93*256^2 + 0*256^3 + 0*256^4 + 0*256^5 + 0*256^6 + 0*256^7
= 6142720
6142720 / 1024^2
= 5.85815
This storage format is called little-endian: https://en.wikipedia.org/wiki/Little-endian#Little-endian
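As a minimal sketch of the same arithmetic (assuming the raw registry data is the 8-byte little-endian sequence 00 BB 5D 00 00 00 00 00 from the screenshot), the conversion could look like this in Java:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class RegFileSize {
    public static void main(String[] args) {
        // Raw REG_NONE data, assumed from the screenshot in the question.
        byte[] raw = {0x00, (byte) 0xBB, 0x5D, 0x00, 0x00, 0x00, 0x00, 0x00};

        // Interpret the 8 bytes as one little-endian 64-bit integer.
        long bytes = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getLong();

        System.out.println(bytes);                          // 6142720
        System.out.printf("%.5f MB%n", bytes / 1048576.0);  // 5.85815 MB
    }
}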

Related

UTF-8 value of a character in ColdFusion?

In ColdFusion I can determine the ASCII value of a character by using asc().
How do I determine the UTF-8 value of a character?
<cfscript>
x = "漢"; // 3 bytes
// bytes of unicode character, a.k.a. String.getBytes("UTF-8")
bytes = charsetDecode(x, "UTF-8");
writeDump(bytes); // -26-68-94
// convert the 3 bytes to Hex
hex = binaryEncode(bytes, "HEX");
writeDump(hex); // E6BCA2
// convert the Hex to Dec
dec = inputBaseN(hex, 16);
writeDump(dec); // 15121570
// asc() uses the UCS-2 representation: 漢 = Hex 6F22 = Dec 28450
asc = asc(x);
writeDump(asc); // 28450
</cfscript>
UCS-2 is fixed to 2 bytes, so it cannot represent all Unicode characters (a single character can take as many as 4 bytes). But what are you actually trying to achieve here?
Note: If you run this example and get more than 3 bytes returned, make sure CF picks up the file as UTF-8 (with BOM).
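For comparison only, here is a minimal Java sketch (not ColdFusion; just to illustrate the same three views of the character): its UTF-8 bytes, their hex form, and the code point that asc() reports:

import java.nio.charset.StandardCharsets;

public class Utf8Value {
    public static void main(String[] args) {
        String x = "\u6F22"; // 漢

        // UTF-8 bytes of the character, analogous to charsetDecode(x, "UTF-8")
        byte[] bytes = x.getBytes(StandardCharsets.UTF_8);

        // Hex form, analogous to binaryEncode(bytes, "HEX")
        StringBuilder hex = new StringBuilder();
        for (byte b : bytes) {
            hex.append(String.format("%02X", b));
        }
        System.out.println(hex);                                 // E6BCA2
        System.out.println(Long.parseLong(hex.toString(), 16));  // 15121570

        // Code point, analogous to asc(x) (UCS-2/UTF-16 value for BMP characters)
        System.out.println(x.codePointAt(0));                    // 28450 (hex 6F22)
    }
}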

String of bits to byte. Value out of range - converting error java

I need to convert a set of bits represented as a String. The String's length is a multiple of 8, so I can split it into sub-strings of 8 bits each. Then I have to convert each sub-string to a byte and print it in HEX. For example:
String seq = "0100000010000110";
The seq is much longer, but that is not the point. Below you can see two sub-strings taken from seq. One of them I have trouble with. Why?
String s_ok = "01000000"; // this value is OK to convert
String s_error = "10000110"; // this is not OK to convert, but in HEX it is 86, in DEC 134
byte nByte = Byte.parseByte(s_ok, 2);
System.out.println(nByte);
try {
    byte bByte = Byte.parseByte(s_error, 2);
    System.out.println(bByte);
} catch (Exception e) {
    System.out.println(e); // Value out of range. Value:"10000110" Radix:2
}
int in = Integer.parseInt(s_error, 2);
System.out.println("s_error set of bits in DEC - " + in + " and now in HEX - " + Integer.toHexString((byte) in)); // s_error set of bits in DEC - 134 and now in HEX - ffffff86
I can't understand why there is an error; for a calculator it is no problem to convert 10000110. So I tried Integer, and I get ffffff86 instead of a simple 86.
Please help with: why does this happen, and how do I avoid the issue?
Well, I found how to avoid the ffffff:
System.out.println("s_error set of bits in DEC - " + in + " and now in HEX - " + Integer.toHexString((byte) in & 0xFF));
0xFF was added. The bad thing is, I still don't know where those ffffff came from, and it is not clear to me what I've actually done. Is it some kind of byte multiplication, or is it masking? I'm lost.
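As an aside (my own sketch, not part of the original question): the leading ffffff is sign extension. A Java byte is signed (-128..127), so 10000110 is out of range for Byte.parseByte, and when the byte -122 is widened back to an int, its sign bit is copied into the upper 24 bits; & 0xFF masks those bits off again:

public class SignExtensionDemo {
    public static void main(String[] args) {
        // Parse as int first: binary 10000110 = 134, too large for a signed byte.
        int in = Integer.parseInt("10000110", 2);
        System.out.println(in);                             // 134

        // Narrowing to byte keeps the low 8 bits, reinterpreted as signed.
        byte b = (byte) in;
        System.out.println(b);                              // -122

        // Widening the byte back to int copies the sign bit -> 0xFFFFFF86.
        System.out.println(Integer.toHexString(b));         // ffffff86

        // Masking with 0xFF keeps only the original 8 bits.
        System.out.println(Integer.toHexString(b & 0xFF));  // 86
    }
}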

Converting nginx uuid from hex to Base64: how is byte-order involved?

Nginx can be configured to generate a uuid suitable for client identification. Upon receiving a request from a new client, it appends a uuid in two forms before forwarding the request upstream to the origin server(s):
cookie with uuid in Base64 (e.g. CgIGR1ZfUkeEXQ2YAwMZAg==)
header with uuid in hexadecimal (e.g. 4706020A47525F56980D5D8402190303)
I want to convert a hexadecimal representation to the Base64 equivalent. I have a working solution in Ruby, but I don't fully grasp the underlying mechanics, especially the switching of byte-orders:
hex_str = "4706020A47525F56980D5D8402190303"
Treating hex_str as a sequence of high-nibble (most significant 4 bits first) binary data, produce the (ASCII-encoded) string representation:
binary_seq = [hex_str].pack("H*")
# 47 (71 decimal) -> "G"
# 06 (6 decimal) -> "\x06" (non-printable)
# 02 (2 decimal) -> "\x02" (non-printable)
# 0A (10 decimal) -> "\n"
# ...
#=> "G\x06\x02\nGR_V\x98\r]\x84\x02\x19\x03\x03"
Map binary_seq to an array of 32-bit little-endian unsigned integers. Each 4 characters (4 bytes = 32 bits) maps to an integer:
data = binary_seq.unpack("VVVV")
# "G\x06\x02\n" -> 167904839 (?)
# "GR_V" -> 1449087559 (?)
# "\x98\r]\x84" -> 2220690840 (?)
# "\x02\x19\x03\x03" -> 50534658 (?)
#=> [167904839, 1449087559, 2220690840, 50534658]
Treating data as an array of 32-bit big-endian unsigned integers, produce the (ASCII-encoded) string representation:
network_seq = data.pack("NNNN")
# 167904839 -> "\n\x02\x06G" (?)
# 1449087559 -> "V_RG" (?)
# 2220690840 -> "\x84]\r\x98" (?)
# 50534658 -> "\x03\x03\x19\x02" (?)
#=> "\n\x02\x06GV_RG\x84]\r\x98\x03\x03\x19\x02"
Encode network_seq in Base64 string:
Base64.encode64(network_seq).strip
#=> "CgIGR1ZfUkeEXQ2YAwMZAg=="
My rough understanding is that big-endian is the standard byte order for network communications, while little-endian is more common on host machines. I'm not sure why nginx provides two forms that require switching byte order to convert between them.
I also don't understand how the .unpack("VVVV") and .pack("NNNN") steps work. I can see that "G\x06\x02\n" becomes "\n\x02\x06G", but I don't understand the steps that get there. For example, focusing on the first 8 digits of hex_str, why do .pack("H*") and .unpack("VVVV") produce:
"4706020A" -> "G\x06\x02\n" -> 167904839
whereas converting directly to base-10 produces:
"4706020A".to_i(16) -> 1191576074
? The fact that I'm asking this shows I need clarification on what exactly is going on in all these conversions :)
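For what it's worth, here is a minimal sketch of the same transformation outside Ruby (in Java, purely for illustration): decode the hex into bytes, reverse each 4-byte group (which is exactly what the unpack("VVVV") / pack("NNNN") round trip amounts to), and Base64-encode the result:

import java.util.Base64;

public class NginxUidConvert {
    public static void main(String[] args) {
        String hexStr = "4706020A47525F56980D5D8402190303";

        // Equivalent of [hex_str].pack("H*"): two hex digits -> one byte.
        byte[] bytes = new byte[hexStr.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(hexStr.substring(2 * i, 2 * i + 2), 16);
        }

        // Equivalent of unpack("VVVV") followed by pack("NNNN"):
        // read each 4-byte group as little-endian, write it back big-endian,
        // i.e. simply reverse the bytes within each group.
        byte[] swapped = new byte[bytes.length];
        for (int i = 0; i < bytes.length; i += 4) {
            for (int j = 0; j < 4; j++) {
                swapped[i + j] = bytes[i + 3 - j];
            }
        }

        // Equivalent of Base64.encode64(network_seq).strip
        System.out.println(Base64.getEncoder().encodeToString(swapped));
        // CgIGR1ZfUkeEXQ2YAwMZAg==
    }
}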

Strange Value for MF_MT_AM_FORMAT_TYPE and MF_MT_H264_MAX_MB_PER_SEC

I am trying to enumerate the video capture formats of a Logitech camera. I am using this.
I got the following entries:
MF_MT_FRAME_SIZE 640 x 480
MF_MT_AVG_BITRATE 6619136
MF_MT_COMPRESSED 1
MF_MT_H264_MAX_MB_PER_SEC 245,0,245,0,0,0,0,0,0,0
MF_MT_MAJOR_TYPE MFMediaType_Video
MF_MT_H264_SUPPORTED_USAGES 3
MF_MT_H264_SUPPORTED_RATE_CONTROL_MODES 15
MF_MT_AM_FORMAT_TYPE {2017BE05-6629-4248-AAED-7E1A47BC9B9C}
MF_MT_H264_SUPPORTED_SYNC_FRAME_TYPES 2
MF_MT_MPEG2_LEVEL 40
MF_MT_H264_SIMULCAST_SUPPORT 0
MF_MT_MPEG2_PROFILE 256
MF_MT_FIXED_SIZE_SAMPLES 0
MF_MT_H264_CAPABILITIES 33
MF_MT_FRAME_RATE 30 x 1
MF_MT_PIXEL_ASPECT_RATIO 1 x 1
MF_MT_H264_SUPPORTED_SLICE_MODES 14
MF_MT_ALL_SAMPLES_INDEPENDENT 0
MF_MT_FRAME_RATE_RANGE_MIN 30 x 1
MF_MT_INTERLACE_MODE 2
MF_MT_FRAME_RATE_RANGE_MAX 30 x 1
MF_MT_H264_RESOLUTION_SCALING 3
MF_MT_H264_MAX_CODEC_CONFIG_DELAY 1
MF_MT_SUBTYPE MFVideoFormat_H264_ES
MF_MT_H264_SVC_CAPABILITIES 1
Note: I have modified the function in the Media Type Debugging Code as follows. When I run the program I get cElement = 10, and I print pElement in a for loop, which produces the value MF_MT_H264_MAX_MB_PER_SEC 245,0,245,0,0,0,0,0,0,0.
case VT_VECTOR | VT_UI1:
{
    //DBGMSG(L"<<byte array Value>>");
    // Item count for the array.
    UINT cElement = var.caub.cElems / sizeof(UINT);
    // Array pointer.
    UINT* pElement = (UINT*)(var.caub.pElems);
    for (int i = 0; i < cElement; i++)
        DBGMSG(L"%d,", pElement[i]);
}
I am not able to find out what these values signify:
MF_MT_AM_FORMAT_TYPE {2017BE05-6629-4248-AAED-7E1A47BC9B9C}
MF_MT_H264_MAX_MB_PER_SEC 245,0,245,0,0,0,0,0,0,0
MSDN explains the value of the MF_MT_H264_MAX_MB_PER_SEC attribute:
Data type
UINT32[] stored as UINT8[]
Hence, an array of bytes is the expected format.
The value of the attribute is an array of UINT32 values, which correspond to the following fields in the UVC 1.5 H.264 video format descriptor.
You have:
dwMaxMBperSecOneResolutionNoScalability
Specifies the maximum macroblock processing rate allowed for
non-scalable Advanced Video Coding (AVC) streams, summing up across
all layers when all layers have the same resolution.
16056565
dwMaxMBperSecTwoResolutionsNoScalability
Specifies the maximum macroblock processing rate allowed for
non-scalable AVC streams, summing up across all layers when all layers
consist of two different resolutions.
0
Media Type GUID "2017be05-6629-4248-aaed-7e1a47bc9b9c" means FORMAT_UVCH264Video
You can then cast the pbFormat struct to KS_H264VIDEOINFO*
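To illustrate the decoding described above (a sketch in Java rather than C++, assuming the printed 245,0,245,0,... values are the raw bytes of the attribute blob): read the UINT8[] data in 4-byte little-endian groups to recover the UINT32 fields:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MaxMbPerSec {
    public static void main(String[] args) {
        // Assumed raw UINT8[] payload of MF_MT_H264_MAX_MB_PER_SEC,
        // taken from the 245,0,245,0,... output in the question.
        byte[] blob = {(byte) 0xF5, 0x00, (byte) 0xF5, 0x00, 0x00, 0x00, 0x00, 0x00};

        ByteBuffer buf = ByteBuffer.wrap(blob).order(ByteOrder.LITTLE_ENDIAN);

        // Each field is a UINT32 stored as 4 little-endian bytes; read it unsigned.
        long oneResolutionNoScalability  = buf.getInt() & 0xFFFFFFFFL;
        long twoResolutionsNoScalability = buf.getInt() & 0xFFFFFFFFL;

        System.out.println(oneResolutionNoScalability);   // 16056565
        System.out.println(twoResolutionsNoScalability);  // 0
    }
}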

MATLAB: how to display UTF-8-encoded text read from file?

The gist of my question is this:
How can I display Unicode characters in Matlab's GUI (OS X) so that they are properly rendered?
Details:
I have a table of strings stored in a file, and some of these strings contain UTF-8-encoded Unicode characters. I have tried many different ways (too many to list here) to display the contents of this file in the MATLAB GUI, without success. For example:
>> fid = fopen('/Users/kj/mytable.txt', 'r', 'n', 'UTF-8');
>> [x, x, x, enc] = fopen(fid); enc
enc =
UTF-8
>> tbl = textscan(fid, '%s', 35, 'delimiter', ',');
>> tbl{1}{1}
ans =
ÎÎÎÎÎΠΣΦΩαβγδεζηθικλμνξÏÏÏÏÏÏÏÏÏÏ
>>
As it happens, if I paste the string directly into the MATLAB GUI, the pasted string is displayed properly, which shows that the GUI is not fundamentally incapable of displaying these characters, but once MATLAB reads it in, it no longer displays it correctly. For example:
>> pasted = 'ΓΔΘΛΞΠΣΦΩαβγδεζηθικλμνξπρςστυφχψω'
pasted =
ΓΔΘΛΞΠΣΦΩαβγδεζηθικλμνξπρςστυφχψω
>>
Thanks!
I present below my findings after doing some digging... Consider these test files:
a.txt
ΓΔΘΛΞΠΣΦΩαβγδεζηθικλμνξπρςστυφχψω
b.txt
தமிழ்
First, we read files:
%# open file in binary mode, and read a list of bytes
fid = fopen('a.txt', 'rb');
b = fread(fid, '*uint8')'; %# read bytes as a row vector
fclose(fid);
%# decode as unicode string
str = native2unicode(b,'UTF-8');
If you try to print the string, you get a bunch of nonsense:
>> str
str =
Nonetheless, str does hold the correct string. We can check the Unicode code point of each character, which, as you can see, are outside the ASCII range (the last two are the non-printable CR-LF line endings):
>> double(str)
ans =
Columns 1 through 13
915 916 920 923 926 928 931 934 937 945 946 947 948
Columns 14 through 26
949 950 951 952 953 954 955 956 957 958 960 961 962
Columns 27 through 35
963 964 965 966 967 968 969 13 10
Unfortunately, MATLAB seems unable to display this Unicode string in a GUI on its own. For example, all these fail:
figure
text(0.1, 0.5, str, 'FontName','Arial Unicode MS')
title(str)
xlabel(str)
One trick I found is to use the embedded Java capability:
%# Java Swing
label = javax.swing.JLabel();
label.setFont( java.awt.Font('Arial Unicode MS',java.awt.Font.PLAIN, 30) );
label.setText(str);
f = javax.swing.JFrame('frame');
f.getContentPane().add(label);
f.pack();
f.setVisible(true);
As I was preparing to write the above, I found an alternative solution. We can use the DefaultCharacterSet undocumented feature and set the charset to UTF-8 (on my machine, it is ISO-8859-1 by default):
feature('DefaultCharacterSet','UTF-8');
Now with a proper font (you can change the font used in the Command Window from Preferences > Font), we can print the string in the prompt (note that DISP is still incapable of printing Unicode):
>> str
str =
ΓΔΘΛΞΠΣΦΩαβγδεζηθικλμνξπρςστυφχψω
>> disp(str)
ΓΔΘΛΞΠΣΦΩαβγδεζηθικλμνξπÏςστυφχψω
And to display it in a GUI, UICONTROL should work (under the hood, I think it is really a Java Swing component):
uicontrol('Style','text', 'String',str, ...
'Units','normalized', 'Position',[0 0 1 1], ...
'FontName','Arial Unicode MS', 'FontSize',30)
Unfortunately, TEXT, TITLE, XLABEL, etc. still show garbage.
As a side note: It is difficult to work with m-file sources containing Unicode characters in the MATLAB editor. I was using Notepad++, with files encoded as UTF-8 without BOM.
