Visual Studio printing range within array

Blob is defined as follows:
unsigned char* blob=new unsigned char[64];
We then try the following in the immediate window:
blob+12
0x084F8854
*blob+12: 0x75 'u'
blob+13
0x084F8855
*blob+13: 0x11 ''
blob+14
0x084F8856
*blob+14: 0x94 ''
blob+12,3
0x084F8854
[0]: 0x75 'u'
[1]: 0x0 ''
[2]: 0x0 ''
Why doesn't blob+12,3 display the 3 values starting at blob+12? What is it doing instead?

More generally, "blob,20" works, but "blob+0,20" does not.
My best guess is that it's a bug in the managed expression evaluator. If you look in MSDN, they go to great lengths about how these things don't work and those things don't work. It could be that, in the twisted mind of the evaluator, blob+12 constitutes a 1-element array of char, and therefore elements beyond the first can't be displayed.

The language is C++. He has defined an unsigned char array and is watching the variable values using the immediate window. The array name is blob. I tried this on VS2008 and verified it with a char pointer. When you say blob+12,3, it is evaluated as (blob+12)[0], (blob+12)[1], (blob+12)[2], which is essentially the same as *(blob+12), *(blob+13) and *(blob+14), and so on.
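A quick way to sanity-check what the evaluator should be showing is to print the same bytes from code. This is only a minimal sketch; the fill pattern is made up, since the question never shows how blob was populated:

#include <cstdio>

int main()
{
    unsigned char* blob = new unsigned char[64];
    for (int i = 0; i < 64; ++i)
        blob[i] = static_cast<unsigned char>(i * 3);   // placeholder contents, not the real data

    // (blob + 12)[i] and blob[12 + i] name the same byte, which is what
    // a "pointer,count" watch such as blob+12,3 is expected to walk over.
    for (int i = 0; i < 3; ++i)
        std::printf("(blob+12)[%d] = blob[%d] = 0x%02x\n", i, 12 + i, (blob + 12)[i]);

    delete[] blob;
    return 0;
}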

What is the meaning of MAKEINTRESOURCE((id>>4)+1)?

I am trying to mimic the behavior of CString::LoadString(HINSTANCE hInst, DWORD id, WORD langID) without introducing a dependency on MFC into my app, so I walked through the source. The first thing it does is immediately call AtlGetStringResourceImage(hInst, id, langID), which in turn contains the following line of code:
hResource = ::FindResourceExW(hInst, (LPWSTR)RT_STRING, MAKEINTRESOURCEW((id>>4)+1), langID);
(It's not quite verbatim; I trimmed out some unimportant stuff.)
What is the meaning of shifting the ID by 4 and adding 1? According to the documentation of FindResourceEx, you should pass in MAKEINTRESOURCE(id), and I can't find any example code that is manipulating the id before passing it to MAKEINTRESOURCE. At the same time, if I make my code call MAKEINTRESOURCE(id) then it doesn't work and FindResourceEx returns null, whereas if I use the above shift + add, then it does work.
Can anyone explain this?
From the STRINGTABLE resource documentation:
RC allocates 16 strings per section and uses the identifier value to determine which section is to contain the string. Strings whose identifiers differ only in the bottom 4 bits are placed in the same section.
The code you are curious about locates the section that a given string identifier is stored in by ignoring its low 4 bits; the + 1 is there because those sections are numbered starting from 1, so string IDs 0-15 live in section 1, IDs 16-31 in section 2, and so on.
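To make the layout concrete, here is a rough sketch of how LoadString can be reimplemented without MFC/ATL. The helper name LoadStringRaw is made up; the section walk follows the documented format of RT_STRING resources (16 counted, non-nul-terminated UTF-16 strings per section):

#include <windows.h>

// Returns the string's length in characters and points *pStr at its
// (non-nul-terminated) text inside the loaded resource.
int LoadStringRaw(HINSTANCE hInst, UINT id, LPCWSTR* pStr, WORD langId)
{
    HRSRC hRes = ::FindResourceExW(hInst, (LPWSTR)RT_STRING,
                                   MAKEINTRESOURCEW((id >> 4) + 1), langId);
    if (hRes == NULL)
        return 0;
    const WCHAR* p = (const WCHAR*)::LockResource(::LoadResource(hInst, hRes));
    if (p == NULL)
        return 0;
    // Each entry is a WORD character count followed by that many WCHARs.
    for (UINT i = 0; i < (id & 15); ++i)
        p += 1 + *p;              // skip the entries before ours in this section
    *pStr = p + 1;
    return *p;
}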

DW_OP_fbreg 'operand' to retrieve value of a variable is not working as expected?

Setup: I am debugging a simple C++ program compiled with -fno-omit-frame-pointer, using libdwarf for DWARF 5. The main task is to write a debugger using libdwarf.
For a particular local variable, dwarfdump shows:
DW_AT_location len 0x0002: 915c: DW_OP_fbreg -36
In the following I will refer to '-36' as 'op1', which I get from libdwarf.
Problem: Using op1 directly results in an incorrect value for the variable.
(fbPointer is the current value of the frame-base pointer.)
int32_t data = (int32_t) ptrace(PTRACE_PEEKDATA, processPid, fbPointer + op1, 0);
I also tried decoding -36 as sleb128 and uleb128, and in both cases I got 220, which is not a good value either.
Trial and error shows that if I add 16 to op1, it works for any number of int variables, whether parameters or local objects. However, it does not work for float/double.
Question: Is -36, as mentioned everywhere, the offset of the variable from the frame-base pointer? If so, what am I doing wrong?
What are the preceding values in DW_AT_location, "len 0x0002: 915c:"? If they are important in evaluating op1, how do I get them via libdwarf?
Thank you very much. I have been stuck at this point for more than a week.
It seems that DW_OP_fbreg is an offset from the frame base (DW_AT_frame_base), which in this case is not RBP itself but a point 16 bytes above it. That is, we need to add 16 to RBP, the real register, and then add -36 to that. Finally, the -36 that libdwarf returns is already decoded, so it is a plain number rather than raw sleb128 bytes.
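Here is a minimal sketch of that computation on x86-64 Linux, assuming the frame base is the CFA (RBP + 16), which is the usual gcc/clang choice with -fno-omit-frame-pointer. The function name and the assumption that the variable is a 32-bit int are mine:

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <cstdint>

int32_t read_local_int32(pid_t pid, int64_t op1)   // op1 = -36 from DW_OP_fbreg
{
    user_regs_struct regs{};
    ptrace(PTRACE_GETREGS, pid, nullptr, &regs);   // stopped tracee's registers

    // The frame base here is the CFA, i.e. 16 bytes above the saved RBP,
    // not RBP itself -- hence the "+ 16" found by trial and error.
    uint64_t frameBase = regs.rbp + 16;

    long word = ptrace(PTRACE_PEEKDATA, pid, (void*)(frameBase + op1), nullptr);
    return (int32_t)word;                          // the int lives in the low 32 bits
}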

How to split a blob into a byte array in a shell script?

I have a blob in a PostgreSQL database. I have inserted a C structure into it.
struct temp {
    uint64_t a;
    uint64_t b;
    uint64_t c;
};
Now I write a query in the shell to retrieve it:
select resource,.....,blob_column from rtable where rId = 1
I get the result as a blob from the database. The result is:
x00911ee3561ac801cb0783462586cf01af00000000000000
But now, in a shell script, I need to iterate over this and display the result on the console. I have tried different things like awk, split, convert_from and the convert function, but nothing is helping.
Can someone tell me how can I read this hex string and get back the integers?
Is this some kind of exercise in programmer torture? I can't imagine why you'd possibly do this, not least because your struct-as-a-blob could be subject to padding and alignment that will vary from compiler to compiler and platform to platform. Even then, it'll vary between architectures because of endianness differences. At least you used fixed-width types.
Assuming you only care about little-endian systems and your compilers don't add any padding or alignment (likely for a struct that's just three 64-bit fields), it's possible. That doesn't make it a great idea.
My preferred approach would be to use some Python code with struct, e.g.
python - "x00911ee3561ac801cb0783462586cf01af00000000000000" <<__END__
import sys
import struct
print "{} {} {}".format(*struct.unpack('#QQQ', sys.argv[1][1:].decode("hex")))
__END__
as this can even handle endianness and packing using appropriate modifiers, and you can easily consume the output in a shell script.
If that's not convenient/suitable, it's also possible in bash, just absolutely horrible. For little-endian, unpadded/packed-unaligned:
To decode each value (adapted from https://stackoverflow.com/a/3678208/398670):
$ x=00911ee3561ac801
$ echo $(( 16#${x:14:2}${x:12:2}${x:10:2}${x:8:2}${x:6:2}${x:4:2}${x:2:2}${x:0:2} ))
so, for the full deal:
x=x00911ee3561ac801cb0783462586cf01af00000000000000
uint64_dec() {
echo $(( 16#${1:14:2}${1:12:2}${1:10:2}${1:8:2}${1:6:2}${1:4:2}${1:2:2}${1:0:2} ))
}
uint64_dec ${x:1:16}
uint64_dec ${x:17:16}
uint64_dec ${x:33:16}
produces:
128381549860000000
130470408871937995
175
Now, I feel dirty and need to go wash. I strongly suggest the following:
CREATE TYPE my_struct AS (a numeric, b numeric, c numeric);
then using my_struct instead of a bytea field. Or just use three numeric columns. You can't use bigint because Pg doesn't have a 64-bit unsigned integer.
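For what it's worth, the same decode in C++ makes the assumptions explicit; this is just a sketch using the hex digits from the question (with the leading marker stripped), and the static_assert documents the no-padding assumption:

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>

struct temp { uint64_t a, b, c; };
static_assert(sizeof(temp) == 24, "padding would break the blob layout");

int main()
{
    std::string hex = "00911ee3561ac801cb0783462586cf01af00000000000000";
    unsigned char bytes[24];
    for (size_t i = 0; i < sizeof(bytes); ++i)
        bytes[i] = (unsigned char)std::stoul(hex.substr(2 * i, 2), nullptr, 16);

    temp t;
    std::memcpy(&t, bytes, sizeof(t));            // assumes the writer was little-endian too
    std::printf("%llu %llu %llu\n",
                (unsigned long long)t.a, (unsigned long long)t.b, (unsigned long long)t.c);
    return 0;
}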

Following book examples using the Unicode Character Set

I'm reading a book called "Introduction to 3D Game Programming with DirectX 9.0c: A Shader Approach" and following the code in it, but the application used the Multi-Byte Character Set, and I read somewhere that it's not good practice to use that. Now I'm getting an error when creating a window. Here is the code where the error occurs:
mhMainWnd = CreateWindow(L"D3DWndClassName", mMainWndCaption.c_str(), WS_OVERLAPPEDWINDOW,
GetSystemMetrics(SM_CXSCREEN)/2 - width/2,
GetSystemMetrics(SM_CYSCREEN)/2 - height/2,
R.right, R.bottom, 0, 0, mhAppInst, 0);
The error is:
error C2664: 'CreateWindowExW' : cannot convert parameter 2 from 'const char [16]' to 'LPCWSTR'
I hope someone can help me.
What you heard about the preferability of Unicode over the ANSI/MBCS is entirely correct. All new Windows code should be written to work with Unicode. In order to make this happen, you have to ensure two things:
Both the UNICODE and _UNICODE symbols need to be defined globally to ensure that the Unicode versions of the API functions are called, even if you forget the W suffix.
You can either do this at the top of your precompiled header
#define UNICODE
#define _UNICODE
or in your project's Properties window within Visual Studio. Simply add both values to the Preprocessor Definitions list.
All of your strings (both literals and otherwise) need to be Unicode strings.
With literals, you accomplish this by prefixing them with L, just as you've done in the example: L"D3DWndClassName"
With strings that are allocated at runtime, you need to use the wchar_t type. Since you're using C++, you should obviously be using a string class rather than raw character arrays like you would in C. So you need to use a string class that treats the characters in the string as wchar_t. This would either be std::wstring or MFC/ATL/WTL's CStringW class.
It looks like you've got most of this down already. The culprit is mMainWndCaption.c_str(). You are using std::string (whose c_str() returns a nul-terminated array of chars) instead of std::wstring (whose c_str() returns a nul-terminated array of wchar_ts).
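If changing the member's type isn't convenient right away, a stopgap (shown here only as a sketch; the helper name ToWide is made up) is to widen the caption at the call site with MultiByteToWideChar:

#include <windows.h>
#include <string>

std::wstring ToWide(const std::string& s)
{
    if (s.empty())
        return std::wstring();
    // First call measures the required length, second call does the conversion.
    int len = ::MultiByteToWideChar(CP_ACP, 0, s.c_str(), (int)s.size(), NULL, 0);
    std::wstring w(len, L'\0');
    ::MultiByteToWideChar(CP_ACP, 0, s.c_str(), (int)s.size(), &w[0], len);
    return w;
}

// mhMainWnd = CreateWindow(L"D3DWndClassName", ToWide(mMainWndCaption).c_str(), ...);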
Either change your project to ANSI or MBCS rather than UNICODE, then change
L"D3DWndClassName"
to
"D3DWndClassName"
or leave your project properties as UNICODE but use a UNICODE string for your window caption, like so:
CString szCaption(mMainWndCaption.c_str()); // CString is actually CStringW in UNICODE build
mhMainWnd = CreateWindow(L"D3DWndClassName", szCaption, WS_OVERLAPPEDWINDOW,
GetSystemMetrics(SM_CXSCREEN)/2 - width/2,
GetSystemMetrics(SM_CYSCREEN)/2 - height/2,
R.right, R.bottom, 0, 0, mhAppInst, 0);

Constant expression violates subrange bounds

I have a project that I'm going to rewrite in another language, and in order to do that I'd like to build it first. But when I try to build it, I receive "E1012: Constant expression violates subrange bounds".
I have this code:
var ForTolkResult : array[0..2000] of char;
ForTolkResult[sizeof(ForTolkResult)-1] := chr(0); // Occurs here
From my point of view everything is correct here: sizeof(ForTolkResult) = 2001 * 1, so sizeof(ForTolkResult) - 1 = 2000, which is in bounds of the array. (But I'm new to Pascal.) So what's wrong here?
I'm trying to build it via Embarcadero C++ Builder. If this error is a bug in the compiler, how can I turn this check off?
Does char really occupy one byte of memory? I mean, check whether it is an "Ansi" single-byte char and not a WideChar. In a Unicode build, char is a WideChar, so sizeof(ForTolkResult) would be 4002, and index 4001 lies outside the 0..2000 range, which is exactly what E1012 complains about.
Anyway, when you need to access the last index of an array, you'd better use
ForTolkResult[High(ForTolkResult)] := chr(0);
