Weird char behavior in C++ / MSVC2005 / Windows 7 64 bit - windows-7

I am using Windows 7 64-bit with MSVC2005 and Qt (but I doubt Qt is causing the problem, since this is an issue with the fundamental data type char).
So when I try to compare two chars like so:
char A = 0xAA;
if (A == 0xAA)
    printf("Success");
else
    printf("Fail");
lo and behold, it fails! But when I do this:
char A = 0xAA;
char B = 0xAA;
if (A == B)
    printf("Success");
else
    printf("Fail");
I get success! Actually, when I thought about it... hey, I'm working on a 64-bit processor. Even though chars are supposed to be treated as 1 byte, it's probably stored as 4 bytes.
So:
char A = 0xAA;
if (A == 0xFFFFFFAA)
    printf("Success");
else
    printf("Fail");
Now I get success!!!
But WTF! Is this standard behavior? If the damn thing is defined as a char, shouldn't the compiler know what to do with it? Further tests show that the extra bytes are stored as ones only if the most significant bit of the char is 1. So 0x07 and lower is stored as 0x00000007. WTF.
Actually, I seem to have answered all my own questions... except who to call to get this bug fixed. Is this even a bug? You can use MSVC2005 on 64-bit operating systems, right, or am I being an idiot? I guess I should get Qt Creator to use MSVC2010... damn it. There go my 2 hours.

You are comparing a (signed) char with the value -86 (0xAA - 256) to an integer with the value 170 (0xAA).
The same will happen on a 32-bit system, and an 8-bit system, for that matter.

This is not related to 64 bit: you need to define A as unsigned char to get the expected behavior. The compiler warning already shows that this code may be incorrect:
warning C4309: 'initializing' : truncation of constant value


CArray MFC Serialization multiplatform, 16, 32 and 64 bit

I'm working on very old legacy code, porting it from 32 to 64 bit.
One of the things I'm struggling with is MFC serialization. One of the differences between 32 and 64 bit is the size of pointer-sized data. This means, for example, that if for some reason I have serialized the size of a CArray like
ar << m_array.GetSize();
the data differs between the 32- and 64-bit platforms, because GetSize returns an INT_PTR. To get serialized data fully compatible between the same application compiled for 32 and 64 bit, I forced the data type in the storing phase, and likewise on reading (pretty sure 32 bits are enough for this data).
Store:
ar << (int)m_array.GetSize();
Read:
int iNumSize = 0;
ar >> iNumSize;
In other words, the application, no matter whether compiled for 32 or 64 bits, serializes this data as an int.
Now I have a doubt about the serialization of the CArray type itself; to serialize a CArray, the code uses the built-in CArchive serialization
// defined as CArray m_arrayVertex; in the .h
m_arrayVertex.Serialize(ar);
and this Serialize is defined in the MFC file afxtemp.h with this template
template<class TYPE, class ARG_TYPE>
void CArray<TYPE, ARG_TYPE>::Serialize(CArchive& ar)
{
    ASSERT_VALID(this);
    CObject::Serialize(ar);
    if (ar.IsStoring())
    {
        ar.WriteCount(m_nSize);
    }
    else
    {
        DWORD_PTR nOldSize = ar.ReadCount();
        SetSize(nOldSize, -1);
    }
    SerializeElements<TYPE>(ar, m_pData, m_nSize);
}
where (afx.h)
// special functions for reading and writing (16-bit compatible) counts
DWORD_PTR ReadCount();
void WriteCount(DWORD_PTR dwCount);
Here is my question: ReadCount and WriteCount use DWORD_PTR, which has a different size on each platform... is this kind of serialization compatible between 32 and 64 bit, or, due to the size change, does the serialized data work only on its own platform?
I mean, can the data be read by both the 32- and 64-bit applications without errors? The comment says it also works for "16 bit", and I have not found any details about this serialization.
If this doesn't work, is there a workaround to serialize the CArray in such a way that the data are fully compatible with both the 32- and 64-bit apps?
Edit: Both answers are good. I simply accepted the one that came first. Many thanks to both; I hope this can help someone else!
As you have written, ReadCount returns a DWORD_PTR, which is either 32 or 64 bits wide depending on whether the code was compiled as 32- or 64-bit code.
Now, as long as the actual object count fits into 32 bits, there is no interoperability problem between files written by a 32-bit or a 64-bit program.
On the other hand, if your 64-bit code serializes a CArray that has more than 4294967295 elements (which is unlikely anyway), then you will run into trouble when deserializing that file in a 32-bit program. But a 32-bit program cannot store more than 4294967295 elements in a CArray anyway.
Long story short: you don't need to do anything special, just serialize/deserialize your data.
Storage and retrieval of the item count for CArray instantiations are implemented in CArchive::WriteCount and CArchive::ReadCount, respectively.
They write and read a 16-bit (WORD), 32-bit (DWORD), or 64-bit (on 64-bit platforms, DWORD_PTR) value to or from the stream. Writing uses the following algorithm:
- If the item count is less than 0xFFFF, write the item count as a 16-bit WORD value
- Otherwise, dump an "invalid value" marker ((WORD)0xFFFF) into the stream, followed by
  - 32-bit: the item count as a 32-bit value (DWORD)
  - 64-bit: if the item count is less than 0xFFFF'FFFF, write the item count as a 32-bit DWORD value
    - Otherwise, dump an "invalid value" marker ((DWORD)0xFFFFFFFF) into the stream, followed by the item count as a 64-bit value (DWORD_PTR)
The stream layout is summarized in the following table depending on the item count in the CArray (where ❌ denotes a value that's not present in the stream):

Item count n                    | WORD   | DWORD       | DWORD_PTR
--------------------------------+--------+-------------+----------
n < 0xFFFF                      | n      | ❌          | ❌
0xFFFF <= n < 0xFFFF'FFFF       | 0xFFFF | n           | ❌
n == 0xFFFF'FFFF (32-bit only)  | 0xFFFF | 0xFFFF'FFFF | ❌
0xFFFF'FFFF <= n (64-bit only)  | 0xFFFF | 0xFFFF'FFFF | n
When deserializing the stream, the code reads the item count value, checks whether it matches the "invalid value" marker, and continues with the larger value if a marker was found.
This works across bitnesses as long as the CArray holds no more than 0xFFFF'FFFE values. For 32-bit platforms this is always true; you cannot have a CArray that uses up the entire address space.
When serializing from a 64-bit process, you just need to make sure there are no more than 0xFFFF'FFFE items in the array.
Summary:
For CArrays with fewer than 0xFFFF'FFFF (4294967295) items, the serialized stream is byte-for-byte identical regardless of whether it was created on a 32-bit platform or a 64-bit platform.
There's the odd corner case of a CArray with exactly 0xFFFF'FFFF items on a 32-bit platform1. If that were to be streamed out and read back in on a 64-bit platform, the size field in the stream would be mistaken for the "invalid value" marker, with catastrophic consequences. Luckily, that is not something we need to worry about: a 32-bit process cannot allocate a container that large, since the element storage alone would exceed its address space.
That covers the scenario where a stream serialized on a 32-bit platform is consumed on a 64-bit platform. Everything works as designed, in practice.
On to the other direction then: A stream created on a 64-bit platform to be deserialized on a 32-bit platform. The only relevant disagreement here is containers larger than what a 32-bit program could even represent. The 64-bit serializer will drop an "invalid value" marker (DWORD) followed by the actual item count (DWORD_PTR)2. The 32-bit deserializer will assume that the marker (0xFFFF'FFFF) is the true item count, and fail the subsequent memory allocation without ever looking at the actual item count. Things are torn down from there using whatever exception handling is in place, before any data corruption can happen3.
This is not a novel error mode, unique to cross-bitness interoperability, though. A CArray serialized on a 32-bit platform can fail to be deserialized on a 32-bit platform just as well, if the process runs out of resources. This can happen far earlier than running out of memory, since CArrays need contiguous memory.
1 Line 3 in the table above.
2 Line 4 in the table above.
3 This is assuming there's no catch(...) up the call stack that simply swallows the exception.

What win64 calls require alignment?

I was trying to figure out a port bug from win32 to win64 where a LB_GETSELITEMS message was returning a -1 in the 64 bit port, but not in the original 32 bit environment.
My head about exploded when I finally realized that LB_GETSELITEMS requires the lParam buffer to be more than 2-byte aligned; it must be 8-byte (maybe 4-byte?) aligned.
I've not seen this documented anywhere. Does anyone know of any documentation related to this? Are there any other places where this is a problem?

Can I assume sizeof(GUID)==16 at all times?

The definition of GUID in the Windows headers is like this:
typedef struct _GUID {
    unsigned long  Data1;
    unsigned short Data2;
    unsigned short Data3;
    unsigned char  Data4[8];
} GUID;
However, no packing is defined. Since the alignment of structure members is dependent on the compiler implementation, one could think this structure could be longer than 16 bytes.
If I can assume it is always 16 bytes, my code using GUIDs is more efficient and simpler.
However, it would be completely unsafe if a compiler added some padding between the members for some reason.
My question: do potential reasons for that exist? Or is the probability of the scenario that sizeof(GUID) != 16 actually zero?
It's not official documentation, but perhaps this article can ease some of your fears. I think there was another one on a similar topic, but I cannot find it now.
What I want to say is that Windows structures do have a packing specifier, but it's a global setting somewhere inside the header files (a #pragma pack or something). And it is mandatory, because otherwise programs compiled by different compilers couldn't interact with each other, or even with Windows itself.
It's not zero, it depends on your system. If the alignment is word (4-bytes) based, you'll have padding between the shorts, and the size will be more than 16.
If you want to be sure that it's 16 - manually disable the padding, otherwise use sizeof, and don't assume the value.
If I feel I need to make an assumption like this, I'll put a 'compile time assertion' in the code. That way, the compiler will let me know if and when I'm wrong.
If you have or are willing to use Boost, there's a BOOST_STATIC_ASSERT macro that does this.
For my own purposes, I've cobbled together my own (that works in C or C++ with MSVC, GCC and an embedded compiler or two) that uses techniques similar to those described in this article:
http://www.pixelbeat.org/programming/gcc/static_assert.html
The real tricks to getting the compile time assertion to work cleanly is dealing with the fact that some compilers don't like declarations mixed with code (MSVC in C mode), and that the techniques often generate warnings that you'd rather not have clogging up an otherwise working build. Coming up with techniques that avoid the warnings is sometimes a challenge.
Yes, on any Windows compiler. Otherwise IsEqualGUID would not work: it compares only the first 16 bytes. Similarly, any other WinAPI function that takes a GUID* just checks the first 16 bytes.
Note that you must not assume generic C or C++ rules for windows.h. For instance, a byte is always 8 bits on Windows, even though ISO C allows more (CHAR_BIT only has to be at least 8).
Anytime you write code dependent on the size of someone else's structure,
warning bells should go off.
Could you give an example of some of the simplified code you want to use?
Most people would just use sizeof(GUID) if the size of the structure was needed.
With that said -- I can't see the size of GUID ever changing.
#include <stdio.h>
#include <rpc.h>

int main() {
    GUID myGUID;
    printf("size of GUID is %zu\n", sizeof(myGUID));
    return 0;
}
Got 16. This is useful to know if you need to manually allocate on the heap.

how to do an atomic copy of 128-bit number in gcc inline x86_64 asm?

I haven't done assembly since school (eons ago) and have never done any x86, but I have found a pesky bug in old existing code where somebody isn't doing an atomic op where they should be. The person who wrote the code is long gone and nobody around here knows the answer to my question. What I need to do is create an atomic copy for 128-bit values. The code I currently have is as follows:
void atomic_copy128(volatile void* dest, volatile const void* source) {
#if PLATFORM_BITS == 64
    #ifdef __INTEL_COMPILER
        // For the IA64 platform using the Intel compiler
        *((__int64*)source) = __load128((__int64*)dest, ((__int64*)source) + 1);
    #else
        // For x86_64 compiled with gcc
        __asm__ __volatile__("lock ; movq %0,%1"
            : "=r"(*((volatile long*)(source)))
            : "r"(*((volatile long*)(dest)))
            : "memory");
    #endif
#else
    #error "128 bit operations not supported on this platform."
#endif
}
This isn't the code that I originally tried, since I've messed with it quite a bit while trying get it to compile and run. When I make it a totally invalid instruction, it does not compile. When I run this, it executes until it hits this line and then generates a "Illegal instruction" error message. I'd appreciate any help.
As far as I know, "movq" supports at most one memory operand, and its arguments are of 64-bit size anyway, so even if two memory operands were supported, it still wouldn't give you that atomic 128-bit copy you're looking for.
For Windows:
::memset(dst, -1, 16);
_InterlockedCompareExchange128((__int64 volatile*)source, -1, -1, (__int64*)dst);
(but the const must be removed)
For other compilers, use the cmpxchg16b instruction.

CreateThread() fails on 64 bit Windows, works on 32 bit Windows. Why?

Operating System: Windows XP 64 bit, SP2.
I have an unusual problem. I am porting some code from 32 bit to 64 bit. The 32-bit code works just fine, but when I call CreateThread() in the 64-bit version, the call fails. This happens in three places: two call CreateThread() directly, and one calls _beginthreadex(), which calls CreateThread().
All three calls fail with error code 0x3E6, "Invalid access to memory location".
The problem is all the input parameters are correct.
HANDLE h;
DWORD  threadID;
h = CreateThread(0,            // default security
                 0,            // default stack size
                 myThreadFunc, // valid function to call
                 myParam,      // my param
                 0,            // no flags, start thread immediately
                 &threadID);
All three calls to CreateThread() are made from a DLL I've injected into the target program at the start of the program execution (this is before the program has got to the start of main()/WinMain()). If I call CreateThread() from the target program (same params) via say a menu, it works. Same parameters etc. Bizarre.
If I pass NULL instead of &threadID, it still fails.
If I pass NULL as myParam, it still fails.
I'm not calling CreateThread from inside DllMain(), so that isn't the problem. I'm confused and searching on Google etc hasn't shown any relevant answers.
If anyone has seen this before or has any ideas, please let me know.
Thanks for reading.
ANSWER
Short answer: Stack Frames on x64 need to be 16 byte aligned.
Longer answer:
After much banging my head against the debugger wall and posting responses to the various suggestions (all of which helped in someway, prodding me to try new directions) I started exploring what-ifs about what was on the stack prior to calling CreateThread(). This proved to be a red-herring but it did lead to the solution.
Adding extra data to the stack changes the stack frame alignment. Sooner or later one of the tests gets you to 16 byte stack frame alignment. At that point the code worked. So I retraced my steps and started putting NULL data onto the stack rather than what I thought was the correct values (I had been pushing return addresses to fake up a call frame). It still worked - so the data isn't important, it must be the actual stack addresses.
I quickly realised it was 16-byte alignment for the stack. Previously I was only aware of 8-byte alignment for data. This Microsoft document explains all the alignment requirements.
If the stackframe is not 16 byte aligned on x64 the compiler may put large (8 byte or more) data on the wrong alignment boundaries when it pushes data onto the stack.
Hence the problem I faced - the hooking code was called with a stack that was not aligned on a 16 byte boundary.
Quick summary of alignment requirements, expressed as size : alignment
 1 : 1
 2 : 2
 4 : 4
 8 : 8
10 : 16
16 : 16
Anything larger than 8 bytes is aligned on the next power-of-2 boundary.
I think Microsoft's error code is a bit misleading. The initial STATUS_DATATYPE_MISALIGNMENT could be expressed as a STATUS_STACK_MISALIGNMENT which would be more helpful. But then turning STATUS_DATATYPE_MISALIGNMENT into ERROR_NOACCESS - that actually disguises and misleads as to what the problem is. Very unhelpful.
Thank you to everyone that posted suggestions. Even if I disagreed with the suggestions, they prompted me to test in a wide variety of directions (including the ones I disagreed with).
Written a more detailed description of the problem of datatype misalignment here: 64 bit porting gotcha #1! x64 Datatype misalignment.
The only reason that 64bit would make a difference is that threading on 64bit requires 64bit aligned values. If threadID isn't 64bit aligned, you could cause this problem.
Ok, that idea's not it. Are you sure it's valid to call CreateThread before main/WinMain? It would explain why it works in a menu- because that's after main/WinMain.
In addition, I'd triple-check the lifetime of myParam. CreateThread returns (this I know from experience) long before the function you pass in is called.
Post the thread routine's code (or just a few lines).
It suddenly occurs to me: Are you sure that you're injecting your 64bit code into a 64bit process? Because if you had a 64bit CreateThread call and tried to inject that into a 32bit process running under WOW64, bad things could happen.
Starting to seriously run out of ideas. Does the compiler report any warnings?
Could the bug be due to a bug in the host program, rather than the DLL? There's some other code, such as loading a DLL if you used __declspec(import/export), that occurs before main/WinMain. If that DLLMain, for example, had a bug in it.
I ran into this issue today. And I checked every argument feed into _beginthread/CreateThread/NtCreateThread via rohitab's Windows API Monitor v2. Every argument is aligned properly (AFAIK).
So, where does STATUS_DATATYPE_MISALIGNMENT come from?
The first few lines of NtCreateThread validate parameters passed from user mode.
ProbeForReadSmallStructure (ThreadContext, sizeof (CONTEXT), CONTEXT_ALIGN);
for i386
#define CONTEXT_ALIGN (sizeof(ULONG))
for amd64
#define STACK_ALIGN (16UI64)
...
#define CONTEXT_ALIGN STACK_ALIGN
On amd64, if the ThreadContext pointer is not aligned to 16 bytes, NtCreateThread will return STATUS_DATATYPE_MISALIGNMENT.
CreateThread (actually CreateRemoteThread) allocates ThreadContext on the stack and does nothing special to guarantee the alignment requirement is satisfied. Things will work smoothly if every piece of your code follows the Microsoft x64 calling convention, which unfortunately was not true for me.
PS: The same code may work on newer Windows (say Vista and newer). I didn't check though. I'm facing this issue on Windows Server 2003 R2 x64.
I'm in the business of using parallel threads under Windows for calculations. No funny business, no DLL calls, and certainly no callbacks. The following works in 32-bit Windows. I set up the stack for my calculation, well within the area reserved for my program. All relevant data about areas and start addresses is contained in a data structure that is passed to CreateThread as parameter 3. The address that is called contains a small assembler routine that uses this data structure. Indeed, this routine finds the address to return to on the stack, then the address of the data structure.
There is no reason to go far into this. It just works, and it calculates the number of primes below 2,000,000,000 just fine, in one thread, in two threads or in 20 threads.
Now CreateThread in 64 bits doesn't push the address of the data structure. That seems implausible, so I show you the smoking gun: a dump of a debug session. In the subwindow at the bottom right you see the stack, and there is merely the return address, amidst a sea of zeroes.
The mechanism I use to fill in parameters is portable between 32 and 64 bits. No other call exhibits a difference between word sizes. Moreover, why would the code address work but not the data address?
The bottom line: one would expect that CreateThread passes the data parameter on the stack the same way in 64 bits as in 32 bits, then does a subroutine call. At the assembler level it doesn't work that way. If there are any hidden requirements on e.g. RSP that are automatically fulfilled in C++, that would be very nasty.
P.S. No, there are no 16-byte alignment problems. That lies ages behind me.
Try using _beginthread() or _beginthreadex() instead; you shouldn't be using CreateThread directly.
See this previous question.
