Are the *A Win32 API calls still relevant? - windows

I still see advice about using the LPTSTR/TCHAR types, etc., instead of LPWSTR/WCHAR. I believe Unicode support was well established by Windows 2000, and I frankly don't write code for Windows 98 anymore (excepting special cases, of course). Given that I don't care about Windows 98 (or, even less, ME), as they're decade-old OSes, is there any reason to use the compatibility TCHAR, etc. types? Why still advise people to use TCHAR - what benefit does it add over using WCHAR directly?

If someone tells you to walk up to 1,000,000 lines of non-_UNICODE C++, with plenty of declarations using char instead of wchar_t or TCHAR or WCHAR, you had better be prepared to cope with the non-Unicode Win32 API. Conversion on a large scale is quite costly, and may not be something the source-o-money is prepared to pay for.
As for new code, well, there's so much example code out there using TCHAR that it may be easier to cut and paste, and there is in some cases some friction between WCHAR as wchar_t and WCHAR as unsigned short.
Who knows, maybe some day MS will add a UTF-32 data type under TCHAR?

Actually, the Unicode versions of the functions were introduced with Win32 in 1993, with Windows NT 3.1. In fact, on the NT-based OSes, almost all the *A functions just convert to Unicode and call the *W version internally. Also, support for the *W functions on 9x does exist through the Microsoft Layer for Unicode.
For new programs, I would definitely recommend using the TCHAR macros or WCHARs directly. I doubt MS will be adding support for any other character sizes during NT's lifetime. For existing code bases, I guess it would depend on how important it is to support Unicode vs. the cost of fixing it. The *A functions need to stay in Win32 forever for backward compatibility.
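To make the difference concrete, here is a minimal sketch (my own illustration, not from the original answers) of how the TCHAR machinery resolves to the *A or *W call depending on whether UNICODE/_UNICODE are defined:
#include <windows.h>
#include <tchar.h>

void greet(void)
{
    // With UNICODE/_UNICODE defined, TCHAR is WCHAR, _T("...") is L"...",
    // and MessageBox resolves to MessageBoxW; otherwise it resolves to MessageBoxA.
    LPCTSTR caption = _T("Hello");
    MessageBox(NULL, _T("TCHAR follows the UNICODE macro"), caption, MB_OK);
}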

Related

feature request: an atomicAdd() function included in gwan.h

In the G-WAN KV options, KV_INCR_KEY will use the 1st field as the primary key.
That means a function that increments atomically is already built into the G-WAN core to make this primary index work.
It would be good to make this function available to servlets, i.e. to include it in gwan.h.
By doing so, ANSI C newbies like me could benefit from it.
There was ample discussion about this on the old G-WAN forum, and people were invited to share their experiences with atomic operations in order to build a rich list of documented functions, platform by platform.
Atomic operations are not portable because they address the CPU directly. This means that the code for Intel x86 (32-bit) and AMD64 (64-bit) is different. Each platform (ARM, Power7, Cell, Motorola, etc.) has its own atomic instruction set.
Such a list was not published in the gwan.h file so far because basic operations are easy to find (the GCC compiler offers several atomic intrinsics as C extensions) but more sophisticated operations are less obvious (they require asm skills), and people will build them as they need them - for very specific uses in their code.
Software engineering is always a balance between what can be made available at the lowest possible cost of entry (like the G-WAN KV store, which uses a small number of functions) and how it actually works (which is far less simple to follow).
So, beyond the obvious (incr/decr, set/get), to learn more about atomic operations, use Google, find CPU instruction sets manuals, and arm yourself with courage!
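For reference, here is a minimal sketch (mine, not from the thread) of the basic GCC atomic intrinsics mentioned above; the counter and values are made up:
#include <stdio.h>

int main(void)
{
    volatile int counter = 0;

    /* atomic increment; returns the new value */
    int now = __sync_add_and_fetch(&counter, 1);

    /* atomic compare-and-swap: set counter to 10 only if it is still 1 */
    if (__sync_bool_compare_and_swap(&counter, 1, 10))
        printf("swapped, counter=%d (was %d after the increment)\n", counter, now);

    return 0;
}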
Thanks for Gil's helpful guidance.
Now, I can do it by myself.
I change the code in persistence.c, as below:
First, I changed the definition of val in data to volatile.
//data[0]->val++;
//xbuf_xcat(reply, "Value: %d", data[0]->val);
int new_count, loops = 50000000, time1, time2, time;
time1 = getus();
for(int i = 0; i < loops; i++)   // i must be initialized to 0
{
    new_count = __sync_add_and_fetch(&data[0]->val, 1); // atomic increment
}
time2 = getus();
time = loops / (time2 - time1);  // increments per microsecond
time = time * 1000;              // increments per millisecond
xbuf_xcat(reply, "Value: %d, time: %d incr_ops/msec", new_count, time);
I got 52,000 incr_operations/msec with my old E2180 CPU.
So, with the GCC compiler I can do it by myself.
Thanks again.

Can I assume sizeof(GUID)==16 at all times?

The definition of GUID in the Windows headers is like this:
typedef struct _GUID {
    unsigned long  Data1;
    unsigned short Data2;
    unsigned short Data3;
    unsigned char  Data4[ 8 ];
} GUID;
However, no packing is defined. Since the alignment of structure members depends on the compiler implementation, one could think this structure could be longer than 16 bytes in size.
If I can assume it is always 16 bytes, my code using GUIDs is more efficient and simpler.
However, it would be completely unsafe if a compiler adds some padding between the members for some reason.
My question: do potential reasons exist? Or is the probability of the scenario that sizeof(GUID) != 16 actually 0?
It's not official documentation, but perhaps this article can ease some of your fears. I think there was another one on a similar topic, but I cannot find it now.
What I want to say is that Windows structures do have a packing specifier, but it's a global setting which is somewhere inside the header files. It's a #pragma or something. And it is mandatory, because otherwise programs compiled by different compilers couldn't interact with each other - or even with Windows itself.
It's not zero; it depends on your system. If the alignment is word (4-byte) based, you'll have padding between the shorts, and the size will be more than 16.
If you want to be sure that it's 16, manually disable the padding; otherwise use sizeof and don't assume the value.
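For completeness, forcing the packing for your own structure (a sketch; the struct name is made up, and you should not redefine the real GUID) would look something like this:
#pragma pack(push, 1)              /* force 1-byte packing */
typedef struct _MY_GUID_LIKE {
    unsigned long  Data1;
    unsigned short Data2;
    unsigned short Data3;
    unsigned char  Data4[8];
} MY_GUID_LIKE;
#pragma pack(pop)                  /* restore the previous packing */
/* sizeof(MY_GUID_LIKE) is 16 regardless of the compiler's default alignment */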
If I feel I need to make an assumption like this, I'll put a 'compile time assertion' in the code. That way, the compiler will let me know if and when I'm wrong.
If you have or are willing to use Boost, there's a BOOST_STATIC_ASSERT macro that does this.
For my own purposes, I've cobbled together my own (that works in C or C++ with MSVC, GCC and an embedded compiler or two) that uses techniques similar to those described in this article:
http://www.pixelbeat.org/programming/gcc/static_assert.html
The real trick to getting the compile-time assertion to work cleanly is dealing with the fact that some compilers don't like declarations mixed with code (MSVC in C mode), and that the techniques often generate warnings that you'd rather not have clogging up an otherwise working build. Coming up with techniques that avoid the warnings is sometimes a challenge.
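For illustration, a minimal hand-rolled compile-time assertion along those lines (the macro name here is my own, not from Boost or the linked article) could look like this:
#include <windows.h>

/* The typedef'd array gets size -1, and the build fails, whenever the condition is false. */
#define MY_STATIC_ASSERT(cond, name) typedef char name[(cond) ? 1 : -1]

MY_STATIC_ASSERT(sizeof(GUID) == 16, guid_must_be_16_bytes);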
Yes, on any Windows compiler. Otherwise IsEqualGUID would not work: it compares only the first 16 bytes. Similarly, any other WinAPI function that takes a GUID* just checks the first 16 bytes.
Note that you must not assume generic C or C++ rules for windows.h. For instance, a byte is always 8 bits on Windows, even though ISO C allows it to be wider.
Anytime you write code dependent on the size of someone else's structure, warning bells should go off.
Could you give an example of some of the simplified code you want to use?
Most people would just use sizeof(GUID) if the size of the structure was needed.
With that said -- I can't see the size of GUID ever changing.
#include <stdio.h>
#include <rpc.h>

int main () {
    GUID myGUID;
    printf("size of GUID is %u\n", (unsigned)sizeof(myGUID)); /* cast: sizeof yields size_t */
    return 0;
}
Got 16. This is useful to know if you need to manually allocate on the heap.

Some Windows API calls fail unless the string arguments are in the system memory rather than local stack

We have an older, massive C++ application that we have been converting to support Unicode as well as 64 bits. The following strange thing has been happening:
Calls to registry functions and window creation functions, like the following, have been failing:
hWnd = CreateSysWindowExW( ExStyle, ClassNameW.StringW(), Label2.StringW(), Style,
Posn.X(), Posn.Y(),
Size.X(), Size.Y(),
hParentWnd, (HMENU)Id,
AppInstance(), NULL);
ClassNameW and Label2 are instances of our own Text class which essentially uses malloc to allocate the memory used to store the string.
Anyway, when the functions fail, and I call GetLastError it returns the error code for "invalid memory access" (though I can inspect and see the string arguments fine in the debugger). Yet if I change the code as follows then it works perfectly fine:
BSTR Label2S = SysAllocString(Label2.StringW());
BSTR ClassNameWS = SysAllocString(ClassNameW.StringW());
hWnd = CreateSysWindowExW( ExStyle, ClassNameWS, Label2S, Style,
Posn.X(), Posn.Y(),
Size.X(), Size.Y(),
hParentWnd, (HMENU)Id,
AppInstance(), NULL);
SysFreeString(ClassNameWS); ClassNameWS = 0;
SysFreeString(Label2S); Label2S = 0;
So what gives? Why would the original functions work fine with the arguments in local memory, but when used with Unicode, the registry functions require SysAllocString, and when used in 64-bit, the window creation functions also require SysAllocString'd string arguments? Our window procedure functions have all been converted to be Unicode, always, and yes, we use SetWindowLongW to call the correct default Unicode DefWindowProcW, etc. That all seems to work fine, and handles and draws Unicode properly.
The documentation at http://msdn.microsoft.com/en-us/library/ms632679%28v=vs.85%29.aspx does not say anything about this. While our application is massive we do use debug heaps and tools like Purify to check for and clean up any memory corruption. Also at the time of this failure, there is still only one main system thread. So it is not a thread issue.
So what is going on? I have read that if string arguments are marshalled anywhere or passed across process boundaries, then you have to use SysAllocString/BSTR, yet we call lots of API functions, and there is lots of code out there which calls these functions using plain local strings.
What am I missing? I have tried Googling this, as someone else must have run into this, but with little luck.
Edit 1: Our StringW function does not create any temporary objects which might go out of scope before the actual API call. The function is as follows:
class Text {
public:
    const wchar_t* StringW () const
    {
        return TextStartW;
    }
    wchar_t* TextStartW; // pointer to current start of text in DataArea
};
I have been running our application with the debug heap and memory checking and other diagnostic tools, and found no source of memory corruption, and looking at the assembly, there is no sign of temporary objects or invalid memory access.
BUT I finally figured it out:
We compile our code with /Zp1, which means byte-aligned memory allocations. SysAllocString (in 64 bits) always returns a pointer that is aligned on an 8-byte boundary. Presumably a 32-bit ANSI C++ application goes through an API layer to the underlying Unicode Windows DLLs, which would also align the pointer for you.
But if you use Unicode, you do not get that incidental pointer alignment that the conversion mapping layer gives you, and if you use 64 bits, of course the situation gets even worse.
I added a method to our Text class which shifts the string pointer so that it is aligned on an eight-byte boundary, and voila, everything runs fine!
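The actual Text class isn't shown here, but the idea is roughly the following sketch (names are illustrative, not our real code): keep a few spare bytes in the buffer and round the start pointer up to the next 8-byte boundary before handing it out.
#include <cstdint>

// Illustrative helper, not the real Text class code: round a pointer
// up to the next 8-byte boundary.
static wchar_t* AlignTo8(wchar_t* p)
{
    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
    addr = (addr + 7) & ~static_cast<std::uintptr_t>(7);
    return reinterpret_cast<wchar_t*>(addr);
}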
Of course the Microsoft people say it must be memory corruption and that I am jumping to the wrong conclusion, but there is evidence that this is not the case.
Also, if you use /Zp1 and include windows.h in a 64-bit application, the debugger will tell you sizeof(BITMAP)==28, but calling GetObject on a bitmap will fail and tell you it needs a 32-byte structure. So I suspect that some of Microsoft's API is inherently dependent on aligned pointers, and I also know that some optimized assembly (I have seen some from Fortran compilers) takes advantage of that and crashes badly if you ever give it unaligned pointers.
So the moral of all of this is: don't use "funky" compiler arguments like /Zp1. In our case we have to for historical reasons, but the number of times this has bitten us...
Someone please give me a "this is useful" tick on my answer?
Using a bit of psychic debugging, I'm going to guess that the strings in your application are pooled in a read-only section.
It's possible that the CreateSysWindowsEx is attempting to write to the memory passed in for the window class or title. That would explain why the calls work when allocated on the heap (SysAllocString) but not when used as constants.
The easiest way to investigate this is to use a low level debugger like windbg - it should break into the debugger at the point where the access violation occurs which should help figure out the problem. Don't use Visual Studio, it has a nasty habit of being helpful and hiding first chance exceptions.
Another thing to try is to enable appverifier on your application - it's possible that it may show something.
Calling a Windows API function does not cross the process boundary, since the various Windows DLLs are loaded into your process.
It sounds like whatever pointer StringW() is returning isn't valid when Windows tries to access it. I would look there - is it possible that the pointer returned goes out of scope and is deleted shortly after the function is called?
If you share some more details about your string class, that could help diagnose the problem here.

Why did POCO choose to use POSIX semaphores for OSX?

I am a newbie to Mac/OSX. I am working on Titanium, a cross-platform runtime, which uses the POCO library for most of its portable C++ API. I see that POCO uses a POSIX semaphore for its NamedMutex implementation on OSX, as opposed to the SysV semaphore that it uses for a few other *NIXes.
bool NamedMutexImpl::tryLockImpl()
{
#if defined(sun) || defined(__APPLE__) || defined(__osf__) || defined(__QNX__) || defined(_AIX)
    return sem_trywait(_sem) == 0;
#else
    struct sembuf op;
    op.sem_num = 0;
    op.sem_op = -1;
    op.sem_flg = SEM_UNDO | IPC_NOWAIT;
    return semop(_semid, &op, 1) == 0;
#endif
}
From a few searches, I see that the SysV semaphore API is supported on OSX as well: http://www.osxfaq.com/man/2/semop.ws. Any idea why the POCO developers chose to use the POSIX API on OSX?
I am particularly interested in the SEM_UNDO functionality in the above call, which the POSIX semaphores can't give.
Any Idea, why POCO developers chose to use POSIX API on OSX?
That seems to be a rather arbitrary decision on the part of the POCO developers: neither kind of semaphore really matches Windows' named semaphores (after which they are apparently modeled). There is no semaphore on POSIX which has its own symbolic namespace akin to a file system. (SysV sems have a namespace made of integer ids, but no symbolic names.)
If the posted code really comes from the library, I can only advise you to stop relying on the library for portability. Well, at least with the semaphores you apparently have to start your own implementation already.
Edit 1: Check how the semaphores are implemented for Windows. It's common for such libraries to use Windows' critical sections, and then the POSIX sem_t is a proper match. You need SEM_UNDO only if the semaphore is accessed by several processes - it doesn't work for threads, i.e. the undo happens when the process crashes. Though the fact that on Linux they use SysV is quite troubling: SysV semaphores are global and thus have an OS limit (which can be changed at run time) on their number, while sem_t semaphores are local to the process, are just a structure in private memory, and are limited only by the amount of memory the process can allocate.
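For context, a named POSIX semaphore of the kind the posted tryLockImpl relies on is typically created and probed like this (a minimal sketch; the semaphore name and mode are made up):
#include <fcntl.h>       /* O_CREAT */
#include <semaphore.h>
#include <stdio.h>

int main(void)
{
    /* create or open a named semaphore with an initial count of 1 */
    sem_t* sem = sem_open("/my.named.mutex", O_CREAT, 0644, 1);
    if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

    if (sem_trywait(sem) == 0) {   /* non-blocking lock attempt */
        /* ... critical section ... */
        sem_post(sem);             /* release */
    }
    sem_close(sem);
    return 0;
}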
P.S. Reluctantly: the real reason might be that the main POCO development takes place on Windows (as is usual for "portable libraries"; they are "portable to Windows", so to say, trying to make *NIX look like Windows). The UNIX implementation is very often an afterthought, implemented by somebody who has seen a terminal screen from a few meters away and never read the man page beyond the function prototype. That was my personal experience with a couple of such "portable libraries" in the past.

DWORD_PTR, INT_PTR, LONG_PTR, UINT_PTR, ULONG_PTR When, How and Why?

I found that Windows has some new Windows data types:
DWORD_PTR, INT_PTR, LONG_PTR, UINT_PTR, ULONG_PTR
Can you tell me when, how, and why to use them?
The *_PTR types were added to the Windows API in order to support Win64's 64-bit addressing.
Because 32-bit APIs typically passed pointers using data types like DWORD, it was necessary to create new types for 64-bit compatibility that could substitute for DWORD in 32-bit applications, but were extended to 64 bits when used in 64-bit applications.
So, for example, for application developers who want to write code that works as 32-bit OR 64-bit, the Windows 32-bit API SetWindowLong(HWND,int,LONG) was changed to SetWindowLongPtr(HWND,int,LONG_PTR).
In a 32-bit build, SetWindowLongPtr is simply a macro that resolves to SetWindowLong, and LONG_PTR is likewise a macro that resolves to LONG.
In a 64-bit build, on the other hand, SetWindowLongPtr is an API that accepts a 64-bit value as its 3rd parameter, and ULONG_PTR is a typedef for unsigned __int64.
By using these _PTR types, one codebase can compile for both Win32 and Win64 targets.
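As a small usage sketch (the window handle and the SomeState type are assumptions for illustration), the pointer-sized variant lets the same code carry a pointer through the API in both 32-bit and 64-bit builds:
#include <windows.h>

struct SomeState { int value; };   // hypothetical per-window data

void AttachState(HWND hwnd)
{
    SomeState* state = new SomeState{ 42 };
    // LONG_PTR is pointer-sized in both 32-bit and 64-bit builds,
    // so the pointer survives the round trip.
    SetWindowLongPtr(hwnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(state));
}

SomeState* GetState(HWND hwnd)
{
    return reinterpret_cast<SomeState*>(GetWindowLongPtr(hwnd, GWLP_USERDATA));
}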
When performing pointer arithmetic, these types should also be used in 32-bit code that needs to be compatible with 64-bit.
So, if you need to access an array with more than 4 billion elements, you would need to use an INT_PTR rather than an INT:
CHAR* pHuge = new CHAR[0x200000000]; // allocate an 8 GiB buffer
INT idx;       // 32-bit index
INT_PTR idx2;  // pointer-sized index
pHuge[idx];    // can only access the first 4 billion elements
pHuge[idx2];   // can access the entire array
Chris Becke is pretty much correct. It's just worth noting that these _PTR types are simply types that are 32 bits wide in a 32-bit app and 64 bits wide in a 64-bit app. It's as simple as that.
You could easily use __int3264 instead of INT_PTR for example.

Resources