What are the differences between the two symbols TCHAR and _TCHAR defined in the Windows header tchar.h? Explain with examples. Briefly describe scenarios where you would use TCHAR as opposed to _TCHAR in your code. (10 marks)
In addition to what @RussC said, TCHAR is used by the Win32 API and is based on the UNICODE define, whereas _TCHAR is used by the C runtime and is based on the _UNICODE define instead. UNICODE and _UNICODE are usually defined/omitted together, making TCHAR and _TCHAR interchangeable, but that is not a requirement. They are semantically separated for use by different frameworks.
Found your answer over here:
MSDN Forums >> Visual Studio Developer Center >> TCHAR vs _TCHAR
TCHAR and _TCHAR are identical, although since TCHAR doesn't have a leading underscore, Microsoft isn't allowed to reserve it as a keyword (imagine if you had a variable called TCHAR; think what would happen). Hence TCHAR will not be #defined when Language Extensions are disabled (/Za).
TCHAR is defined in winnt.h (which you'll get when you #include <windows.h>), and also in tchar.h under /Ze.
_TCHAR is available only in tchar.h (which also #defines _TSCHAR and _TUCHAR, the signed and unsigned variants of the normal TCHAR data type).
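For example (a minimal sketch, not from the original answer; it assumes UNICODE and _UNICODE are defined together, as they normally are, and uses GetWindowsDirectory and _tprintf simply as representatives of the Win32 API and the C runtime):
#include <windows.h>   // TCHAR and the Win32 API -- controlled by UNICODE
#include <tchar.h>     // _TCHAR, _T(), _tprintf, _tcslen -- controlled by _UNICODE
int main()
{
    // TCHAR goes with Win32 calls
    TCHAR winDir[MAX_PATH];
    GetWindowsDirectory(winDir, MAX_PATH);
    // _TCHAR goes with the C runtime's _t* functions
    const _TCHAR *msg = _T("hello");
    _tprintf(_T("%s is %u characters long\n"), msg, (unsigned)_tcslen(msg));
    return 0;
}
Because UNICODE and _UNICODE are almost always defined (or omitted) together, both halves compile to the same character width, which is exactly why TCHAR and _TCHAR are interchangeable in practice.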
Working on a Qt app on Windows. I include QVBoxLayout in my source file only, and this causes errors because a macro from winuser.h overwrites my method name.
foo.hpp
class foo
{
public:
    void ChangeMenu();
};
foo.cpp
#include "foo.hpp"
#include "QVBoxLayout" // <--- this includes winuser.h
void foo::ChangeMenu() {}
Now what happens is winuser.h has a macro
#ifdef UNICODE
#define ChangeMenu ChangeMenuW
#else
#define ChangeMenu ChangeMenuA
#endif // !UNICODE
This changes my function definition to ChangeMenuW but my declaration is still ChangeMenu.
How should I solve this? How can winuser.h define such a "normal" name as a macro?
Version of winuser.h is "windows kits\10\include\10.0.16299.0"
Pretty much any Windows API that deals with strings is actually a macro that resolves to an A or W version. There's no way around it; you can either:
avoid including windows.h, but as you noticed, it creeps through;
brutally #undef the macro before declaring/using your function; this is a fitting punishment for a header that hoards such normal, non-macro-looking identifiers, but it is tedious, and some other code may actually need the Win32 function;
just accept it as a sad fact of life and avoid all the relevant Win32 API names; if you use Qt and follow its naming convention, this should be easy, as Qt functions use lowerCamelCase (as opposed to Win32's UpperCamelCase);
include windows.h explicitly, straight in your header (possibly under an #ifdef _WIN32); this makes sure your identifier is replaced by the macro in all instances, so everything will work fine even though the compiler actually compiles a function with a different name; suitable for standalone projects, not suitable for libraries. (Thanks @Jonathan Potter for suggesting this; a sketch of this option follows below.)
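A minimal sketch of that last option, using the foo class from the question and guarding the include with #ifdef _WIN32 (an assumption; adjust to your build setup):
// foo.hpp
#ifdef _WIN32
#include <windows.h>   // makes the ChangeMenu macro visible to every file that uses foo
#endif
class foo
{
public:
    void ChangeMenu();   // silently becomes ChangeMenuW (or ChangeMenuA) everywhere it appears
};
The function the compiler actually emits is named ChangeMenuW (or ChangeMenuA), but since every translation unit sees the same macro, the declaration, the definition and all call sites agree, so everything compiles and links.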
You could also just not worry about this issue. Although your method name is the same as the Windows API name, the two will not be mixed up, as long as the UNICODE setting is consistent between the place where you define your method and the place where you call it. If you call ChangeMenu() directly, you will call the WinAPI, and if you write
foo f;
f.ChangeMenu();
or
foo::ChangeMenu(); (if the method is static)
You will call your method.
And if you want to disable the WinAPI macro:
#ifdef ChangeMenu
#undef ChangeMenu
// place the code that defines/calls your own ChangeMenu() here, while the macro is undefined
#ifdef UNICODE
#define ChangeMenu ChangeMenuW
#else
#define ChangeMenu ChangeMenuA
#endif // !UNICODE
#endif
(It looks very tedious.)
I am reading about Win32 programming with C/C++ and came across a page which declares the entry point wWinMain as:
int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, PWSTR pCmdLine, int nCmdShow);
I understand most of this, except I do not understand where the WINAPI part comes from.
I know that this is a macro for a calling convention. That's not what I am seeking clarity on. My question is not on calling conventions.
When I look at Microsoft's documentation on C++ functions and read through the optional parts of a function declaration, I don't see any mention of a calling convention anywhere in the declaration. So where exactly does Microsoft's documentation talk about including calling conventions in a function declaration?
The portion of Microsoft's documentation that you linked to refers to only standard components of the C++ language. Calling conventions are not part of the C++ specification.
The C++ specification describes how a function can declare its return type and parameters, but it does not define how those values are actually passed between caller and callee. Calling conventions dictate that, and different compilers/platforms implement calling conventions in their own ways. So the C++ specification doesn't describe calling conventions.
In Microsoft's documentation, calling conventions are referred to as Microsoft-specific modifiers to the C++ language. That is technically correct, as any identifier that begins with one or two underscores is reserved for vendor-specific extensions, and all of the known calling conventions begin with underscores in their names, e.g.:
__cdecl
__stdcall
__fastcall
__thiscall
__safecall
__vectorcall
__pascal
__fortran
__syscall
etc...
Macros like WINAPI, STDMETHODCALLTYPE, etc. simply map to a specific calling convention (usually __stdcall, but sometimes __cdecl).
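For example, WINAPI itself is nothing more than a #define in the Windows headers (a simplified sketch; the exact lines live in minwindef.h/winnt.h and vary by SDK version):
#define WINAPI    __stdcall
#define CALLBACK  __stdcall
#define WINAPIV   __cdecl
// so the declaration from the question is effectively
int __stdcall wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       PWSTR pCmdLine, int nCmdShow);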
If omitted in a function declaration, the compiler decides which calling convention it wants to use (usually __cdecl).
Compilers from different vendors are not required to implement each others extensions. However, in the case of calling conventions, most compilers at least implement __cdecl and __stdcall, and agree on how they should work, for code portability. But make no mistake, calling conventions are still a vendor-specific extension to the standard language specification.
I am trying to understand the purpose of using Windows data types when defining function parameters and structure fields. I've read explanations detailing how this prevents code from "breaking" if the "underlying types" are changed. Can someone present a concise explanation and example to clarify? Thanks.
Found answer in a similar post (Why are the standard datatypes not used in Win32 API?):
And the reason that these types are defined the way they are, rather than using int, char and so on, is that it removes "whatever the compiler thinks an int should be sized as" from the interface of the OS. Which is a very good thing, because whether you use compiler A, compiler B, or compiler C, they will all use the same types; only the library interface header file needs to do the right thing when defining the types.
By defining types that are not standard types, it's easy to change int from 16 to 32 bits, for example. The first C/C++ compilers for Windows used 16-bit integers. It was only in the mid to late 1990s that Windows got a 32-bit API, and up until that point you were using an int that was 16 bits. Imagine that you have a well-working program that uses several hundred int variables, and all of a sudden you have to change ALL of those variables to something else... That wouldn't be very nice, especially as SOME of those variables DON'T need changing: moving to a 32-bit int makes no difference for some of your code, so there is no point in changing those parts.
It should be noted that WCHAR is NOT the same as const char - WCHAR is a "wide char" so wchar_t is the comparable type.
So, basically, "define our own types" is a way to guarantee that it's possible to change the underlying compiler architecture without having to change (much of) the source code. All larger projects that do machine-dependent coding do this sort of thing.
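To make that concrete, here is roughly what the SDK headers do (heavily simplified; the real definitions vary by SDK version and target):
typedef unsigned long  DWORD;    // always 32 bits on Windows, no matter what 'int' happens to be
typedef int            BOOL;
typedef unsigned short WORD;
typedef wchar_t        WCHAR;    // 16-bit wide character (older SDKs used unsigned short here)
typedef void          *HANDLE;
// the API is declared only in terms of these names, e.g.
// DWORD WINAPI GetTickCount(void);
If the compiler's notion of int or long ever changes, only these typedefs in the interface headers need to change; code written against DWORD, WORD, HANDLE and friends keeps compiling.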
What's the meaning of BSTR, LPCOLESTR, LPCWSTR, LPTSTR, LPCWCHAR, and many others if they're all just a bunch of defines that resolve to wchar_t anyway?
LPTSTR indicates the string buffer can be ANSI or Unicode, depending on whether the macro UNICODE is defined.
LPCOLESTR was invented by the OLE team; it switches between char and wchar_t based on the definition of OLE2ANSI.
LPCWSTR is a pointer to a constant wchar_t string.
BSTR is an LPOLESTR that has been allocated with SysAllocString.
LPCWCHAR is a pointer to a single constant wide character.
They're actually all rather different, or at least they were at some point. OLE was developed when it needed wide strings but the Windows API was still Win16 and didn't support wide strings natively at all.
Also, early versions of the Windows SDK didn't use wchar_t for WCHAR, but unsigned short. The Windows SDK on GCC gets interesting: I'm led to believe that 32-bit GCC has a 32-bit wchar_t, and on compilers with a 32-bit wchar_t, WCHAR would be defined as unsigned short or some other type that is 16 bits on that compiler.
LPTSTR, LPWSTR and similar defines really are just defines. BSTR and LPOLESTR have special meanings: they indicate that the string pointed to is allocated in a special way.
A string pointed to by a BSTR must be allocated with the SysAllocString() family of functions. A string pointed to by an LPOLESTR is usually allocated with CoTaskMemAlloc() (this should be looked up in the documentation of the COM call accepting/returning it).
If the allocation/deallocation requirements for strings pointed to by BSTR and LPOLESTR are violated, the program can run into undefined behaviour.
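A short sketch of the difference in practice (error handling omitted; the COM call is made up for illustration):
#include <windows.h>
#include <oleauto.h>
void demo()
{
    LPCWSTR literal = L"plain wide string";   // just a const wchar_t*, no special allocation
    BSTR b = SysAllocString(literal);         // a BSTR must come from the SysAllocString family
    // pSomething->Method(b);                 // hypothetical COM method taking a BSTR
    SysFreeString(b);                         // ...and must be freed with SysFreeString
}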
MSDN's page on Windows Data Types might provide clarification as to the differences between some of these data types.
LPCWSTR - Pointer to a constant null-terminated string of 16-bit Unicode characters.
LPTSTR - An LPWSTR if UNICODE is defined, an LPSTR otherwise.
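In the headers these really are just pointer typedefs (simplified from winnt.h; the actual declarations vary by SDK version):
typedef WCHAR       *LPWSTR;
typedef const WCHAR *LPCWSTR;
#ifdef UNICODE
typedef LPWSTR  LPTSTR;
typedef LPCWSTR LPCTSTR;
#else
typedef LPSTR   LPTSTR;
typedef LPCSTR  LPCTSTR;
#endif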
#include <cmath>    // for std::log
#include <limits>   // for std::numeric_limits

template <class T>
struct scalar_log_minimum {
public:
    typedef T value_type;
    typedef T result_type;

    static result_type initial_value() {
        return std::log(std::numeric_limits<result_type>::max());
    }

    static void update(result_type& t, const value_type& x) {
        if ( (x > 0) && (std::log(x) < t) ) t = std::log(x);
    }
};
I got the following error while trying to compile the above:
functional_ext.hpp:55:59: macro "max" requires 2 arguments, but only 1 given
max is not a macro, right? Then what is this error? BTW, I am using Visual Studio 2005.
Also, what does 55:59 mean? 55 is the line number, but what is 59?
I find the many #defines that you encounter once you include windows.h very disturbing (not only max and min; I also had problems with other generic words, like Rectangle, if I'm not mistaken). Therefore, I have developed the habit of including windows.h only when absolutely necessary, and never in header files. This reduces the pain to a small number of C++ files that are platform-specific.
Unfortunately some Boost libraries (I believe Thread and Asio) do include windows.h in their headers, and I still run into this kind of silly problem from time to time.
My solution for the remainder of the situations where this causes problems is to #undef the problematic symbols after the inclusion of the header files.
You're including a header file somewhere that #defines max as a macro. The best solution would be to figure out where it's being defined, and inhibit it from being defined if possible. Alternatively, you could just #undef it:
#include <evil_header_which_defines_max.h>
#undef max
As others have noted, including windows.h is probably your problem. Microsoft provides a means to "turn off" parts of windows.h with preprocessor symbols. You can define these symbols as part of your build or directly in code.
Using preprocessor symbols to conditionally skip sections of windows.h may or may not be considered elegant but in the general case it is an easier, more general and more scalable solution than #undef.
Here's how to skip defining min or max as macros:
#define NOMINMAX
#include <windows.h>
Note that many include files will, at some point, include windows.h. In such cases setting up your defines at a more global level may be more convenient.
If you search through windows.h, you can find a bunch of other preprocessor symbols (e.g., NOOPENFILE, NOKANJI, NOKERNEL and many others) that can often be useful.
It's a macro called max that gets in the way, as Adam explained. Another solution (more of a "hotfix") is to put parentheses around the function name, to prevent it from being seen as a macro invocation:
return std::log((std::numeric_limits<result_type>::max)());