I am using Alien for Lua to reference the WaitForSingleObject function in the Windows Kernel32.dll.
I am pretty new to Windows programming, so the question I have is about the following #defined variables referenced by the WaitForSingleObject documentation:
If dwMilliseconds is INFINITE, the function will return only when the object is signaled.
What is the INFINITE value? I would naturally assume it to be -1, but I cannot find this to be documented anywhere.
Also, in the following table, the documentation lists the return values in hexadecimal, but I am confused as to why they have an L character after the last digit. Could this be something as simple as casting it to a Long?
The reason I ask is that Lua uses a Number data type, so I am not sure whether I should be checking for this return value using hex digits (0-F) or decimal digits (0-9).
The thought crossed my mind to just open a C++ application and print out these values, so I did just that:
#include <windows.h>
#include <process.h>
#include <iostream>

int main()
{
    // Print each constant on its own line so the values can be told apart.
    std::cout << INFINITE << std::endl;
    std::cout << WAIT_OBJECT_0 << std::endl;
    std::cout << WAIT_ABANDONED << std::endl;
    std::cout << WAIT_TIMEOUT << std::endl;
    std::cout << WAIT_FAILED << std::endl;
    system("pause");
    return 0;
}
The final Lua results based on my findings are:
local INFINITE = 4294967295
local WAIT_OBJECT_0 = 0
local WAIT_ABANDONED = 128
local WAIT_TIMEOUT = 258
local WAIT_FAILED = 4294967295
I tried to Google for the same information. Eventually, I found this Q&A.
I found two sources with: #define INFINITE 0xFFFFFFFF
https://github.com/tpn/winsdk-10/blob/master/Include/10.0.10240.0/um/WinBase.h#L704
https://github.com/Alexpux/mingw-w64/blob/master/mingw-w64-tools/widl/include/winbase.h#L365
For function WaitForSingleObject, parameter dwMilliseconds has type DWORD.
From here: https://learn.microsoft.com/en-us/windows/win32/winprog/windows-data-types
I can see: DWORD A 32-bit unsigned integer.
Thus, @RemyLebeau's comment above looks reasonable and valid:
`4294967295` is the same as `-1` when interpreted as a signed integer type instead.
In short: ((DWORD) -1) == INFINITE
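As a sanity check, here is a minimal compile-time sketch (assuming the standard Windows headers) showing that the macros match the decimal constants used in the Lua snippet above:

#include <windows.h>

// Each Windows macro equals the decimal constant used in the Lua code above.
static_assert(INFINITE == 4294967295u, "INFINITE is 0xFFFFFFFF");
static_assert(WAIT_OBJECT_0 == 0, "WAIT_OBJECT_0 is 0x00000000L");
static_assert(WAIT_ABANDONED == 128, "WAIT_ABANDONED is 0x00000080L");
static_assert(WAIT_TIMEOUT == 258, "WAIT_TIMEOUT is 0x00000102L");
static_assert(WAIT_FAILED == 4294967295u, "WAIT_FAILED is 0xFFFFFFFF");
static_assert(INFINITE == (DWORD)-1, "the -1 interpretation mentioned above");

int main() { return 0; }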
Last comment: ironically, this "infinite" value is reminiscent of the Boeing 787 issue where the plane needed to be rebooted once every 51 days. It feels eerily similar!
I'm trying to understand what should be the correct behavior of C++11
std::get_time() when the input data is "shorter" than expected by the format
string. For example, what the following program should print:
#include <ctime>
#include <iomanip>
#include <sstream>
#include <iostream>

int main (int argc, char* argv[])
{
    using namespace std;

    tm t {};
    istringstream is ("2016");
    is >> get_time (&t, "%Y %d");

    cout << "eof: " << is.eof () << endl
         << "fail: " << is.fail () << endl;
}
Note that get_time()
behavior is described in terms of
std::time_get<CharT,InputIt>::get().
Based on the latter (see paragraph 1c) I would expect both eofbit and failbit
to be set and so the program to print:
eof: 1
fail: 1
However, for all the major Standard C++ Library implementations (libstdc++
10.2.1, libc++ 11.0.0, and MSVC 16.8) it prints:
eof: 1
fail: 0
Interestingly, for MSVC before 16.8 it prints:
eof: 1
fail: 1
But the commit "std::get_time should not fail when format is longer than the stream"
suggests that this was fixed deliberately.
Could someone clarify whether (and why) the mentioned standard libraries behave correctly and, if that is the case, how one is supposed to detect that the format string was not fully used?
I cannot explain with 100% accuracy, but I can try to explain why the function behaves as observed.
I suspect that the failbit is not set in your case, where you expect condition 1c to apply, because case 1b takes precedence:
b) There was a parsing error (err != std::ios_base::goodbit)
According to the eofbit section of https://en.cppreference.com/w/cpp/io/ios_base/iostate , one of the situations in which the eofbit is set is when
The std::get_time I/O manipulator and any of the std::time_get parsing functions: time_get::get, time_get::get_time, time_get::get_date etc., if the end of the stream is reached before the last character needed to parse the expected date/time value was processed.
The same source for the failbit says:
The time input manipulator std::get_time (technically, time_get::get it calls), if the input cannot be unambiguously parsed as a time value according to the given format string.
So my guess is that when the input is 2000, get tries to read it in using operator>>(std::string&), hits the eof condition and sets the eofbit. This satisfies condition 1b, so condition 1c cannot be applied.
If the function expects a year and the input is shorter than 4 digits, e.g. 200, or if it contains a space after the year, e.g. "2000 ", or contains more than 4 digits, e.g. 20001, the function returns failbit. However, if the input is a 4-digit number starting with 0's, e.g. 0005, the function returns eofbit == 1, failbit == 0. This is in accordance with the specification of the %Y format specifier:
parses full year as a 4 digit decimal number, leading zeroes permitted but not required
So I hope this explains why condition 1c is sometimes not taken into account. We can detect that the format string has not been fully used in the usual way, by testing the good() member function. I believe distinguishing between the function returning failbit == 1 and failbit == 0 is of very little practical importance. I also believe the standard is imprecise here, but if we assume that the user is interested in the value of good(), this lack of precision is of no practical relevance.
It is also possible that the value of failbit in the case you consider is implementation-defined: an implementation could try and read exactly 4 characters to satisfy the %Y format specifier, in which case the eofbit would not be set. But this is only my guess.
EDIT
Look at this modification of your program:
// Includes from the original program, plus <string> for std::string.
#include <ctime>
#include <iomanip>
#include <sstream>
#include <iostream>
#include <string>

int main (int argc, char* argv[])
{
    using namespace std;

    tm t {};
    istringstream is ("2016");
    // is >> get_time (&t, "%Y %d");
    std::string s;
    is >> s;

    cout << "eof: " << is.eof () << endl
         << "fail: " << is.fail () << endl;
}
I replaced get_time with reading into a std::string, but the behavior did not change! The string was read up to the end of the stream, so the stream state cannot be set to fail; however, reading hit the end-of-file, so the eofbit has been set!
eof: 1
fail: 0
What I'm saying is that a similar phenomenon can take place inside get_time and then the stream's state is propagated up to the result of get_time.
Ok, it seems that all the mentioned implementations behave according to the
C++11 standard.
Here is my understanding of what happens in the above program.
std::get_time() does all the preparations and calls std::time_get<CharT,InputIt>::get().
Since the first format string character is '%', the get() function calls
do_get() at the first iteration of the parsing loop.
do_get() reads "2016" while processing the %Y specifier and fills the
respective field in the time object. Besides that, it sets eofbit according to
the standard, since "the end of the input stream is reached after reading a
character". This makes get() function to bail out from the loop after the
do_get() call due to 1b condition (see get() for details), with only eofbit set for the stream. Note
that the format part that follows %Y is fully ignored.
But if we, for example, change the input stream from "2016" to "2016 " (append a space character), then do_get() doesn't set eofbit, get() reads/matches the spaces in the stream and the format after the do_get() call, and then bails out due to condition 1c with both eofbit and failbit set.
Generally, reading with std::get_time() seems to succeed (failbit is not set) when either the format string is fully matched against the stream (which may still have some data in it) or the end of the stream is reached after a conversion specifier was successfully applied (with the rest of the format string ignored).
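For what it's worth, here is a small sketch that runs the input variants discussed in the two answers above through the same "%Y %d" format. The expected bits in the comments are taken from those descriptions, not from the standard itself, and actual output may differ between library implementations:

#include <ctime>
#include <iomanip>
#include <sstream>
#include <iostream>

int main ()
{
    using namespace std;

    // Inputs discussed above, with the bits the answers predict:
    //   "2016"  -> eof: 1, fail: 0  (eof is hit right after %Y; the rest of the format is ignored)
    //   "2016 " -> eof: 1, fail: 1  (the trailing space is matched, then condition 1c applies)
    //   "200"   -> fail: 1          (shorter than a 4-digit year)
    //   "20001" -> fail: 1          (more than 4 digits)
    //   "0005"  -> eof: 1, fail: 0  (leading zeroes are permitted by %Y)
    for (const char* input : {"2016", "2016 ", "200", "20001", "0005"})
    {
        tm t {};
        istringstream is (input);
        is >> get_time (&t, "%Y %d");
        cout << '"' << input << "\"\teof: " << is.eof ()
             << "  fail: " << is.fail () << endl;
    }
}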
When a function takes an rvalue reference that it doesn't use in some branches, what should it do with the rvalue to maintain the semantic correctness of its signature and to be consistent about ownership of the data? Consider the sample code below:
#include <memory>
#include <list>
#include <iostream>
struct Packet {};
std::list<std::unique_ptr<Packet>> queue;
void EnQueue(bool condition, std::unique_ptr<Packet> &&pkt) {
    if (condition) queue.push_back(std::move(pkt));
    else /* How to consume the pkt? */;
}

int main()
{
    std::unique_ptr<Packet> upkt1(new Packet());
    std::unique_ptr<Packet> upkt2(new Packet());

    EnQueue(true, std::move(upkt1));
    EnQueue(false, std::move(upkt2));

    std::cout << "raw ptr1: " << upkt1.get() << std::endl;
    std::cout << "raw ptr2: " << upkt2.get() << std::endl;
    return 0;
}
The signature of the EnQueue function indicates that it will take ownership of the data passed to it, but this is only true if it hits the if path; if it hits the else path instead, the function effectively doesn't use the rvalue and ownership is returned to the caller, which is illustrated by the fact that upkt2.get() is not NULL after returning from the function. The net effect is that the behaviour of EnQueue is inconsistent with its signature.
The question now is whether this is acceptable behaviour, or whether the EnQueue function should be changed to be consistent, and if so, how?
I see three ways of dealing with this.

1. Document that after the function exits, "pkt is left in a valid, but unspecified state." That way, the caller cannot assume anything about the parameter afterwards; if they need it cleared no matter what, they can do it explicitly. If they don't, they will not pay for any internal cleanup they would not use.

2. If you want to make the signature 100% clear on accepting ownership, just take pkt by value instead of by rvalue reference (as suggested by @Quentin in the comments); see the sketch after this list.

3. Construct a temporary from pkt:

if (condition) queue.push_back(std::move(pkt));
else auto sink(std::move(pkt));
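For option 2, a minimal sketch of how the signature could look, reusing the Packet and queue definitions from the question:

// Taking the unique_ptr by value makes the transfer of ownership unconditional:
// the caller's pointer is null after the call, whichever branch is taken.
void EnQueue(bool condition, std::unique_ptr<Packet> pkt) {
    if (condition) queue.push_back(std::move(pkt));
    // else: pkt goes out of scope here and the Packet is destroyed.
}

The call sites stay exactly as in the question (EnQueue(true, std::move(upkt1));), but the caller's pointer is now null afterwards in both branches, so in the question's example upkt2.get() would print 0 as well.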
I recently came across the following code that uses syntax I have never seen before:
std::cout << char('A'+i);
The behavior of the code is obvious enough: it is simply printing a character to stdout whose value is given by the position of 'A' in the ASCII table plus the value of the counter i, which is of type unsigned int.
For example, if i = 5, the line above would print the character 'F'.
I have never seen char used as a function before. My questions are:
Is this functionality specific to C++ or did it already exist in strict C?
Is there a technical name for using the char() keyword as a function?
That is C++ cast syntax. The following are equivalent:
std::cout << (char)('A' + i); // C-style cast: (T)e
std::cout << char('A' + i); // C++ function-style cast: T(e); also, static_cast<T>(e)
Stroustrup's The C++ Programming Language (3rd edition, p. 131) calls the first type a C-style cast, and the second type a function-style cast. In C++, it is equivalent to the static_cast<T>(e) notation. Function-style casts are not available in C.
This is not a function call, it's instead a typecast. More usually it's written as
std::cout << (char)('A'+i);
That makes it clear it's not a function call, but your version does the same. Note that your version is only valid in C++, while the one above works in both C and C++. In C++ you can also be more explicit and write
std::cout << static_cast<char>('A'+i);
instead.
Note that the cast is necessary, because 'A'+i will have type int and would otherwise be printed as an integer. If you want it to be interpreted as a character code, you need the char cast.
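A short illustration of that difference (the printed characters assume an ASCII-compatible character set):

#include <iostream>

int main()
{
    unsigned int i = 5;
    std::cout << ('A' + i) << '\n';     // prints 70: the sum has an integer type
    std::cout << char('A' + i) << '\n'; // prints F: the cast selects the char overload of operator<<
    return 0;
}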
I can't find any clear indication of whether/when the 64-bit value returned by QueryPerformanceCounter() gets reset, or overflows and resets back to zero. Hopefully it never overflows, because 64 bits give space for decades' worth of counting at gigahertz rates. However... is there anything other than a computer restart that will reset it?
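(Back-of-the-envelope: even at a 10 GHz tick rate, 2^64 ticks is roughly 1.8 × 10^9 seconds, i.e. well over 50 years.)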
Empirically, QPC is reset at system startup.
Note that you should not depend on this behavior, since Microsoft do not explicitly state what the "zero point" is for QPC, merely that it is a monotonically increasing value (mod 2^64) that can be used for high precision timing.
Hence they are quite within their rights to modify its behavior at any time. They could, for example, make it return values that match FILETIME values as would be produced by a call to GetSystemTimeAsFileTime(), with the same resolution (a 100 ns tick rate). Under those circumstances, it would never reset. At least not in your or my lifetimes.
That said, the following program, when run on Windows 10 [Version 6.3.16299], produces pairs of identical values that are the system uptime in seconds.
#include <windows.h>
#include <iostream>

// timeGetTime() is declared in the multimedia headers pulled in by <windows.h>
// and lives in winmm.lib; with MSVC this pragma links it automatically.
#pragma comment(lib, "winmm.lib")

int main()
{
    LARGE_INTEGER performanceCount;
    LARGE_INTEGER performanceFrequency;
    QueryPerformanceFrequency(&performanceFrequency);

    for (;;)
    {
        QueryPerformanceCounter(&performanceCount);

        DWORD const systemTicks = timeGetTime();
        DWORD const systemSeconds = systemTicks / 1000;
        __int64 const performanceSeconds = performanceCount.QuadPart / performanceFrequency.QuadPart;

        std::cout << systemSeconds << " " << performanceSeconds << std::endl;
        Sleep(1000);
    }

    return 0;
}
Standard disclaimers apply, your actual mileage may vary, etc. etc. etc.
It seems that some Windows running inside VirtualBox may reset QueryPerformanceCounter every 20 minutes or so: see here.
QPC has become more reliable over time, but for better portability a lower-precision timer such as GetTickCount64 should be used.
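A minimal sketch of the GetTickCount64 alternative (it returns milliseconds since boot as a 64-bit value and requires Windows Vista or later):

#include <windows.h>
#include <iostream>

int main()
{
    // GetTickCount64 reports milliseconds elapsed since the system was started
    // and, being 64-bit, does not wrap around for hundreds of millions of years.
    ULONGLONG const uptimeMs = GetTickCount64();
    std::cout << "uptime: " << uptimeMs / 1000 << " seconds" << std::endl;
    return 0;
}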
How can one convert a shared_ptr that points to a const object into a shared_ptr that points to a non-const object?
I am trying to do the following :
boost::shared_ptr<const A> Ckk(new A(4));
boost::shared_ptr<A> kk=const_cast< boost::shared_ptr<A> > Ckk;
But it does not work.
'boost::const_pointer_cast' will do what you're asking for, but the obligatory second half of the answer is that you probably shouldn't use it. 99% of the time when it seems like you need to cast away the const property of a variable, it means that you have a design flaw. Const is sometimes more than just window dressing and casting it away may lead to unexpected bugs.
Without knowing more details of your situation one can't say for certain. But no discussion of const-cast is complete without mentioning this fact.
Use boost::const_pointer_cast; see the documentation.
The proper way should be this:
boost::shared_ptr<A> kk (boost::const_pointer_cast<A>(Ckk));
std::const_pointer_cast makes a second managed pointer. After the cast you have a writable pointer and the original const pointer. The pointee remains the same. The reference count has been increased by 1.
Note that const_cast is a builtin keyword, but const_pointer_cast is a template function in namespace std.
The writable pointer can then be used to change the value from under the shared_ptr<const T>. IMHO the writable pointer should only persist temporarily on the stack; otherwise there must be a design flaw.
I once wrote a small test program to make this clear to myself, which I adapted for this thread:
#include <memory>
#include <iostream>
#include <cassert>

using namespace std;

typedef shared_ptr<int> int_ptr;
typedef shared_ptr<const int> const_int_ptr;

int main(void)
{
    const_int_ptr Ckk(new int(1));
    assert(Ckk.use_count() == 1);
    cout << "Ckk = " << *Ckk << endl;

    int_ptr kk = const_pointer_cast<int>(Ckk); // obtain a 2nd reference
    *kk = 2;                                   // change the value under the const pointer
    assert(Ckk.use_count() == 2);
    cout << "Ckk = " << *Ckk << endl;          // prints 2
}
Under UNIX or Windows/Cygwin, compile with
g++ -std=c++0x -lm const_pointer_cast.cpp