I am writing my own version of DebugView using this article: https://www.codeproject.com/Articles/23776/Mechanism-of-OutputDebugString as a starting point.
The code appears to work fine. However I do not understand the use of the named mutex "DBWinMutex".
This mutex is opened at the beginning of the code:
    CComBSTR DBWinMutex = L"DBWinMutex";
    HANDLE m_hDBWinMutex = ::OpenMutex(MUTEX_ALL_ACCESS,
                                       FALSE,
                                       DBWinMutex);
and not closed before the end of the program!?
I find this strange. I would think that the mutex would have to be locked and unlocked repeatedly so that OutputDebugString could write to the shared memory "DBWIN_BUFFER"?
However I am able to read OutputDebugString messages written by other programs so the mutex does not appear to lock "DBWIN_BUFFER" for writing.
I can also run DebugView in parallel with my own DebugView implementation and both can read OutputDebugString messages, so it seems the mutex does not grant exclusive read access to "DBWIN_BUFFER" either.
Using the MUTEX_ALL_ACCESS access as above means I have to run the program as administrator.
When I replace this with SYNCHRONIZE access the program appears to function exactly the same except that I do not have to run it as administrator.
Is this OK or may it cause some subtle bug?
Also, I test the return value from OpenMutex above and, if it is null, call CreateMutex.
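In code, what I have now looks roughly like this (SYNCHRONIZE instead of MUTEX_ALL_ACCESS, plus the CreateMutex fallback):

    HANDLE m_hDBWinMutex = ::OpenMutexW(SYNCHRONIZE, FALSE, L"DBWinMutex");
    if (m_hDBWinMutex == NULL)
    {
        // The mutex does not exist yet, so create it instead.
        m_hDBWinMutex = ::CreateMutexW(NULL, FALSE, L"DBWinMutex");
    }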
As described in the article you linked to, DBWinMutex is used only by OutputDebugString() itself, to prevent multiple threads from writing to the output buffer at the same time. It is not necessary for a debug monitor to use DBWinMutex at all.
(The article illustrates the mechanism with a diagram, but that diagram contains a mistake; the corrected diagrams are not reproduced here.)
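For reference, here is a minimal sketch of a monitor loop that works without touching DBWinMutex at all; it only creates (or opens) the shared section and the two events that OutputDebugString() uses, with the object names and buffer layout described in the article. Error handling and security-descriptor details are deliberately omitted.

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        // Shared section written by OutputDebugString(): a DWORD process id
        // followed by the ANSI message text (4096 bytes total).
        HANDLE hBuffer = ::CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                              PAGE_READWRITE, 0, 4096,
                                              L"DBWIN_BUFFER");
        void* pView = ::MapViewOfFile(hBuffer, FILE_MAP_READ, 0, 0, 0);

        // Auto-reset events used for the handshake; no DBWinMutex needed here.
        HANDLE hBufferReady = ::CreateEventW(NULL, FALSE, FALSE, L"DBWIN_BUFFER_READY");
        HANDLE hDataReady   = ::CreateEventW(NULL, FALSE, FALSE, L"DBWIN_DATA_READY");

        const DWORD* pPid = static_cast<const DWORD*>(pView);
        const char*  pMsg = static_cast<const char*>(pView) + sizeof(DWORD);

        for (;;)
        {
            ::SetEvent(hBufferReady);                     // tell writers the buffer is free
            ::WaitForSingleObject(hDataReady, INFINITE);  // wait for the next message
            printf("[%lu] %s", *pPid, pMsg);              // message usually ends with its own newline
        }
    }

Whether you open DBWinMutex with SYNCHRONIZE or not does not change this loop, since it never needs to acquire the mutex.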
I am working on a tool which writes data to files.
At some point, a file might be "locked" and is not writable until other handles have been closed.
I could use the CreateFile API in a loop until the file is available for writing access.
But I have two concerns about calling CreateFile in a loop:
Polling keeps the hard drive (and its cache) busy the whole time.
Once the file becomes available I would have to call CreateFile yet again, with different flags, to obtain a valid write handle.
So my question is:
What is the best solution to wait for a file to be writable and instantly get a valid handle?
Are there any event-based solutions, or anything else that lets me "queue/reserve" for a handle once, so that there is no uncontrolled race with other processes?
A file can be "locked" for two reasons:
An actual file lock which prevents writing to, and possibly reading from the file.
The file being opened without sharing access (accidentally or deliberately), which even prevents you from opening a handle. If you already see CreateFile failing, that's likely the case rather than a real lock.
There are conceptually[1] at least two ways of knowing that no other process has locked a file without busy waiting:
By finding out who holds locks and waiting on the process or thread to exit (or, by outright killing them...)
By locking the file yourself
Who holds locks?
Finding out about lock owners is rather nasty. You can do it via the totally undocumented SystemLocksInformation class used with the undocumented NtQuerySystemInformation function (the latter is merely undocumented, but the former is so undocumented that it is really hard to find any information about it at all). The returned structure is explained here, and it contains an owning thread id.
Luckily, holding a lock presumes holding a handle. Closing the file handle will unlock all file ranges. Which means: No lock without handle.
In other words, the problem can also be expressed as "who is holding an open handle to the file?". Of course not all processes that hold a handle to a file will have the file locked, but no process having a handle guarantees that no process has the file locked.
Code for finding out which processes have a file open is much easier (using restart manager) and is readily available at Raymond Chen's site.
Now that you know which processes and threads are holding file handles and locks, open a handle to each of those processes and use WaitForMultipleObjects on the list of process handles. When a process exits, all of its handles are closed.
This also transparently deals with the possibility of a "lock" because a process does not share access.
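Putting those two steps together, a rough sketch along the lines of Raymond Chen's Restart Manager example might look like the following (error handling, PID-reuse checks via ProcessStartTime, and the 64-handle limit of WaitForMultipleObjects are all glossed over):

    #include <windows.h>
    #include <RestartManager.h>
    #include <vector>
    #pragma comment(lib, "Rstrtmgr.lib")

    // Wait until every process that currently has 'path' open has exited.
    void WaitForFileToBeReleased(PCWSTR path)
    {
        DWORD session;
        WCHAR sessionKey[CCH_RM_SESSION_KEY + 1] = {};
        RmStartSession(&session, 0, sessionKey);
        RmRegisterResources(session, 1, &path, 0, NULL, 0, NULL);

        UINT needed = 0, count = 0;
        DWORD reasons;
        RmGetList(session, &needed, &count, NULL, &reasons);   // ask how many entries exist

        std::vector<HANDLE> procs;
        if (needed > 0)
        {
            std::vector<RM_PROCESS_INFO> info(needed);
            count = needed;
            RmGetList(session, &needed, &count, info.data(), &reasons);

            for (UINT i = 0; i < count; ++i)
            {
                HANDLE h = OpenProcess(SYNCHRONIZE, FALSE, info[i].Process.dwProcessId);
                if (h) procs.push_back(h);
            }
        }

        if (!procs.empty())   // a process's handles are closed when it exits
            WaitForMultipleObjects((DWORD)procs.size(), procs.data(), TRUE, INFINITE);

        for (HANDLE h : procs) CloseHandle(h);
        RmEndSession(session);
    }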
Locking the file yourself
You can use LockFileEx, which operates asynchronously. Note that LockFileEx needs a valid handle that has been opened with either read or write permissions (getting write permission may not be possible, but read should work almost always -- even if you are prevented from actually reading by an exclusive lock, it's still possible to create a handle that could read if there was no lock).
You can then wait for the asynchronous locking to complete either via the event in the OVERLAPPED structure or on a completion port, and can do other useful work in the meantime, too. Once you have locked the file, you know that nobody else has it locked.
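A rough sketch of that approach, assuming the file can at least be opened for reading and leaving out error handling:

    #include <windows.h>

    // Block until we can take an exclusive lock on the whole file.
    HANDLE WaitForExclusiveLock(PCWSTR path)
    {
        // Read access plus full sharing is usually grantable even while others
        // have the file open; FILE_FLAG_OVERLAPPED enables asynchronous locking.
        HANDLE hFile = CreateFileW(path, GENERIC_READ,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                   NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
        if (hFile == INVALID_HANDLE_VALUE)
            return INVALID_HANDLE_VALUE;

        OVERLAPPED ov = {};                       // lock the range starting at offset 0
        ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);

        // Request an exclusive lock on the whole file; without
        // LOCKFILE_FAIL_IMMEDIATELY the request is queued until it can be granted.
        if (!LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0,
                        MAXDWORD, MAXDWORD, &ov))
        {
            if (GetLastError() == ERROR_IO_PENDING)
                WaitForSingleObject(ov.hEvent, INFINITE);   // or use a completion port
        }

        CloseHandle(ov.hEvent);
        return hFile;   // closing this handle releases the lock
    }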
[1] The wording "conceptually" suggests that I am pretty sure either method will work, but I have not tested them.
Apart from a busy loop that repeatedly tries to open the file with write access, which doesn't smell right (what if the file is locked by a process that is stuck and requires a reboot or manual termination? You'll never be able to write to it), there are a few alternatives.
You could write to a temporary file and rename it afterwards (you can tell the OS that a file rename operation is required and it will perform it at the next boot). If you need to append rather than overwrite, you'll have to have a process append your temporary file to the correct one, possibly at startup (write instructions for which file to append where into a file that your process reads).
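A sketch of that idea using MoveFileEx (the delayed variant requires administrator rights and only takes effect at the next boot):

    #include <windows.h>

    // Write the new content to tmpPath first, then swap it into place.
    bool SwapInNewFile(PCWSTR tmpPath, PCWSTR targetPath)
    {
        if (MoveFileExW(tmpPath, targetPath, MOVEFILE_REPLACE_EXISTING))
            return true;   // the target was not locked, done

        // Otherwise ask the OS to perform the rename at the next reboot
        // (requires administrator rights; queued in PendingFileRenameOperations).
        return MoveFileExW(tmpPath, targetPath,
                           MOVEFILE_REPLACE_EXISTING | MOVEFILE_DELAY_UNTIL_REBOOT) != 0;
    }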
If you need to modify a locked file, then you'll just have to take a lock on it as soon as you can, and refuse to start the program if you don't have write access - warn the user right at the start.
There is a possibility that you can wait in a better way: if a file is locked for writing, you can assume that someone is going to write to it, so use FindFirstChangeNotification to receive the FILE_NOTIFY_CHANGE_LAST_WRITE or FILE_NOTIFY_CHANGE_ATTRIBUTES events. It's not perfect, in that someone could request exclusive access for reading too.
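A sketch of that wait: watch the directory containing the file and retry the open after each change notification (still a retry loop, but one that sleeps instead of spinning):

    #include <windows.h>

    // Retry opening 'path' for writing whenever something in its directory changes.
    HANDLE OpenWhenWritable(PCWSTR dir, PCWSTR path)
    {
        HANDLE hChange = FindFirstChangeNotificationW(dir, FALSE,
            FILE_NOTIFY_CHANGE_LAST_WRITE | FILE_NOTIFY_CHANGE_ATTRIBUTES);

        for (;;)
        {
            HANDLE hFile = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if (hFile != INVALID_HANDLE_VALUE)
            {
                FindCloseChangeNotification(hChange);
                return hFile;                     // got write access
            }

            // Sleep until the directory reports a change, then try again.
            WaitForSingleObject(hChange, INFINITE);
            FindNextChangeNotification(hChange);
        }
    }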
I suppose you could try to get the handle to the file that is locked and wait on that, so that when it is released your WaitForSingleObject returns. However, there's a good chance the security subsystem will not allow you to obtain a handle owned by a different process.
Specifically, if the following events take place in the given order:
Process 1 opens a file in append mode.
Process 2 opens the same file in append mode.
Process 2 gets an exclusive lock using flock(2) on the file descriptor.
Process 1 attempts to write to the file.
What happens?
Will the write return immediately with a code indicating failure? Will it hang until the lock is released, then write and return success? Does the behavior vary by kernel? It seems odd that the documentation doesn't cover this case.
(I could write a couple processes to test it on my system, but I don't know whether my test would be representative of the general case, and if anyone does know, I can anticipate this answer saving a lot of other people a lot of time.)
The write proceeds as normal. flock provides advisory locking. Locking a file exclusively only prevents others from getting a shared or exclusive lock on the same file. Calls other than flock are not affected.
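A small sketch that shows this behaviour (run the first program, start the second so it holds the exclusive lock, then press Enter in the first):

    /* Process 1: open for append, wait, then write while process 2 holds the lock. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = open("shared.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
        getchar();                                /* press Enter once process 2 has its lock */
        const char msg[] = "still writes fine\n";
        ssize_t n = write(fd, msg, sizeof msg - 1);
        printf("write returned %zd\n", n);        /* succeeds despite the exclusive flock */
        return 0;
    }

    /* Process 2 (separate program): take an exclusive advisory lock and hold it.
    #include <sys/file.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("shared.log", O_WRONLY | O_APPEND);
        flock(fd, LOCK_EX);     // exclusive advisory lock
        pause();                // keep holding it
        return 0;
    }
    */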
I'm programming a Windows console application in plain C and using PeekConsoleInput/ReadConsoleInput to get keystrokes from the user and process them.
I need to get the current state of the Caps Lock, Scroll Lock, and Num Lock keys when the program starts, before the user has entered anything. Meaning there would be no KEY_EVENTs in the message queue to process.
Is this possible to do? If so, how? I've looked at most of the functions in wincon.h and nothing seems appropriate.
You can call GetAsyncKeyState three times, and it will usually work, but there are a few cases where it still won't work for you. The arguments for your three calls would be VK_CAPITAL, VK_SCROLL, and VK_NUMLOCK.
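For illustration, a minimal sketch; this variant reads the toggle state with GetKeyState, whose low-order bit reports whether the lock is on (you can substitute the GetAsyncKeyState calls suggested above, with the caveats mentioned):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Low-order bit of GetKeyState() is the toggle state of the key. */
        int caps   = GetKeyState(VK_CAPITAL) & 0x0001;
        int scroll = GetKeyState(VK_SCROLL)  & 0x0001;
        int num    = GetKeyState(VK_NUMLOCK) & 0x0001;

        printf("Caps Lock: %s, Scroll Lock: %s, Num Lock: %s\n",
               caps ? "on" : "off", scroll ? "on" : "off", num ? "on" : "off");
        return 0;
    }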
I would like the command gdb on program X to instead switch to an existing debugging session of X if it already exists instead of signalling an error "This program is already being debugged" in gud-common-init.
I believe this is important because it makes the behaviour of gdb harmonize with the standard behaviour of most other Emacs interactions, such as find-file and switch-to-buffer, and thus causes less confusion for the user.
So far I have modified the line containing
(error "This program is already being debugged"))
to instead do
(message "This program is already being debugged")
to at least prevent the error from being raised. However, the gdb function then performs some extra initialization that should not be needed and causes unnecessary delays. Is this a todo item, or have I missed some gud/gdb function that already does this?
Many thanks in advance,
Per Nordlöw
You can always rename-buffer. This is how I can run multiple gdb sessions on the same executable. It is not automatic but it is an effective work around.
For example, if my executable is called pump, then upon running gdb a buffer named *gud-pump* is created, which represents the gdb session. From this buffer do M-x rename-buffer and rename it to *gud-pump1*.
Then invoke gdb again and you will have two GUD sessions, *gud-pump* and *gud-pump1*. The sessions are separate and should not interfere with each other (although they can interact).
My question is related to "Turn off buffering in pipe" albeit concerning Windows rather than Unix.
I'm writing a Make clone, and to stop parallel processes from thrashing each other's console output I've redirected the output to pipes (as described here) on which I can do any filtering I want. Unfortunately, long-running processes now buffer up their output rather than sending it in real time as they would on a console.
From peeking at the MSVCRT sources it seems the root cause is that GetFileType() is used to check whether the standard I/O handles are attached to a console, which then sets an internal flag and ends up disabling buffering.
Apparently a separate array of inheritable file handles and flags can also be passed to the child through the undocumented lpReserved2 member of the STARTUPINFO structure when creating the process. About the only working solution I've figured out is to use this list and simply lie about the device type when setting the flags for stdout/stderr.
Now then... Is there any sane way of solving this problem?
There is not. Yes, GetFileType() tells it that stdout is no longer a char device, so _isatty() returns false and the CRT switches the output stream to buffered mode. That is important for getting reasonable throughput; flushing output one character at a time is only acceptable when a human is watching it.
You would have to relink the programs you are trying to redirect with a customized version of the CRT. I don't doubt that if that was possible, you wouldn't be messing with this in the first place. Patching GetFileType() is another un-sane solution.
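For completeness, if you do control the source of the child programs, they can perform the same check themselves and turn buffering back off; a sketch:

    #include <windows.h>
    #include <io.h>
    #include <stdio.h>

    int main(void)
    {
        /* Mirror the CRT's own check: a console handle is a character device. */
        HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
        int redirected = (GetFileType(hOut) != FILE_TYPE_CHAR) || !_isatty(_fileno(stdout));

        if (redirected)
            setvbuf(stdout, NULL, _IONBF, 0);   /* force unbuffered output into the pipe */

        printf("this line is flushed immediately even when stdout is a pipe\n");
        return 0;
    }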