I have been working on a Windows GUI application, which means that by default it does not get the standard input/output/error handles. This GUI app is also a child process of another GUI app (rendering into an embedded window, with the standard handles used for unified logging). Existing code reused for the child process contains an assertion whose condition is that _fileno(stderr) == 2. This assert of course failed when there were no standard handles at all, with _fileno returning -2 just as the Microsoft documentation says it will. Needing these standard handles, I called the AllocConsole() function to create them before reaching this code. However, with AllocConsole() the assert still fails, because _fileno(stderr) now returns 4. _fileno(stdout) also appears to return 3.
My understanding was that file descriptors are numbered in the order they are created, starting from 0. At least that is what the POSIX documentation says for the same function, as opposed to the Win32 documentation. At the very least it seems the Win32 functions are not POSIX compliant, which by itself would not be surprising. But I also don't see anything in the Microsoft documentation to explain this deviation in behaviour. So if there are initially no standard handles, why would creating them not give them the expected numbering, 0 (stdin), 1 (stdout), 2 (stderr)? And if the numbering of new files starts after the standard ones, should we not expect _fileno(stderr) == 5 rather than 4, since _fileno(stdin) would be 3 rather than _fileno(stdout) == 3?
FYI, I am not looking for a workaround here. I have already swapped wWinMain for wmain and changed the child process's project to be a console application; that way everything works as expected. I'm just looking for an explanation of why the file numbering appears so wrong.
So if there is initially no standard handles why would generating them
not give them the expected numbering, 0 (stdin) 1 (stdout) 2 (stderr)?
According to the AllocConsole documentation:
AllocConsole initializes standard input, standard output, and
standard error handles for the new console. The standard input handle
is a handle to the console's input buffer, and the standard output and
standard error handles are handles to the console's screen buffer. To
retrieve these handles, use the GetStdHandle function.
So there are standard handles after AllocConsole. The reason _fileno returns -2 is just "stdout or stderr is not associated with an output stream".
And if the numbering of new files starts after the standard ones,
should we not expect _fileno(stderr) == 5 rather than 4, since
_fileno(stdin) would be 3 instead of _fileno(stdout) == 3?
freopen will allocate a new file descriptor, starting from 3. The reason _fileno(stderr) == 4 may be that you did not reopen stdin. With all three streams reopened:
if (AllocConsole()) {
    freopen("CONIN$", "r", stdin);
    freopen("CONOUT$", "w", stdout);
    freopen("CONOUT$", "w", stderr);
}
you get 3, 4, and 5, as expected.
It is recommended to use the handle returned by GetStdHandle directly instead of file descriptors.
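For illustration, a minimal sketch of that approach; the LogLine helper name and its error handling are just illustrative, not part of the original code:

#include <windows.h>
#include <string.h>

void LogLine(const char *msg)
{
    HANDLE hErr = GetStdHandle(STD_ERROR_HANDLE);
    if (hErr != NULL && hErr != INVALID_HANDLE_VALUE)
    {
        DWORD written;
        // WriteFile works whether the handle is a console screen buffer
        // or a redirected file/pipe
        WriteFile(hErr, msg, (DWORD)strlen(msg), &written, NULL);
    }
}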
I was trying to implement a unified input interface using the Windows API function ReadFile for my application, which should be able to handle both console input and redirection. It didn't work as expected with console input containing multibyte (e.g. CJK) characters.
According to the Microsoft documentation, for console input handles ReadFile behaves just like ReadConsoleA. (FYI, results are encoded in the console's current code page, so the A family of console functions is acceptable, and there is no ReadFileW since ReadFile works on bytes.) The third and fourth arguments of ReadFile are nNumberOfBytesToRead and lpNumberOfBytesRead respectively, but in ReadConsole they are nNumberOfCharsToRead and lpNumberOfCharsRead. To find out the exact mechanism, I did the following test:
BYTE buf[8];
DWORD len;
BOOL f = ReadFile(in, buf, 4, &len, NULL);
if (f) {
    // Print buf, len
    ReadConsoleW(in, buf, 4, &len, NULL); // check count of remaining characters
    // Print len
}
For input like 字, len is set to 4 first (the character plus CRLF), indicating that the arguments count bytes.
For 文字 or a字, len stays at 4 and only the first 4 bytes of buf are used at first, but the second read does not get the CRLF. Only when more than 3 characters are input does the second read get the unread LF, then CR. This means that ReadFile actually consumes up to 4 logical characters and discards the part of the input after the first 4 bytes.
The behavior of ReadConsoleA is identical to that of ReadFile.
Obviously, this is more likely a bug than a design decision. I did some searching and found related feedback dating back to 2009. It seems that ReadConsoleA and ReadFile used to read data fully from console input, but as this was inconsistent with the ReadFile specification and could cause severe buffer overflows that threatened system processes, Microsoft made a makeshift repair by simply discarding the excess bytes, ignoring support for multibyte charsets. (This is an issue about the behavior after that fix, which limits the buffer to 1 byte.)
Currently the only practical solution I have come up with to make input correct is to check whether the input handle is a console and, if so, process it differently using ReadConsoleW, which adds complexity to the implementation. Are there other ways to get this right?
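Roughly, that workaround looks like the sketch below; the ReadInput helper and the fixed buffer sizes are just illustrative, not part of my real code:

#include <windows.h>

BOOL ReadInput(HANDLE in, char *buf, DWORD bufSize, DWORD *bytesRead)
{
    DWORD mode;
    if (GetConsoleMode(in, &mode)) {
        // console input: read wide characters, then convert to the
        // console code page so the result matches the redirected case
        WCHAR wbuf[512];
        DWORD wlen;
        if (!ReadConsoleW(in, wbuf, 512, &wlen, NULL))
            return FALSE;
        int n = WideCharToMultiByte(GetConsoleCP(), 0, wbuf, (int)wlen,
                                    buf, (int)bufSize, NULL, NULL);
        *bytesRead = (n > 0) ? (DWORD)n : 0;
        return n > 0;
    }
    // redirected input (file/pipe): plain byte-oriented read
    return ReadFile(in, buf, bufSize, bytesRead, NULL);
}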
Maybe I could still keep ReadFile by providing a buffer large enough to hold any input at one time. However, I have no idea how to check or set the input buffer size. (I can only enter 256 characters (254 plus CRLF) in my application on my computer, but cmd.exe allows entering 8,192 characters, so this is a real problem.) It would also be helpful if more information about this could be provided.
P.S.: Maybe _getws could also help, but this question is about the Windows API, and my application needs to use some low-level console functions.
I have a requirement where many threads will call the same shell script to perform some work and then write their output (data as a single text line) to a common text file.
Since many threads will try to write data to the same file, my question is whether Unix provides a default locking mechanism so that they cannot all write at the same time.
Performing a short single write to a file opened for append is mostly atomic; you can get away with it most of the time (depending on your filesystem). But if you want a guarantee that your writes won't interrupt each other, or to write arbitrarily long strings, or to perform multiple writes, or to perform a block of writes and be assured that their contents end up next to each other in the resulting file, then you'll want to lock.
While not part of POSIX (unlike the C library call for which it's named), the flock tool provides the ability to perform advisory locking ("advisory" -- as opposed to "mandatory" -- meaning that other potential writers need to voluntarily participate):
(
    flock -x 99 || exit     # lock the file descriptor
    echo "content" >&99     # write content to that locked FD
) 99>>/path/to/shared-file
The use of file descriptor #99 is completely arbitrary -- any unused FD number can be chosen. Similarly, one can safely put the lock on a different file than the one to which content is written while the lock is held.
The advantage of this approach over several conventional mechanisms (such as using exclusive creation of a file or directory) is automatic unlock: If the subshell holding the file descriptor on which the lock is held exits for any reason, including a power failure or unexpected reboot, the lock will be automatically released.
my question is whether unix provides a default locking mechanism so
that all can not write at the same time.
In general, no. At least not something that's guaranteed to work. But there are other ways to solve your problem, such as lockfile, if you have it available:
Examples
Suppose you want to make sure that access to the file "important" is
serialised, i.e., no more than one program or shell script should be
allowed to access it. For simplicity's sake, let's suppose that it is
a shell script. In this case you could solve it like this:
...
lockfile important.lock
...
access_"important"_to_your_hearts_content
...
rm -f important.lock
...
Now if all the scripts that access "important" follow this guideline,
you will be assured that at most one script will be executing between
the 'lockfile' and the 'rm' commands.
But there's actually a better way if you can use C or C++: use the low-level open call to open the file in append mode, and call write() to write your data, with no locking necessary. Per the write() man page:
If the O_APPEND flag of the file status flags is set, the file offset
shall be set to the end of the file prior to each write and no
intervening file modification operation shall occur between changing
the file offset and the write operation.
Like this:
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

// process-wide global file descriptor
int outputFD = open( fileName, O_WRONLY | O_APPEND | O_CREAT, 0600 );
.
.
.
// write a string to the file
ssize_t writeToFile( const char *data )
{
    return( write( outputFD, data, strlen( data ) ) );
}
In practice, you can write anything to the file - it doesn't have to be a NUL-terminated character string.
That's supposed to be atomic on writes up to PIPE_BUF bytes, which is usually something like 512, 4096, or 5120. Some Linux filesystems apparently don't implement that properly, so you may in practice be limited to about 1K on those file systems.
Can anyone tell me how to use wake_up() in G-WAN?
// tell G-WAN when to run a script again (for the same request)
// type: WK_MS | WK_FD
#define WK_MS 1 // milliseconds
#define WK_FD 2 // file descriptor
void wake_up(char *argv[], int delay_or_fd, int type);
Is it used to replace sleep()?
Look at the examples using these functions - be careful though, the last time I tested them, they didn't work (this has probably been fixed already or might have been a usage error on my part, but nevertheless if you're going to use them, try the examples first and see if they work).
In a nutshell:
with WK_MS this behaves much like the sleep function, with the difference that your function is called again after the time has elapsed (as opposed to continuing where you called it), and execution continues after the wake_up call. So it's more like "execute me again after X ms" (see the sketch after this list).
with WK_FD your script should be called again as soon as there is new data on the provided file descriptor (useful e.g. for tailing a self-built log mechanism, or theoretically for realtime communication like websockets, but I never got CLIENT_SOCKET working with this, so be careful to check beforehand that whatever you pass really is a file descriptor)
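For the WK_MS case, a minimal, untested sketch of a servlet using it might look like the following; the gwan.h header, the main(argc, argv) entry point, and the return value convention are assumptions based on the usual G-WAN servlet layout, so check the shipped examples first:

#include "gwan.h"   // assumed G-WAN servlet header providing wake_up(), WK_MS

int main(int argc, char *argv[])
{
    // ... do one slice of the work for this request ...

    // "execute me again after 100 ms" for the same request, instead of
    // blocking the worker thread with sleep()
    wake_up(argv, 100, WK_MS);

    // returning an HTTP code; what G-WAN does with it while a wake_up is
    // pending is an assumption here, not something verified
    return 200;
}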
I'm building a Win32 GUI app. Inside that app, I'm using a DLL that was intended to be used in a command line app.
Suppose Foo.exe is my GUI app, and bar() is a function in the DLL that prints "hello" to stdout. Foo.exe calls bar().
If I run Foo.exe from the command line with a redirect (>) (i.e. Foo.exe > out.txt) it writes "hello" to out.txt and exits normally (as expected).
However, if I run Foo.exe without a redirect (either from cmd.exe or by double-clicking in Windows Explorer), it crashes when bar() is called.
If I run Foo.exe inside the debugger with the redirect in the command line (set through VS's properties for the project) and call "GetStdHandle(STD_OUTPUT_HANDLE)", I get a reasonable address for a handle. If I call it without the redirect in the command line, I get 0.
Do I need something to "initialize" standard out? Is there a way that I can set up this redirect in the application startup? (Redirecting to a file would be ideal. But just throwing out the data printed by the DLL would be okay, too.)
Finally, I suspect that the DLL is writing to stdout through the CRT POSIX-like API, because it is a cross-platform DLL. I don't know if this matters.
I've tried creating a file with CreateFile and calling SetStdHandle, but that doesn't seem to work. I may be creating the file incorrectly, however. See code below.
HANDLE hStdOut = GetStdHandle(STD_OUTPUT_HANDLE);
// hStdOut is zero

HANDLE hFile;
hFile = CreateFile(TEXT("something.txt"),  // name of the file to write
                   GENERIC_WRITE,          // open for writing
                   0,                      // do not share
                   NULL,                   // default security
                   CREATE_NEW,             // create new file only
                   FILE_ATTRIBUTE_NORMAL,  // normal file
                   NULL);                  // no attr. template

BOOL r = SetStdHandle(STD_OUTPUT_HANDLE, hFile);
hStdOut = GetStdHandle(STD_OUTPUT_HANDLE);
// hStdOut is now equal to hFile, and r is 1

bar();
// crashes if there isn't a redirect in the program arguments
UPDATE: I just found this article: http://support.microsoft.com/kb/105305. It states "Note that this code does not correct problems with handles 0, 1, and 2. In fact, due to other complications, it is not possible to correct this, and therefore it is necessary to use stream I/O instead of low-level I/O."
My DLL definitely uses file handles 0,1 and 2. So, there may be no good solution to this problem.
I'm working on a solution that checks for this case, and re-launches the exe appropriately using CreateProcess. I'll post here when I'm done.
The solution that I've found is the following:
Obtain, in some way, a valid file HANDLE to which the standard output will be directed.
Let's call the file handle "fh".
(Please note that on Windows a file HANDLE is not the same thing as a file descriptor.)
Associate a file descriptor with the file handle using _open_osfhandle
(see http://msdn.microsoft.com/en-us/library/kdfaxaay.aspx for details).
Let's call the new file descriptor "fd" (an int value).
Call dup2 to associate STDOUT_FILENO with the given file descriptor:
dup2(fd, STDOUT_FILENO)
Create a file stream associated with the stdout file descriptor:
FILE* f = _fdopen(STDOUT_FILENO, "w");
Set stdout to the contents of f:
*stdout = *f
Call SetStdHandle on the given file handle:
SetStdHandle(STD_OUTPUT_HANDLE, fh);
Please note that I've not tested exactly this sequence, but something slightly different.
I don't know whether some of the steps are redundant.
In any case, the following article explains very well the concepts of file handle, file descriptor, and file stream:
http://dslweb.nwnexus.com/~ast/dload/guicon.htm
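Putting those steps together, a rough, untested sketch might look like this; the file name and flags are illustrative, on MSVC the POSIX names become _dup2 and descriptor 1, and the *stdout = *f trick relies on older CRTs where stdout is a plain FILE object:

#include <windows.h>
#include <io.h>
#include <fcntl.h>
#include <stdio.h>

static void RedirectStdoutToFile(void)
{
    // step 1: obtain a valid file HANDLE ("fh") to receive standard output
    HANDLE fh = CreateFile(TEXT("stdout.log"), GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (fh == INVALID_HANDLE_VALUE)
        return;

    // step 2: associate a CRT file descriptor ("fd") with the handle
    int fd = _open_osfhandle((intptr_t)fh, _O_TEXT);

    // step 3: make descriptor 1 (STDOUT_FILENO) refer to it
    _dup2(fd, 1);

    // steps 4-5: rebuild the stdout stream on top of descriptor 1
    FILE *f = _fdopen(1, "w");
    *stdout = *f;
    setvbuf(stdout, NULL, _IONBF, 0); // optional: flush immediately

    // step 6: make GetStdHandle(STD_OUTPUT_HANDLE) return the same handle
    SetStdHandle(STD_OUTPUT_HANDLE, fh);
}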
You must build foo.exe as a console application with the /SUBSYSTEM:CONSOLE linker switch. Windows will then allocate a console (stdout) for your application automatically, which can be:
The current console
A redirection to a file
A pipe to another program's STDIN
If you build foo.exe as a GUI application, the console is not allocated by default, which explains the crash.
If you must use the GUI subsystem, it can still be done with AllocConsole. This old WDJ article has sample code to help you.
Can you tell me which library you use? This problem has a good solution. Write a small stub launcher EXE (built as a GUI app but with NO windows!) that has your icon and that all shortcuts launch. Make this stub EXE CreateProcess the real EXE with output redirected to "NUL" or "CON", or CreateProcess() it suspended and take its STDOUT, doing nothing with it. This way your original EXE works without a visible console, but actually has somewhere to write - handles 0, 1 and 2 are provided by the invisible parent stub EXE. Note that killing the parent EXE may make the child lose its handles - and crash.
You may end up with two processes in Task Manager, so you can try making these two processes a job, like Google Chrome does.
On your question "Do I need something to 'initialize' standard out?" - only your parent / launcher can pre-initialize your STDOUT for handles 0, 1 and 2 properly.
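As an illustration of the stub-launcher idea; the "FooReal.exe" name and the use of the NUL device are assumptions, and error handling is mostly omitted:

#include <windows.h>

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE hPrev, PWSTR pCmdLine, int nCmdShow)
{
    // inheritable handle to the NUL device, so the child's handles 0, 1, 2
    // have somewhere to go even though no console is ever shown
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
    HANDLE hNul = CreateFileW(L"NUL", GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, &sa,
                              OPEN_EXISTING, 0, NULL);

    STARTUPINFOW si = { sizeof(si) };
    si.dwFlags    = STARTF_USESTDHANDLES;
    si.hStdInput  = hNul;
    si.hStdOutput = hNul;
    si.hStdError  = hNul;

    PROCESS_INFORMATION pi;
    if (CreateProcessW(L"FooReal.exe", NULL, NULL, NULL,
                       TRUE /* inherit handles */, 0, NULL, NULL, &si, &pi))
    {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    if (hNul != INVALID_HANDLE_VALUE)
        CloseHandle(hNul);
    return 0;
}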
Is it possible to somehow change the handles used by the standard I/O functions on Windows? The preferred language is C++. If I understand it right, by selecting a console project the compiler just pre-allocates a console for you and makes all standard I/O functions work with its handle. So what I want to do is let one console app actually write into another app's console buffer. I thought I could get the first app's console handle, pass it to the second app through a file (I don't know much about interprocess communication, and this seems easy), and then somehow use, for example, printf with the first app's handle.
Can this be done? I know how to get a console handle, but I have no idea how to redirect printf to that handle. It's just a study-purpose project to understand more about how the OS works behind this. I am interested in how printf knows which console it is associated with.
If I understand you correctly, it sounds like you want the Windows API function AttachConsole(pid), which attaches the current process to the console owned by the process whose PID is pid.
If I understand you correctly, you can find the source code of the application you want to write at http://msdn.microsoft.com/en-us/library/ms682499%28VS.85%29.aspx. This example shows how to write to another application's stdin and read its stdout.
For general understanding: the compiler doesn't "pre-allocate a console for you". The compiler uses the standard C/C++ libraries, which write to the output. So if you use, for example, printf(), the code that gets executed in the end will look something like this:
void Output (PCWSTR pszwText, UINT uTextLength) // uTextLength is the length in characters
{
    DWORD n;
    UINT uCodePage = GetOEMCP(); // CP_OEMCP, CP_THREAD_ACP, CP_ACP
    PSTR pszText = _alloca (uTextLength);

    // the console typically does not use UNICODE, so convert first
    if (WideCharToMultiByte (uCodePage, 0, pszwText, uTextLength,
                             pszText, uTextLength, NULL, NULL) != (int)uTextLength)
        return;

    WriteFile (GetStdHandle (STD_OUTPUT_HANDLE), pszText, uTextLength, &n, NULL);
    //_tprintf (TEXT("%.*ls"), uTextLength, pszText);
    //_puttchar();
    //fwrite (pszText, sizeof(TCHAR), uTextLength, stdout);
    //_write (
}
So if one changes the value of STD_OUTPUT_HANDLE, all output will go to a file/pipe and so on. If the program used the WriteConsole function instead of WriteFile, such redirection would not work, but the standard C/C++ library doesn't do that.
If you want to redirect the stdout of the current process rather than that of a child process, you can call SetStdHandle() directly (see http://msdn.microsoft.com/en-us/library/ms686244%28VS.85%29.aspx).
The "allocating of console" do a loader of operation system. It looks the word of binary EXE file (in the Subsystem part of IMAGE_OPTIONAL_HEADER see http://msdn.microsoft.com/en-us/library/ms680339%28VS.85%29.aspx) and if the EXE has 3 on this place (IMAGE_SUBSYSTEM_WINDOWS_CUI), than it use console of the parent process or create a new one. One can change a little this behavior in parameters of CreateProcess call (but only if you start child process in your code). This Subsystem flag of the EXE you define with respect of linker switch /subsystem (see http://msdn.microsoft.com/en-us/library/fcc1zstk%28VS.80%29.aspx).
If you want to redirect printf to a handle (FILE*), just do
fprintf(handle, "...");
For example replicating printf with fprintf
fprintf(stdout, "...");
Or error reporting
fprintf(stderr, "FATAL: %s fails", "smurf");
This is also how you write to files. fprintf(file, "Blah.");