Is dialogue possible through pipes between mother and child process on Windows? - windows

I use CreateProcess and CreatePipe to spawn a child process and set up pipes between mother and child to communicate through. Then I use WriteFile to write to the write handle of the child's input pipe, and ReadFile to read from the read handle of the child's output pipe. After I have finished writing to the child, I call CloseHandle on the write handle of the input pipe.
This all works well. However, I don't want it to work like this. I want to feed one line to the child, have the child compute something and output the result as a line of output, and then read that line of output in the mother. Then feed another line of input to the child, and so on.
Unfortunately, when I skip the CloseHandle call, the two processes hang and nothing happens. So how can I reuse the pipes and avoid closing them? If I close them I have to create the child process again, right? That's a heavy operation, I suppose, and I really want to avoid it. Is there a good solution using pipes? I want the child process to run indefinitely and the communication to be a dialogue, alternating between writes and reads.

I solved it by using the Win32 ReadFile and WriteFile functions instead of the standard C I/O functions in the child code. Here is the child code:
#include <windows.h>
#include <io.h>                                   // _get_osfhandle
int main(void)
{
    HANDLE inp = (HANDLE)_get_osfhandle(0);       // child's stdin
    HANDLE out = (HANDLE)_get_osfhandle(1);       // child's stdout
    char buffer[0x400];
    unsigned long N;
    // Echo every chunk read from stdin straight back to stdout.
    while (ReadFile(inp, buffer, sizeof(buffer), &N, NULL) && N > 0)
    {
        WriteFile(out, buffer, N, &N, NULL);
    }
    return 0;
}
And here is the mother code:
process app("child.exe");
app.write(string("hello\n"));
app.read().print();
app.write(string("world\n"));
app.read().print();
It prints:
hello
world
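For reference, the mother side can also be written directly against the Win32 API instead of the process/string wrapper classes above; here is a minimal sketch of the same dialogue with error handling omitted (child.exe is the echo child shown earlier):

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   // pipe handles are inheritable
    HANDLE childStdinRead, childStdinWrite;
    HANDLE childStdoutRead, childStdoutWrite;

    CreatePipe(&childStdinRead, &childStdinWrite, &sa, 0);
    CreatePipe(&childStdoutRead, &childStdoutWrite, &sa, 0);
    // Keep the mother's ends out of the child by clearing their inherit flag.
    SetHandleInformation(childStdinWrite, HANDLE_FLAG_INHERIT, 0);
    SetHandleInformation(childStdoutRead, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOA si = { sizeof(si) };
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdInput  = childStdinRead;
    si.hStdOutput = childStdoutWrite;
    si.hStdError  = childStdoutWrite;

    PROCESS_INFORMATION pi;
    char cmd[] = "child.exe";                  // the echo child shown above
    CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);

    // The mother must close the child's ends; otherwise ReadFile never reports
    // end of data and the dialogue below can deadlock.
    CloseHandle(childStdinRead);
    CloseHandle(childStdoutWrite);

    const char *lines[] = { "hello\n", "world\n" };
    char buffer[0x400];
    DWORD written, read;
    for (int i = 0; i < 2; ++i)
    {
        WriteFile(childStdinWrite, lines[i], (DWORD)strlen(lines[i]), &written, NULL);
        // Assumes each reply arrives in a single read, which is fine for this toy protocol.
        if (ReadFile(childStdoutRead, buffer, sizeof(buffer) - 1, &read, NULL) && read > 0)
        {
            buffer[read] = '\0';
            printf("%s", buffer);
        }
    }

    CloseHandle(childStdinWrite);              // now the child sees EOF and exits
    CloseHandle(childStdoutRead);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}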

Related

Windows DuplicateHandle named pipe handle strange error 183 "file already exists"

I have a problem with DuplicateHandle (Win32). I try to duplicate a named pipe handle and I always get error 183, "file already exists". I do not understand this error message, because I am trying to create a copy of a file handle, and the new file handle does not exist beforehand. (Is there a start value required to overwrite?) This is my call:
return DuplicateHandle (MeshellProcessHandle, sourcehandle, HelperProcess, targethandle, 0, TRUE, DUPLICATE_SAME_ACCESS) != 0;
To understand what I am doing, I have to explain more extensively: I am working on a convenient editor front end for the command line program cmd.exe. This project already works fine on the OS/2 operating system, whose API is very similar to Win32, because historically the two operating systems were developed together until about a year before release, when Microsoft and IBM went their separate ways.
The implementation of this program was quite tricky: there is a windowed front-end editor program. This program creates named pipes for stdin, stdout and stderr, but from the reverse point of view (output from cmd.exe is input for the editor). Because of limited communication between different sessions, I had to write a "cmd helper program", a tiny command-line program that makes several API calls and runs in the same session as cmd.exe. The helper gets the editor's process ID via a command-line parameter, opens the existing pipes created by the windowed editor program, and then redirects stdin/stdout/stderr to the pipes. The helper obtains the editor's process handle from the editor's process ID via the OpenProcess API call. Then the helper executes cmd.exe, which automatically inherits the stdin/stdout/stderr handles, and now cmd.exe writes to and reads from the pipes.
Another option would be to pass the full pipe names to cmd.exe without using DuplicateHandle, but I would prefer to stay as close as possible to my solution, which already works fine on OS/2.
I am still not sure why I have no access rights to duplicate the pipe handles, but I have found another solution: my helper console program starts the child process (cmd.exe), and within this child process I want to use the named pipes instead of stdin/stdout/stderr - this is the reason why I wanted to use DuplicateHandle. Windows offers a convenient solution when starting the child process by using
CreateProcess (...)
With CreateProcess, you always pass parameters in a STARTUPINFO structure, and there are three handle fields you can set to redirect stdin/stdout/stderr to the three named pipes cmd_std*:
STARTUPINFO StartupInfo;
HFILE cmd_stdout, cmd_stdin, cmd_stderr;
//Open the existing pipes with CreateFile
//Start child process program
StartupInfo.hStdOutput = &cmd_stdout;
StartupInfo.hStdInput = &cmd_stdin;
StartupInfo.hStdError = &cmd_stderr;
CreateProcess (..., &StartupInfo, ...);
This solution needs much less code than the variant with DuplicateHandle, because there I would also have to save and restore the original file handles; this way replaces nine DuplicateHandle calls.
I have now programmed the changes, but my new code described above does not work. If I set the "handle inheritance flag" in CreateProcess to TRUE, cmd.exe does not get executed, yet CreateProcess returns TRUE (= OK). Here's my detailed code:
STARTUPINFO StartupInfo;
PROCESS_INFORMATION ProcessInformation;
HFILE cmd_stdout, cmd_stdin, cmd_stderr;
char *pprogstr, *pargstr;
//Open the existing pipes with CreateFile
//Start child process program cmd.exe
StartupInfo.hStdOutput = &cmd_stdout;
StartupInfo.hStdInput = &cmd_stdin;
StartupInfo.hStdError = &cmd_stderr;
StartupInfo.dwFlags = STARTF_USESTDHANDLES;
pprogstr = "C:\\WINDOWS\\system32\\cmd.exe";
pargstr = "/K";
CreateProcess (
pprogstr, // pointer to name of executable module
pargstr, // pointer to command line string
NULL, // pointer to process security attributes
NULL, // pointer to thread security attributes
TRUE, // handle inheritance flag
0, // creation flags
NULL, // pointer to new environment block
NULL, // pointer to current directory name
&StartupInfo, // pointer to STARTUPINFO
&ProcessInformation); // pointer to PROCESS_INFORMATION
Any idea?
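For reference, the documented pattern for this kind of redirection assigns the pipe HANDLE values themselves to hStdInput/hStdOutput/hStdError, sets cb and STARTF_USESTDHANDLES, and opens the pipes with inheritable handles; here is a minimal sketch with illustrative pipe names and no error handling:

#include <windows.h>

// Minimal sketch: launch cmd.exe with its standard handles redirected to three
// already-existing named pipes. The pipe names are illustrative only.
static BOOL StartCmdOnPipes(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   // handles must be inheritable

    HANDLE cmd_stdin  = CreateFileA("\\\\.\\pipe\\cmd_stdin",  GENERIC_READ,  0, &sa, OPEN_EXISTING, 0, NULL);
    HANDLE cmd_stdout = CreateFileA("\\\\.\\pipe\\cmd_stdout", GENERIC_WRITE, 0, &sa, OPEN_EXISTING, 0, NULL);
    HANDLE cmd_stderr = CreateFileA("\\\\.\\pipe\\cmd_stderr", GENERIC_WRITE, 0, &sa, OPEN_EXISTING, 0, NULL);

    STARTUPINFOA si;
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);                      // cb must be set
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdInput  = cmd_stdin;               // HANDLE values, not pointers
    si.hStdOutput = cmd_stdout;
    si.hStdError  = cmd_stderr;

    PROCESS_INFORMATION piCmd;
    char cmdline[] = "cmd.exe /K";           // lpCommandLine must be writable
    return CreateProcessA("C:\\WINDOWS\\system32\\cmd.exe", cmdline,
                          NULL, NULL,
                          TRUE,              // handle inheritance flag
                          0, NULL, NULL, &si, &piCmd);
}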

Is there a way to half-close a FILE* file-handle?

My situation is this: In MacOS/X, I've called AuthorizationExecuteWithPrivileges to spawn a privileged child process, and the only way I have to communicate with the child process is by calling fread() and/or fwrite() on the FILE * file-handle returned to me by the final argument to that call.
What I want to do is indicate to the child process that it should go away, which I can do by calling fclose() on the file-handle -- the child process sees that its STDIN_FILENO has closed and responds by exiting.
However, I also want to be able to read any text that the child process printed to its stdout stream before exiting, but calling fclose() on the file-handle precludes doing that.
So my question is, is there any way to "half-close" a FILE *, such that it becomes closed for writing but still open for reading? I'm imagining something analogous to the shutdown(SHUT_WR) call that can be used on a socket descriptor.
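One hedged idea, which only works if the descriptor underneath the FILE * happens to be a socket (an assumption, not something the API guarantees): flush the stream, then shut down only the write side of the underlying descriptor, leaving the read side open.

#include <stdio.h>
#include <sys/socket.h>

// Hypothetical helper: half-close a FILE* whose underlying descriptor is a socket.
// Returns 0 on success, -1 otherwise (for example, ENOTSOCK on a plain pipe).
static int half_close_write(FILE *fp)
{
    if (fflush(fp) != 0)                      // push out any buffered writes first
        return -1;
    return shutdown(fileno(fp), SHUT_WR);     // write side closed, read side stays open
}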

FFmpeg progress track visual C++

In my main process, I create an ffmpeg child process using CreateProcess(...).
I need to track the conversion progress to update a progress bar. To do that, I read text from ffmpeg's output and extract the progress status from it.
I made a sample program like this:
// secattr, pStr, buf, bufsize, reDword and result are declared elsewhere.
HANDLE rPipe, wPipe;
CreatePipe(&rPipe, &wPipe, &secattr, 0);

STARTUPINFO sInfo;
ZeroMemory(&sInfo, sizeof(sInfo));
PROCESS_INFORMATION pInfo;
ZeroMemory(&pInfo, sizeof(pInfo));
sInfo.cb = sizeof(sInfo);
sInfo.dwFlags = STARTF_USESTDHANDLES;
sInfo.hStdInput = NULL;
sInfo.hStdOutput = wPipe;
sInfo.hStdError = wPipe;

// pStr contains the ffmpeg command line
CreateProcess(0, (LPTSTR)pStr, 0, 0, TRUE, NORMAL_PRIORITY_CLASS | CREATE_NO_WINDOW, 0, 0, &sInfo, &pInfo);
CloseHandle(wPipe);

BOOL ok;
do
{
    memset(buf, 0, bufsize);
    ok = ::ReadFile(rPipe, buf, 100, &reDword, 0);
    result += buf;
} while (ok);
But I couldn't get "result" updated interactively. My app is blocked while the conversion runs, and the "result" string is updated only after ffmpeg's process finishes.
How can I have my main process and ffmpeg's process run simultaneously, and interactively read from/write to the ffmpeg process's output/input?
Thanks for your time!
LRs
If ffmpeg just writes to stdout without explicitly flushing the output, then the output may not reach the calling process until ffmpeg exits:
Child processes that use such C run-time functions as printf() and fprintf() can behave poorly when redirected. The C run-time functions maintain separate IO buffers. When redirected, these buffers might not be flushed immediately after each IO call. As a result, the output to the redirection pipe of a printf() call or the input from a getch() call is not flushed immediately and delays, sometimes infinite delays, occur. This problem is avoided if the child process flushes the IO buffers after each call to a C run-time IO function. Only the child process can flush its C run-time IO buffers. A process can flush its C run-time IO buffers by calling the fflush() function.
http://support.microsoft.com/kb/190351
In order to track the progress of your child process while it is running (and after its completion), you need to check the status of this child process.
After the process has been launched, check the status periodically using the following code.
pi is the PROCESS_INFORMATION:
PROCESS_INFORMATION pi;
and the code:
DWORD exitCode = 0;
BOOL success = GetExitCodeProcess(pi.hProcess, &exitCode);
exitCode will hold the value STILL_ACTIVE if the process is still running.
If the function succeeds, the return value of success is nonzero.
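A minimal polling sketch of that idea (illustrative only; the handle comes from the PROCESS_INFORMATION filled in by CreateProcess, and a real application would also keep reading the pipe, in this loop or on another thread):

#include <windows.h>

// Poll the child until it exits; hProcess is pi.hProcess from CreateProcess.
static DWORD PollUntilExit(HANDLE hProcess)
{
    DWORD exitCode = STILL_ACTIVE;
    while (GetExitCodeProcess(hProcess, &exitCode) && exitCode == STILL_ACTIVE)
    {
        // Child still running: update the progress bar / read the pipe here.
        Sleep(200);
    }
    // Caveat: a child that deliberately exits with code 259 (== STILL_ACTIVE)
    // would keep this loop running; WaitForSingleObject avoids that ambiguity.
    return exitCode;
}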

Disable buffering on redirected stdout Pipe (Win32 API, C++)

I'm spawning a process from Win32 using CreateProcess, setting the hStdOutput and hStdError properties of STARTUPINFO to pipe handles created with CreatePipe. I've got two threads reading the pipes, waiting for data to become available (or the process to complete, at which point it checks that there is no data left before terminating the thread).
As data becomes available, I write the output out to effectively a big textbox.
What's happening is the output is being buffered, so a slow running process just gets chunks of data thrown at the text box, but not "as it happens".
I'm not sure if it's the pipe that's doing the buffering, or something to do with the redirection.
Is there any way to either set the pipe to be unbuffered, or start the process in such a way that the stdout is sent as soon as possible?
I'm testing with a test app that prints lines one second apart
Here is line one
(waits one second)
Here is line two
(waits one second)
... etc
The buffering is probably in the C runtime (printf etc.) and there is not much you can do about it (IIRC it does an isatty() check to determine a buffering strategy).
In my case the buffering was in the output of the client (as @Anders wrote), which uses plain printf. Maybe this also depends on the implementation of the C runtime (Visual Studio 2019); maybe the runtime detects 'not a console' and enables buffering.
So I disabled the buffering with this call in my client:
setvbuf(stdout, (char*)NULL, _IONBF, 0);
Now I get the output immediately in the pipe in the server.
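A minimal sketch of where that call goes in the client (assuming a plain console test program like the one described in the question):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    setvbuf(stdout, NULL, _IONBF, 0);    // must run before any output is written
    for (int i = 1; i <= 5; ++i)
    {
        printf("Here is line %d\n", i);  // now reaches the pipe immediately
        Sleep(1000);
    }
    return 0;
}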
Just for completeness: Here's how I read the pipe in the server
HANDLE in;
CreatePipe(&in, &startup.hStdOutput, &sa, 0); // Pipe for stdout of the child
...
char buffer[16384];
DWORD read, total;
while (PeekNamedPipe(in, NULL, 0, &read, &total, NULL))
{
    if (total == 0)
    {
        if (WaitForSingleObject(info.hProcess, 0) == WAIT_OBJECT_0)
        {
            // Child has exited; one final peek to catch any remaining data.
            if (PeekNamedPipe(in, NULL, 0, &read, &total, NULL) && total == 0)
                break;
            continue;
        }
        Sleep(10);
        continue;
    }
    if (total > sizeof(buffer))
        total = sizeof(buffer);
    ReadFile(in, buffer, total, &read, NULL);
    ...
}
There's SetNamedPipeHandleState, but it only controls buffering for remote pipes, not when both ends are on the same computer.
It seems to me you can solve the problem if you set hStdOutput and hStdError of STARTUPINFO not to pipe handles created with CreatePipe, but instead create named pipes (with the CreateNamedPipe function, used just as before, also with a SECURITY_ATTRIBUTES whose bInheritHandle = TRUE; see http://msdn.microsoft.com/en-us/library/aa365782.aspx) and then open them by name with CreateFile using the FILE_FLAG_WRITE_THROUGH flag. As you can read on MSDN (http://msdn.microsoft.com/en-us/library/aa365592.aspx):
The pipe client can use CreateFile to enable overlapped mode by specifying FILE_FLAG_OVERLAPPED or to enable write-through mode by specifying FILE_FLAG_WRITE_THROUGH.
So just reopen the pipe with CreateFile using the FILE_FLAG_WRITE_THROUGH flag and set the resulting handle(s) as hStdOutput and hStdError of STARTUPINFO.
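A minimal sketch of that idea, with a hypothetical pipe name and no error handling (the parent keeps the server end for reading, and the write-through client end becomes the child's stdout/stderr):

#include <windows.h>

// Sketch: the server end of a named pipe is read by the parent, while a
// write-through client end becomes the child's stdout/stderr. The pipe name
// is hypothetical and error handling is omitted.
static HANDLE LaunchChildWriteThrough(const char *childCmd)
{
    // Server (read) end, kept by the parent and not inherited.
    HANDLE serverRead = CreateNamedPipeA(
        "\\\\.\\pipe\\child_stdout",
        PIPE_ACCESS_INBOUND,
        PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
        1, 0x10000, 0x10000, 0, NULL);

    // Client (write) end: inheritable and opened with FILE_FLAG_WRITE_THROUGH.
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
    HANDLE childWrite = CreateFileA(
        "\\\\.\\pipe\\child_stdout",
        GENERIC_WRITE, 0, &sa, OPEN_EXISTING,
        FILE_FLAG_WRITE_THROUGH, NULL);

    STARTUPINFOA si;
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = childWrite;
    si.hStdError  = childWrite;

    PROCESS_INFORMATION pi;
    char cmd[512];
    lstrcpynA(cmd, childCmd, sizeof(cmd));   // CreateProcess wants a writable command line
    CreateProcessA(NULL, cmd, NULL, NULL, TRUE, CREATE_NO_WINDOW, NULL, NULL, &si, &pi);

    CloseHandle(childWrite);                 // the parent keeps only the read end
    return serverRead;                       // ReadFile on this as the child produces output
}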

How can a C/C++ program put itself into background?

What's the best way for a running C or C++ program that's been launched from the command line to put itself into the background, equivalent to if the user had launched it from the unix shell with '&' at the end of the command? (But the user didn't.) It's a GUI app and doesn't need any shell I/O, so there's no reason to tie up the shell after launch. But I want a shell command launch to be auto-backgrounded without the '&' (or on Windows).
Ideally, I want a solution that would work on any of Linux, OS X, and Windows. (Or separate solutions that I can select with #ifdef.) It's ok to assume that this should be done right at the beginning of execution, as opposed to somewhere in the middle.
One solution is to have the main program be a script that launches the real binary, carefully putting it into the background. But it seems unsatisfying to need these coupled shell/binary pairs.
Another solution is to immediately launch another executed version (with 'system' or CreateProcess), with the same command line arguments, but putting the child in the background and then having the parent exit. But this seems clunky compared to the process putting itself into background.
Edited after a few answers: Yes, a fork() (or system(), or CreateProcess on Windows) is one way to sort of do this, that I hinted at in my original question. But all of these solutions make a SECOND process that is backgrounded, and then terminate the original process. I was wondering if there was a way to put the EXISTING process into the background. One difference is that if the app was launched from a script that recorded its process id (perhaps for later killing or other purpose), the newly forked or created process will have a different id and so will not be controllable by any launching script, if you see what I'm getting at.
Edit #2:
fork() isn't a good solution for OS X, where the man page for 'fork' says that it's unsafe if certain frameworks or libraries are being used. I tried it, and my app complains loudly at runtime: "The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec()."
I was intrigued by daemon(), but when I tried it on OS X, it gave the same error message, so I assume that it's just a fancy wrapper for fork() and has the same restrictions.
Excuse the OS X centrism, it just happens to be the system in front of me at the moment. But I am indeed looking for a solution to all three platforms.
My advice: don't do this, at least not under Linux/UNIX.
GUI programs under Linux/UNIX traditionally do not auto-background themselves. While this may occasionally be annoying to newbies, it has a number of advantages:
Makes it easy to capture standard error in case of core dumps / other problems that need debugging.
Makes it easy for a shell script to run the program and wait until it's completed.
Makes it easy for a shell script to run the program in the background and get its process id:
gui-program &
pid=$!
# do something with $pid later, such as check if the program is still running
If your program forks itself, this behavior will break.
"Scriptability" is useful in so many unexpected circumstances, even with GUI programs, that I would hesitate to explicitly break these behaviors.
Windows is another story. AFAIK, Windows programs automatically run in the background--even when invoked from a command shell--unless they explicitly request access to the command window.
On Linux, daemon() is what you're looking for, if I understand you correctly.
The way it's typically done on Unix-like OSes is to fork() at the beginning and exit from the parent. This won't work on Windows, but on systems where fork() exists it is much more elegant than launching another process.
Three things need doing,
fork
setsid
redirect STDIN, STDOUT and STDERR to /dev/null
This applies to POSIX systems (all the ones you mention claim to be POSIX (but Windows stops at the claiming bit))
On UNIX, you need to fork twice in a row and let the parent die.
A process cannot put itself into the background, because it isn't the one in charge of background vs. foreground. That would be the shell, which is waiting for process exit. If you launch a process with an ampersand "&" at the end, then the shell does not wait for process exit.
But the only way the process can escape the shell is to fork off another child and then let its original self exit back to the waiting shell.
From the shell, you can background a process with Control-Z, then type "bg".
Backgrounding a process is a shell function, not an OS function.
If you want an app to start in the background, the typical trick is to write a shell script to launch it that launches it in the background.
#! /bin/sh
/path/to/myGuiApplication &
To followup on your edited question:
I was wondering if there was a way to put the EXISTING process into the background.
In a Unix-like OS, there really is not a way to do this that I know of. The shell is blocked because it is executing one of the variants of a wait() call, waiting for the child process to exit. There is not a way for the child process to remain running but somehow cause the shell's wait() to return with a "please stop watching me" status. The reason you have the child fork and exit the original is so the shell will return from wait().
Here is some pseudocode for Linux/UNIX:
initialization_code()
if(failure) exit(1)
if( fork() > 0 ) exit(0)      /* parent exits, child continues */
setsid()                      /* new session, no controlling TTY */
setup_signal_handlers()
for(fd=0; fd<NOFILE; fd++) close(fd)
open("/dev/null", O_RDONLY)   /* becomes fd 0 (stdin)  */
open("/dev/null", O_WRONLY)   /* becomes fd 1 (stdout) */
open("/dev/null", O_WRONLY)   /* becomes fd 2 (stderr) */
chdir("/")
And congratulations, your program continues as an independent "daemonized" process without a controlling TTY and without any standard input or output.
Now, in Windows you simply build your program as a Win32 application with WinMain() instead of main(), and it runs without a console automatically. If you want to run as a service, you'll have to look that up because I've never written one and I don't really know how they work.
You edited your question, but you may still be missing the point that your question is a syntax error of sorts -- if the process wasn't put in the background to begin with and you want the PID to stay the same, you can't ignore the fact that the program which started the process is waiting on that PID and that is pretty much the definition of being in the foreground.
I think you need to think about why you want to both put something in the background and keep the PID the same. I suggest you probably don't need both of those constraints.
As others mentioned, fork() is how to do it on *nix. You can get fork() on Windows by using MingW or Cygwin libraries. But those will require you to switch to using GCC as your compiler.
In pure Windows world, you'd use CreateProcess (or one of its derivatives CreateProcessAsUser, CreateProcessWithLogonW).
The simplest form of backgrounding is:
if (fork() != 0) exit(0);
In Unix, if you want to background an disassociate from the tty completely, you would do:
Close all descriptors which may access a tty (usually 0, 1, and 2).
if (fork() != 0) exit(0);
setpgid(0, getpid()); /* Might be necessary to prevent a SIGHUP on shell exit. */
signal(SIGHUP,SIG_IGN); /* just in case, same as using nohup to launch program. */
fd=open("/dev/tty",O_RDWR);
ioctl(fd,TIOCNOTTY,0); /* Disassociates from the terminal */
close(fd);
if (fork() != 0) exit(0); /* just for good measure */
That should fully daemonize your program.
The most common way of doing this under Linux is via forking. The same should work on Mac, as for Windows I'm not 100% sure but I believe they have something similar.
Basically what happens is the process splits itself into two processes, and then the original one exits (returning control to the shell or whatever), and the second process continues to run in the background.
I'm not sure about Windows, but on UNIX-like systems, you can fork() then setsid() the forked process to move it into a new process group that is not connected to a terminal.
Under Windows, the closest thing you're going to get to fork() is loading your program as a Windows service, I think.
Here is a link to an intro article on Windows services...
CodeProject: Simple Windows Service Sample
So, as you say, just fork()ing will not do the trick. What you must do is fork() and then re-exec(), as this code sample does:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <CoreFoundation/CoreFoundation.h>

int main(int argc, char **argv)
{
    int i, j;
    for (i = 1; i < argc; i++)
        if (strcmp(argv[i], "--daemon") == 0)
        {
            /* Strip the --daemon argument, fork, and re-exec ourselves. */
            for (j = i + 1; j < argc; j++)
                argv[j - 1] = argv[j];
            argv[argc - 1] = NULL;
            if (fork()) return 0;   /* parent exits, the shell gets its prompt back */
            execv(argv[0], argv);   /* child re-execs without --daemon */
            return 0;
        }
    sleep(1);
    CFRunLoopRun();
    CFStringRef hello = CFSTR("Hello, world!");
    printf("str: %s\n", CFStringGetCStringPtr(hello, CFStringGetFastestEncoding(hello)));
    return 0;
}
The loop is to check for a --daemon argument, and if it is present, remove it before re-execing so an infinite loop is avoided.
I don't think this will work if the binary is put into the path because argv[0] is not necessarily a full path, so it will need to be modified.
/** Daemonize */
pid_t pid;
pid = fork();              /* the father makes a little daemon (the son) */
if (pid > 0)
    exit(0);               /* the father dies */
while (1) {
    printf("Hello, I'm your little daemon %d\n", pid);   /* the child daemon goes on */
    sleep(1);
}
/** Try 'nohup' on Linux (usage: nohup <command> &) */
In Unix, I have learned to do that using fork().
If you want to put a running process into the background, fork it twice.
I was trying this solution myself. Only one fork is needed from the parent process. The most important point is that, after the fork, the parent process must die by calling _exit(0); and NOT by calling exit(0);. When _exit(0); is used, the command prompt returns immediately in the shell. This is the trick.
If you need a script to have the PID of the program, you can still get it after a fork.
When you fork, save the PID of the child in the parent process. When you exit the parent process, either output the PID to STD{OUT,ERR} or simply have a return pid; statement at the end of main(). A calling script can then get the pid of the program, although it requires a certain knowledge of how the program works.
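A minimal sketch of that approach (the parent reports the child's PID on stdout before exiting, so a calling script can capture it with something like pid=$(./myprog)):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid > 0)
    {
        printf("%d\n", (int)pid);   /* parent: report the background child's PID */
        _exit(0);                   /* parent exits; the shell prompt returns */
    }
    setsid();                       /* child: detach from the controlling terminal */
    /* ... the program's real (background) work goes here ... */
    sleep(60);
    return 0;
}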
