I am writing a Win32 app that uses a named pipe for inter-process communication. When one process wants to send data, it calls WriteFile to write a structure (telling the other process how many bytes follow, among other info), then calls WriteFile again to write the actual data.
The other process reads the first message, then reads the second message based on the information it got from the first.
My questions are:
If the server process is writing data but the client process hasn't read it yet, is it possible to lose the first message while the client is reading? For example, when the server calls WriteFile the second time to write the actual data, will the previous message be overwritten?
Is WaitForSingleObject the best way to synchronize the two processes?
Thanks
A pipe is a little like a real pipe -- when you write more to the pipe, it doesn't overwrite what was already in the pipe. It just adds more data to the pipe that will be delivered after the data that you previously wrote to the pipe.
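To make that concrete, here is a minimal sketch of the header-then-payload framing over a byte-mode named pipe; MSG_HEADER and the helper names are hypothetical, and error handling is trimmed:

```c
#include <windows.h>

typedef struct {
    DWORD cbData;                /* number of payload bytes that follow */
} MSG_HEADER;

/* On a byte-mode pipe a single ReadFile may return fewer bytes than
   requested, so loop until the full block has arrived. */
static BOOL ReadExact(HANDLE hPipe, void *p, DWORD cb)
{
    BYTE *dst = (BYTE *)p;
    DWORD got;
    while (cb > 0) {
        if (!ReadFile(hPipe, dst, cb, &got, NULL) || got == 0)
            return FALSE;
        dst += got;
        cb  -= got;
    }
    return TRUE;
}

/* Writer: two sequential WriteFile calls. The second call's bytes are
   queued behind the first's -- nothing already in the pipe is overwritten. */
BOOL SendMsg(HANDLE hPipe, const void *data, DWORD cbData)
{
    MSG_HEADER hdr = { cbData };
    DWORD written;
    if (!WriteFile(hPipe, &hdr, sizeof hdr, &written, NULL)) return FALSE;
    return WriteFile(hPipe, data, cbData, &written, NULL);
}

/* Reader: a synchronous ReadFile blocks until data arrives, so no extra
   synchronization object is needed. */
BOOL RecvMsg(HANDLE hPipe, void *buf, DWORD cbBuf)
{
    MSG_HEADER hdr;
    if (!ReadExact(hPipe, &hdr, sizeof hdr)) return FALSE;
    if (hdr.cbData > cbBuf) return FALSE;    /* caller's buffer too small */
    return ReadExact(hPipe, buf, hdr.cbData);
}
```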
I rarely find WaitForSingleObject useful for a pipe. If you want to block the current thread until it receives data from the pipe, you can just do a synchronous read, and it'll block until there's data. If you want to block until there's input from any of a number of sources, you usually want WaitForMultipleObjects or MsgWaitForMultipleObjects, so your thread will run when any of the sources has input to process.
The only times I can recall using WaitForSingleObject on a pipe were with a zero timeout, so the receiver would continue other processing if there was no pipe input, and every once in a while check if the pipe has some data to process. While it initially seems like PeekNamedPipe would work for this, it's really most useful for other purposes -- though it might work for you, to read the header data and figure out what other code to invoke to read and process the entire message.
Having said all that, I feel obliged to point out that I haven't written any new code using named pipes in quite a while. I can think of very few situations in which I'd even consider them today -- I'd almost always use sockets instead.
I am currently doing the Ruby on the Web project for The Odin Project. The goal is to implement a very basic webserver that parses and responds to GET or POST requests.
My solution uses IO#gets and IO#read(maxlen) together with the Content-Length Header attribute to do the parsing.
Other solutions use IO#read_nonblock. I googled it, but was quite confused by the documentation. It's often mentioned together with Kernel#select, which didn't really help either.
Can someone explain to me what the nonblock calls do differently than the normal ones, how they avoid blocking the thread of execution, and how they play together with the Kernel#select method?
explain to me what the nonblock calls do differently than the normal ones
The crucial difference in behavior shows up when there is no data available to read at call time, but the stream is not yet at EOF:
read_nonblock() raises an exception of kind IO::WaitReadable
normal read(length) waits until length bytes are read (or EOF)
how they avoid blocking the thread of execution
According to the documentation, #read_nonblock uses the read(2) system call after O_NONBLOCK has been set for the underlying file descriptor.
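A rough illustration of that mechanism in POSIX C (this is the underlying idea, not Ruby's actual implementation; try_read is a hypothetical helper):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns bytes read, 0 at EOF, -1 on a real error, or -2 when the call
   would have blocked -- the case Ruby surfaces as IO::WaitReadable. */
ssize_t try_read(int fd, char *buf, size_t len)
{
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);    /* make fd nonblocking */

    ssize_t n = read(fd, buf, len);
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -2;                             /* no data right now */
    return n;
}
```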
how they play together with the Kernel#select method?
There's also IO.select. We can use it in this case to wait for input data to become available, so that a subsequent read_nonblock() won't raise an error. This is especially useful when there are multiple input streams and it is not known which stream data will arrive on next, and hence which stream read() would have to be called on.
With a blocking write you wait until the bytes have been written to the file; a nonblocking write, on the other hand, returns immediately. That means you can continue executing your program while the operating system writes the data out asynchronously. Then, when you want to write again, you use select to see whether the file is ready to accept the next write.
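A sketch of that readiness pattern in POSIX C (fd1 and fd2 are hypothetical, already-open pipe or socket descriptors):

```c
#include <sys/select.h>
#include <unistd.h>

/* Wait until either descriptor is readable, then read only from the
   ready ones -- so the reads return promptly instead of blocking on
   the wrong stream. */
void pump(int fd1, int fd2)
{
    char buf[4096];
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(fd1, &readable);
        FD_SET(fd2, &readable);
        int nfds = (fd1 > fd2 ? fd1 : fd2) + 1;

        /* Blocks until at least one descriptor has data (or EOF). */
        if (select(nfds, &readable, NULL, NULL, NULL) == -1)
            break;

        if (FD_ISSET(fd1, &readable)) {
            if (read(fd1, buf, sizeof buf) <= 0)   /* won't block now */
                break;
        }
        if (FD_ISSET(fd2, &readable)) {
            if (read(fd2, buf, sizeof buf) <= 0)
                break;
        }
    }
}
```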
I'm currently using this example as a guide to redirect standard error of a child process launched by CreateProcess.
However, unlike the example, I'm currently waiting until the process finishes (checking GetExitCodeProcess), closing the pipe, and then reading the error output if a non-zero return code comes back.
However, I've since read that if the pipe fills up, the child process will block until the pipe is drained. The reason I'm not currently reading from the pipe during execution is that the ReadFile call blocks (the standard error output only appears at the end), so I can't pump the message queue to keep the GUI from "ghosting" and being marked not responding.
I can't find any reference to how big the pipe is by default (although I can set a size myself). Is this something I need to worry about, given that I'm buffering the output into a string variable for later use anyway? (That is, it would need to fit into the process's available memory, so it has a hard limit there; it's not going to a file like most of the examples use.)
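For what it's worth, the usual way around the blocking ReadFile is to drain the pipe on a worker thread while the GUI thread keeps pumping messages; a minimal sketch, assuming hRead is the read end of the pipe from the linked example (DrainPipeThread is a hypothetical name):

```c
#include <windows.h>

static DWORD WINAPI DrainPipeThread(LPVOID param)
{
    HANDLE hRead = (HANDLE)param;
    char buf[4096];
    DWORD n;
    /* ReadFile blocks here, on the worker thread, instead of the GUI
       thread. It returns FALSE (ERROR_BROKEN_PIPE) once the child exits
       and the last write handle closes. */
    while (ReadFile(hRead, buf, sizeof buf, &n, NULL) && n > 0) {
        /* append buf[0..n) to the captured-error string here */
    }
    return 0;
}

/* Usage, after CreateProcess succeeds and the parent has closed its copy
   of the child's write handle:
   HANDLE hThread = CreateThread(NULL, 0, DrainPipeThread, hRead, 0, NULL);
   ...pump messages / wait for the process...
   WaitForSingleObject(hThread, INFINITE);   // all output collected */
```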
I need to send data from child processes to parent. Some of this data is HTML, plain text, etc. but it may also be necessary send image data, zip file data, etc.
As I understand it, anonymous pipes use the child process's standard input and standard output. Conventionally stdin and stdout only convey textual data: would there be any problem with sending non-printable characters through this mechanism?
There is no inherent relation between anonymous pipes and stdin/stdout. Since a process has only one stdin and one stdout, you could create only one anonymous pipe per process that way, which sounds stupid, doesn't it? You can redirect a child process's stdin/stdout to the pipe, yes, but you don't have to if the child process can report back by some other means (such as a log file or network activity). A call to CreatePipe gives you a reading handle and a writing handle, and it's up to you how you use them. Sending arbitrary binary data is indeed possible; an anonymous pipe is in no way different from a named pipe in that respect.
Even if you do choose to use stdin/stdout redirection to pass the pipe handle(s) to the child process, you shouldn't have any problems provided the child process uses the Windows API to send the data rather than the C runtime library functions.
That is, WriteFile will work perfectly, but printf would not be a good idea.
You can use GetStdHandle to get the handle(s) to the pipe(s) for use with the Windows API functions.
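A minimal sketch of the child side, assuming its standard output has been redirected to the pipe (SendBinary is a hypothetical helper):

```c
#include <windows.h>

/* Fetch the redirected stdout handle and push raw bytes through it with
   WriteFile. No text-mode translation happens at this level, so image or
   zip data passes through unchanged. */
BOOL SendBinary(const BYTE *data, DWORD cb)
{
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);   /* pipe's write end */
    if (hOut == INVALID_HANDLE_VALUE || hOut == NULL) return FALSE;

    DWORD written, total = 0;
    while (total < cb) {
        if (!WriteFile(hOut, data + total, cb - total, &written, NULL))
            return FALSE;
        total += written;
    }
    return TRUE;
}
/* By contrast, printf goes through the C runtime's stdout stream, which
   may be line-buffered and, in text mode, translates '\n' to "\r\n" --
   corrupting a binary payload. */
```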
The systemu page says:
systemu can be used on any platform to return status, stdout, and stderr of any command. unlike other methods like open3/popen4 there is zero danger of full pipes or threading issues hanging your process or subprocess.
(https://github.com/ahoward/systemu)
Could anyone explain this a little bit?
Methods like popen and its various spinoffs are convenient and are part of the expected API for a full I/O library.
However, they must be used either casually or carefully, because they are prone to deadlock. By casually, I mean that if you both write to and read from the command, it's still OK as long as you either don't write a lot or don't read a lot. By carefully, I mean that you can move large amounts of data, but only if you keep the inner details of the operation in mind and deliberately engineer against deadlock.
Imagine writing lots of stuff to your popened command and then reading a result. If you write more than the pipe will buffer, your process will sleep. That's OK in practice most of the time, but what if the command itself has to write a lot of output? Now it may sleep and never finish reading the input you are sending, you will never finish sending the input, and so you will never wake up to read the results.
Deadlock!
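To make the failure mode concrete, here is a minimal C sketch using pipe(2) and fork(2) directly (C's popen(3) is one-directional, so the two pipes are set up by hand); the child just runs cat:

```c
#include <string.h>
#include <unistd.h>

int main(void)
{
    int to_child[2], from_child[2];
    pipe(to_child);                        /* parent -> child */
    pipe(from_child);                      /* child -> parent */

    if (fork() == 0) {                     /* child: just run `cat` */
        dup2(to_child[0], STDIN_FILENO);
        dup2(from_child[1], STDOUT_FILENO);
        close(to_child[0]);   close(to_child[1]);
        close(from_child[0]); close(from_child[1]);
        execlp("cat", "cat", (char *)NULL);
        _exit(127);
    }
    close(to_child[0]);
    close(from_child[1]);

    char chunk[4096];
    memset(chunk, 'x', sizeof chunk);

    /* Send ~1 MiB, far more than a typical pipe buffer (often 64 KiB).
       cat echoes each chunk into from_child; once that pipe is full, cat
       blocks on its write and stops reading, to_child fills up too, and
       the write below sleeps forever. Deadlock. */
    for (int i = 0; i < 256; i++)
        write(to_child[1], chunk, sizeof chunk);
    close(to_child[1]);                    /* never reached */

    char buf[4096];
    while (read(from_child[0], buf, sizeof buf) > 0)
        ;                                  /* the fix: read concurrently */
    return 0;
}
```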
I have some basic questions about pipes I am unsure about.
a) What is the standard behavior if a process writing to a pipe gets killed (e.g. by SIGKILL or SIGINT)? Does it close the pipe? Does it flush the pipe? Or is the behavior undefined?
b) What is the standard behavior if a process returns normally? Is it guaranteed to flush the pipe and close the pipe (without explicitly doing so, of course)?
I would like these answers to be as general as possible, but if in reality they depend entirely on the OS, I can accept that! However, if there is a POSIX standard or a currently defined Windows behavior, I would be very grateful to know.
Thanks.
a. What is the standard behavior if a process writing to a pipe gets killed (e.g. by SIGKILL or SIGINT)? Does it close the pipe? Does it flush the pipe? Or is the behavior undefined?
SIGKILL never allows any cleanup - the process dies, dead. With SIGINT, it depends on whether the process handles the signal. If it does, it is likely to exit via exit(3), which flushes standard I/O streams. The question is: was the pipe connected to standard output, or via popen()? If so, outstanding buffered data may be flushed; if not, there is no buffered data, so flushing is immaterial.
If there is unread data in the pipe, that data remains in the pipe, ready for the reader to collect - assuming there is a reader.
b. What is the standard behavior if a process returns normally? Is it guaranteed to flush the pipe and close the pipe (without explicitly doing so, of course)?
It depends on whether the pipe was connected via standard I/O or not. If not, there is nothing pending. If so, then yes, any material in the buffers will be flushed as the standard I/O stream is closed.
c. Thanks for the info on signals and the unread data, but I'm a little confused about the standard I/O pipe connection. After you mentioned popen() I looked it up, and the man page says its return value is identical to an I/O stream and that streams are fully buffered by default. I'm just not clear on the difference between the two, nor do I understand where the difference comes from.
The basic system call for creating pipes is pipe(2). It creates two file descriptors, one for the read end of the pipe, one for the write end. If you do nothing else with them, then they remain as file descriptors, with unbuffered output (via write(2) and related system calls). If the process terminates, there is no buffering in the application; the pipe is closed.
If you use popen(3), it does a whole lot more work for you. It still invokes pipe(2) to create the pipe, but it then calls fork(2). The child arranges the correct configuration of the pipe's ends and executes the requested command. The parent closes the unused end of the pipe and uses fdopen(3) to create a standard I/O file stream for the calling process to use.
With the file stream, if there is data in the I/O buffer, then a close or equivalent will ensure that the outstanding data is flushed and the file descriptor is closed.
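A small sketch contrasting the two levels (assumed POSIX C; the command and the data are illustrative):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Raw descriptors from pipe(2): write(2) hands the bytes straight to
       the pipe; nothing is buffered in this process. */
    int fds[2];
    pipe(fds);
    const char *msg = "raw bytes\n";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);

    char buf[64];
    read(fds[0], buf, sizeof buf);
    close(fds[0]);

    /* A popen(3) stream: fprintf may leave the data sitting in the FILE
       buffer until fflush, pclose, or a normal exit flushes it. */
    FILE *fp = popen("cat", "w");
    fprintf(fp, "buffered bytes\n");       /* possibly still in user space */
    pclose(fp);                            /* flushes, closes, reaps cat */
    return 0;
}
```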
The normal behaviour is that all file descriptors are closed when a process terminates. This means that a pipe, like any other open file descriptor, is closed normally.
One interesting thing about pipes, though: in POSIX, if a process writes to a pipe whose read end has been closed, the writer gets a signal, SIGPIPE.
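A minimal sketch of that behavior (POSIX C; SIGPIPE is ignored here so the failure shows up as an EPIPE error instead of terminating the process):

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    pipe(fds);
    close(fds[0]);                   /* no reader remains */

    signal(SIGPIPE, SIG_IGN);        /* default action would kill us */
    if (write(fds[1], "x", 1) == -1 && errno == EPIPE)
        printf("write failed: %s\n", strerror(errno));
    return 0;
}
```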
Edit:
A caveat: the difference between a SIGx termination and a normal termination is that, as with any other file write, you may lose data that has been buffered (via a FILE stream) and not yet written to the underlying file descriptor.