Facing trouble in understanding the Windows client/server architecture for processes? - visual-studio-2010

I have written a simple program that goes like this:
#include <stdio.h>

int main()
{
    int i = 0;
    while (1)
        printf("......%d...\n", i++);
    return 0;
}
When this process runs under Windows, it acts as a client and the server is Csrss.exe. Now my question is: when this client tries to print something, it sends a request to the server, and the rest of the printing is done by the server (Csrss.exe). But what happens to the client? Does the client process continue executing without caring whether the value actually gets printed, or does the server block the client until it gets some notification from system space?

If you are going with the second answer, then please also explain this: MSDN says that after calling CreateProcess() we should use the WaitForInputIdle() API to make sure the object has actually been created in system space. What I take from that statement is that the server does not block the client after the client makes a request.

And if you are going with the first answer, is the output of the program still correct, i.e. not a single value of i gets missed?

Basically, your process' standard output is a pipe to csrss.exe. printf writes to that pipe. It would block at least until the data is stored in some system buffer. Whether it blocks all the way until the server reads the text from the pipe and renders it on the screen, I'm not sure, and I don't quite see why it would matter.
WaitForInputIdle is only significant for windowed applications, not for console ones. It waits until the target process calls GetMessage or similar for the first time. Console processes don't typically do that. From the documentation: "If this process is a console application ..., WaitForInputIdle returns immediately."
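To make the CreateProcess()/WaitForInputIdle() pattern concrete, here is a minimal sketch in C; launching notepad.exe is an arbitrary choice of a windowed target, and error handling is abbreviated:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFO si = { sizeof(si) };   /* only cb needs to be set here */
    PROCESS_INFORMATION pi;
    TCHAR cmd[] = TEXT("notepad.exe"); /* must be writable for CreateProcess */

    if (!CreateProcess(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* Blocks until the new process waits for input with none pending, or
       the timeout expires. For a console application this returns
       immediately, as the quoted documentation says. */
    DWORD rc = WaitForInputIdle(pi.hProcess, 5000);
    printf("WaitForInputIdle returned %lu\n", rc);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}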
I don't understand your concern about "missed values".

Related

Making STDIN unbuffered under Windows in Perl

I am trying to do input processing (from the console) in Perl asynchronously. My first approach was to use IO::Select but that does not work under Windows.
I then came across the post Non-buffered processor in Perl which roughly suggests this:
use IO::Handle;   # for ->blocking and ->autoflush

binmode STDIN;
binmode STDOUT;
STDIN->blocking(0) or warn $!;   # request non-blocking reads
STDOUT->autoflush(1);

while (1) {
    my $buffer;
    my $read_count = sysread(STDIN, $buffer, 4096);
    if (not defined($read_count)) {
        # No data available yet; poll again.
        next;
    } elsif (0 == $read_count) {
        # EOF on standard input.
        exit 0;
    }
    # ... process $buffer ...
}
That works as expected for regular Unix systems but not for Windows, where the sysread actually does block. I have tested that on Windows 10 with 64-bit Strawberry Perl 5.32.1.
When you check the return value of blocking() (as done in the code above), it turns out that the call fails with the funny error message "An operation was attempted on something that is not a socket".
Edit: My application is a chess engine that theoretically can be run interactively in a terminal but usually communicates via pipes with a GUI. Therefore, Win32::Console does not help.
Has something changed since the blog post was published? The author explicitly claims that this approach works for Windows. Is there any other option that I can go with, maybe some module from the Win32:: namespace?
The solution I now implemented in https://github.com/gflohr/Chess-Plisco/blob/main/lib/Chess/Plisco/Engine.pm (search for the method __msDosSocket()) can be outlined as follows:
- If Windows is detected as the operating system, create a temporary file as a Unix domain socket with IO::Socket::UNIX for writing.
- Do a fork(), which under Perl on Windows actually creates a thread, because the system does not have a real fork().
- In the "parent", create another instance of IO::Socket::UNIX with the same path for reading.
- In the "child", read from standard input with getline(). This blocks, of course. Every line read is echoed to the write end of the socket.
- The "parent" uses the read end of the socket as a replacement for standard input and puts it into non-blocking mode. That works even under Windows, because it is a socket.
- From here on, everything works the same as under Unix: all input is read in non-blocking mode with IO::Select.
Instead of a Unix domain socket, it is probably wiser to route the communication through the loopback interface, because under Windows it is hard to guarantee that a temporary file gets deleted when the process terminates: you cannot unlink it while it is in use. It is also stated in the comments that IO::Socket::UNIX may not work under older Windows versions, so inet sockets are probably more portable.

I also had trouble terminating both threads. A call to kill() does not seem to work. In my case, the protocol the program implements specifies that the command "quit", read from standard input, should cause the program to terminate. The child thread therefore checks whether the line read was "quit" and exits in that case. A proper solution should find a better way for the parent to kill the child.
I did not bother to ignore SIGCHLD (because it doesn't exist under Windows) or call wait*() because fork does not spawn a new process image under Windows but only a new thread.
This approach is close to the one suggested in one of the comments to the question, only that the thread comes in disguise as a child process created by fork().
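Since the crux of the trick is language-independent -- a helper thread does the blocking reads and relays them through a socket, on which select() does work under Windows -- here is the shape of the approach sketched in C, using a loopback connection as suggested above. All names are illustrative and error handling is omitted; the real implementation is the Perl code linked above. Link against ws2_32.

#include <winsock2.h>
#include <windows.h>
#include <stdio.h>
#include <string.h>

static SOCKET to_parent;   /* write end, used only by the reader thread */

static DWORD WINAPI stdin_reader(LPVOID unused)
{
    char line[4096];
    (void)unused;
    while (fgets(line, sizeof line, stdin) != NULL)   /* blocking, but only here */
        send(to_parent, line, (int)strlen(line), 0);
    closesocket(to_parent);                           /* EOF on stdin */
    return 0;
}

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    /* Build a connected loopback pair; Windows has no socketpair(). */
    struct sockaddr_in addr = { 0 };
    int len = sizeof addr;
    SOCKET lsock = socket(AF_INET, SOCK_STREAM, 0);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(lsock, (struct sockaddr *)&addr, sizeof addr);   /* port 0: any free */
    getsockname(lsock, (struct sockaddr *)&addr, &len);   /* learn the port */
    listen(lsock, 1);
    to_parent = socket(AF_INET, SOCK_STREAM, 0);
    connect(to_parent, (struct sockaddr *)&addr, sizeof addr);
    SOCKET from_child = accept(lsock, NULL, NULL);
    closesocket(lsock);

    CreateThread(NULL, 0, stdin_reader, NULL, 0, NULL);

    for (;;) {
        fd_set readable;
        struct timeval tv = { 0, 100000 };   /* wake up every 100 ms */
        FD_ZERO(&readable);
        FD_SET(from_child, &readable);
        if (select(0, &readable, NULL, NULL, &tv) > 0) {
            char buf[4096];
            int n = recv(from_child, buf, sizeof buf, 0);
            if (n <= 0)
                break;                         /* stdin reached EOF */
            fwrite(buf, 1, (size_t)n, stdout); /* handle a chunk of input */
        }
        /* ... do other periodic work here without ever blocking on stdin ... */
    }
    WSACleanup();
    return 0;
}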
The other suggestion was to use the module Win32::Console. This does not work for two reasons:
As the name suggests, it only works for the console. But my software is a backend for a GUI frontend and rarely runs in a console.
The underlying API is for keyboard and mouse events. It works fine for keystrokes and most mouse events, but polling an event blocks as soon as the user has selected something with the mouse. So even for a real console application, this approach would not work. A solution built on Win32::Console must also handle events like presses of the Ctrl, Alt, or Shift keys, because those events do not guarantee that input can be read immediately from the tty.
It is somewhat surprising that a task as trivial as non-blocking I/O on a file descriptor is so hard to implement in a portable way in Perl because Windows actually has a similar concept called "overlapped" I/O. I tried to understand that concept, failed at it, and concluded that it is true to the Windows maxim "make easy things hard, and hard things impossible". Therefore I just cannot blame the Perl developers for not using it as an emulation of non-blocking I/O. Maybe it is simply not possible.

Confusion about Ruby's IO#(read/write)_nonblock calls

I am currently doing the Ruby on the Web project for The Odin Project. The goal is to implement a very basic webserver that parses and responds to GET or POST requests.
My solution uses IO#gets and IO#read(maxlen) together with the Content-Length Header attribute to do the parsing.
Other solutions use IO#read_nonblock. I googled for it, but was quite confused by the documentation. It's often mentioned together with Kernel#select, which didn't really help either.
Can someone explain to me what the nonblock calls do differently than the normal ones, how they avoid blocking the thread of execution, and how they play together with the Kernel#select method?
explain to me what the nonblock calls do differently than the normal ones
The crucial difference in behavior shows up when there is no data available to read at the time of the call, but the stream is not yet at EOF:

- read_nonblock() raises an exception that is a kind of IO::WaitReadable
- a normal read(length) waits until length bytes have been read (or EOF is hit)
how they avoid blocking the thread of execution
According to the documentation, #read_nonblock uses the read(2) system call after O_NONBLOCK has been set for the underlying file descriptor.
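In C terms (a sketch of the underlying mechanism, not of Ruby's actual implementation), that amounts to roughly this:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Switch the descriptor into non-blocking mode, as read_nonblock does. */
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

    char buf[4096];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);   /* data was ready */
    else if (n == 0)
        puts("EOF");
    else if (errno == EAGAIN || errno == EWOULDBLOCK)
        puts("no data ready yet");           /* Ruby raises IO::WaitReadable */
    return 0;
}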
how they play together with the Kernel#select method?
There's also IO.select. We can use it in this case to wait for the availability of input data, so that a subsequent read_nonblock() won't raise. This is especially useful when there are multiple input streams and it is not known which stream data will arrive on next, so a plain blocking read() on any single one of them could stall the others.
In a blocking write you wait until the bytes have been written to the file; a nonblocking write, on the other hand, returns immediately: it transfers whatever currently fits into the kernel's buffer (possibly failing with an EAGAIN-style error if nothing fits), and the operating system flushes the data asynchronously. You can then continue executing your program, and when you want to write again, you use select to see whether the file is ready to accept the next write.
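The interplay of the two calls, again sketched in C (IO.select plus read_nonblock plays the same role in Ruby; the helper name read_when_ready is made up for illustration):

#include <sys/select.h>
#include <unistd.h>

/* Wait until fd is readable, then read. Assumes O_NONBLOCK is already set,
   so the read itself can no longer stall the calling thread. */
ssize_t read_when_ready(int fd, char *buf, size_t len)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    select(fd + 1, &rfds, NULL, NULL, NULL);   /* block here, not in read */
    return read(fd, buf, len);
}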

Maximum size of pipe used by CreateProcess

I'm currently using this example as a guide to redirect standard error of a child process launched by CreateProcess.
However, unlike the example, I'm currently waiting until the process finishes (checking GetExitCodeProcess), closing the pipe, and then reading the error output if a non-zero return code comes back.

I've since read, though, that if the pipe fills up, the child process will block until the pipe is drained. The reason I'm not currently reading from the pipe during execution is that the ReadFile call blocks (standard error is only output at the end), so I can't pump the message queue, and the GUI "ghosts" and gets marked as not responding.

I can't find any reference to how big the pipe is by default (although I can set a size myself). Is this something I need to worry about, given that I'm buffering the output into a string variable for later use anyway? (I.e. it would need to fit into the available memory of the process, so it has a hard limit there; it's not going to a file like most of the examples have.)
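One common way around both problems, rather than relying on the pipe's default buffer size, is to drain the pipe on a worker thread while the GUI thread keeps pumping messages, so the child can never stall on a full pipe. A sketch of that pattern; all names are hypothetical and error handling is trimmed:

#include <windows.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    HANDLE read_end;   /* our end of the child's stderr pipe */
    char  *data;       /* accumulated output, grown as needed */
    size_t size;
} DRAIN;

static DWORD WINAPI drain_pipe(LPVOID param)
{
    DRAIN *d = (DRAIN *)param;
    char chunk[4096];
    DWORD got;

    /* ReadFile may block, but only this worker thread waits; it returns
       FALSE with ERROR_BROKEN_PIPE once the child closes its end. */
    while (ReadFile(d->read_end, chunk, sizeof chunk, &got, NULL) && got > 0) {
        char *grown = realloc(d->data, d->size + got + 1);
        if (grown == NULL)
            break;
        d->data = grown;
        memcpy(d->data + d->size, chunk, got);
        d->size += got;
        d->data[d->size] = '\0';
    }
    return 0;
}

Start the thread right after CreateProcess, close your own copy of the child's write handle (otherwise ReadFile never sees end-of-pipe), and only then WaitForSingleObject on the process and the thread before using the collected string.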

Named pipe WriteFile questions (Win32)

I am writing a Win32 app that uses a named pipe for inter-process communication. When one process wants to send something, it first writes a structure (telling the other process how many bytes follow and other info), then writes the actual data by calling WriteFile again.

The other process, when reading, reads the first message and then reads the second message based on the information it got from the first.
My questions are:
If the server process is writing data but the client process hasn't read it yet, is it possible to lose the first message? For example, when the server calls WriteFile the second time to write the actual data, will the previous message be overwritten?

Is there any good way to use WaitForSingleObject to synchronize them?
Thanks
A pipe is a little like a real pipe -- when you write more to the pipe, it doesn't overwrite what was already in the pipe. It just adds more data to the pipe that will be delivered after the data that you previously wrote to the pipe.
I rarely find WaitForSingleObject useful for a pipe. If you want to block the current thread until it receives data from the pipe, you can just do a synchronous read, and it'll block until there's data. If you want to block until there's input from any of a number of sources, you usually want WaitForMultipleObjects or MsgWaitForMultipleObjects, so your thread will run when any of the sources has input to process.
The only times I can recall using WaitForSingleObject on a pipe were with a zero timeout, so the receiver would continue other processing if there was no pipe input, and every once in a while check if the pipe has some data to process. While it initially seems like PeekNamedPipe would work for this, it's really most useful for other purposes -- though it might work for you, to read the header data and figure out what other code to invoke to read and process the entire message.
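For reference, the zero-timeout style of polling mentioned above looks roughly like this in C (a sketch; pipe_bytes_ready is a made-up helper name):

#include <windows.h>

/* Returns how many bytes ReadFile could fetch right now without blocking. */
static DWORD pipe_bytes_ready(HANDLE pipe)
{
    DWORD avail = 0;
    if (!PeekNamedPipe(pipe, NULL, 0, NULL, &avail, NULL))
        avail = 0;   /* broken or closed pipe: treat as nothing to read */
    return avail;
}

/* Typical use in a polling loop:
       if (pipe_bytes_ready(h) > 0)
           ReadFile(h, buf, sizeof buf, &got, NULL);  // won't block now
       else
           do_other_work();                           // keep the thread busy
*/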
Having said all that, I feel obliged to point out that I haven't written any new code using named pipes in quite a while. I can think of very few situations in which I'd even consider them today -- I'd almost always use sockets instead.

Get the current state of Caps/Scroll/Num Lock in Windows without using Peek/ReadConsoleInput()

I'm programming a Windows console application in plain C and using PeekConsoleInput/ReadConsoleInput to get keystrokes from the user and process them.
I need to get the current state of the Caps Lock, Scroll Lock, and Num Lock keys when the program starts, before the user has entered anything. Meaning there would be no KEY_EVENTs in the message queue to process.
Is this possible to do? If so, how? I've looked at most of the functions in wincon.h and nothing seems appropriate.
You can call GetAsyncKeyState three times, and it will usually work, but there are a few cases where it still won't work for you. The arguments for your three calls would be VK_CAPITAL, VK_SCROLL, and VK_NUMLOCK.
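For illustration, a sketch of querying all three at startup. GetKeyState is used here instead of GetAsyncKeyState, plainly as a substitution: its low-order bit is documented to hold the toggle state of a lock key, which may avoid some of the cases where the answer above only "usually" works:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The low-order bit of GetKeyState() is 1 while the lock is toggled on. */
    printf("Caps Lock   : %s\n", (GetKeyState(VK_CAPITAL) & 1) ? "on" : "off");
    printf("Num Lock    : %s\n", (GetKeyState(VK_NUMLOCK) & 1) ? "on" : "off");
    printf("Scroll Lock : %s\n", (GetKeyState(VK_SCROLL)  & 1) ? "on" : "off");
    return 0;
}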
