What is the purpose of closeAfterStart in exec - go

I'm reading go exec source code. https://cs.opensource.google/go/go/+/refs/tags/go1.17.3:src/os/exec/exec.go
When StdinPipe is called, the reader is added to an array, closeAfterStart. When Start() is called, the reader is closed. I'm not sure I understand why they close the reader just after starting the process.

To mirror what Penélope Stevens is saying, os.Pipe returns *os.File values backed by an OS-level pipe. By the time the *os.File returned by os.Pipe is closed, its file descriptor has already been passed to the newly spawned process. The close only closes the file descriptor in this process; the spawned process can still read from/write to that pipe.
The file descriptor is grabbed here: https://cs.opensource.google/go/go/+/refs/tags/go1.17.3:src/os/exec/exec.go;l=404-415;drc=refs%2Ftags%2Fgo1.17.3
And then passed to the spawned process with ProcAttr: https://pkg.go.dev/os#ProcAttr
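Here's a minimal sketch of the same idea (not the exec package's own code, and it assumes a Unix-like system with cat available): the parent hands the read end of a pipe to the child, then closes its own copy of that descriptor right after Start. The child keeps reading because it holds its own duplicate of the fd.

package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    r, w, err := os.Pipe()
    if err != nil {
        panic(err)
    }

    cmd := exec.Command("cat")
    cmd.Stdin = r // r's file descriptor is duplicated into the child at Start
    cmd.Stdout = os.Stdout

    if err := cmd.Start(); err != nil {
        panic(err)
    }
    r.Close() // same idea as closeAfterStart: the parent's copy is no longer needed

    fmt.Fprintln(w, "hello from the parent") // the child still reads this fine
    w.Close()                                // EOF for the child
    cmd.Wait()
}

Closing r here only releases the parent's reference; the child's stdin stays open until the child itself exits.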

Related

Is there a way to half-close a FILE* file-handle?

My situation is this: In MacOS/X, I've called AuthorizationExecuteWithPrivileges to spawn a privileged child process, and the only way I have to communicate with the child process is by calling fread() and/or fwrite() on the FILE * file-handle returned to me by the final argument to that call.
What I want to do is indicate to the child process that it should go away, which I can do by calling fclose() on the file-handle -- the child process sees that its STDIN_FILENO has closed and responds by exiting.
However, I also want to be able to read any text that the child process printed to its stdout stream before exiting, but calling fclose() on the file-handle precludes doing that.
So my question is, is there any way to "half-close" a FILE *, such that it becomes closed-for-writing but still-open-for-reading? I'm imagining something analogous to the shutdown(SHUT_WR) that can be used on a socket descriptor.

Why doesn't os/exec.CombinedOutput() have a race condition?

The Go bytes.Buffer isn't thread-safe. Yet, when I read the source code I notice that os/exec.CombinedOutput() uses the same buffer for both c.Stdout and c.Stderr. Reading further into the implementation of the package, it looks like there is no synchronisation when writing to c.Stderr/c.Stdout here.
Am I missing something or have I found a possible synchronisation issue? AFAIK stderr and stdout can be written to concurrently by the child process.
Stderr and Stdout are not written to concurrently if they are the same writer, as stated in the Cmd documentation:
If Stdout and Stderr are the same writer, at most one goroutine at a time will call Write.
This feature is implemented in the Cmd.stderr function. If Stdout and Stderr are the same, then the same fd is passed to the child process as both stdout and stderr. In the case where the fd is a pipe with a goroutine to pump it, there's only one goroutine writing to Stdout/Stderr.
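A small hedged sketch of what CombinedOutput effectively sets up (the shell command here is only an illustration and assumes a Unix shell): both Stdout and Stderr point at the same bytes.Buffer, so the package passes a single fd to the child for both streams and guarantees that at most one goroutine calls Write at a time.

package main

import (
    "bytes"
    "fmt"
    "os/exec"
)

func main() {
    var buf bytes.Buffer
    cmd := exec.Command("sh", "-c", "echo to-stdout; echo to-stderr 1>&2")
    cmd.Stdout = &buf
    cmd.Stderr = &buf // same writer, so the same pipe is used for stdout and stderr

    if err := cmd.Run(); err != nil {
        panic(err)
    }
    fmt.Print(buf.String()) // both lines end up in the one buffer, with no data race
}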

If IO.pipe behaves like a resource, why is it required to close the pipe in the child process?

The following code works, but if the reader and writer are resources shared across the parent and child processes, why are they closed in the first place?
reader, writer = IO.pipe
fork do
  reader.close
  writer.puts "foobar"
end
writer.close
puts reader.read
This makes no sense to me, because I think the reader and writer should be closed after the write operation, like in the following code I made:
reader, writer = IO.pipe
fork do
  writer.puts "foobar"
  writer.close
end
Process.wait
puts reader.read
reader.close
I don't know why it doesn't work. Can anyone give me an idea?
What is going on is, quoting from Storimer, Jesse, Working with UNIX Processes (http://workingwithunixprocesses.com, 2012), p. 93:
...when the reader calls IO#read it will continue trying to read data until it sees an EOF (aka. end-of-file marker [2]). This tells the reader that no more data will be available for reading.
So long as the writer is still open, the reader might see more data, so it waits. Closing the writer before reading puts an EOF on the pipe, so the reader stops reading after it gets the initial data. If you skip closing the writer, the reader will block and keep trying to read indefinitely.
I highly recommend giving his book a read if you are going to be working a lot with IO (including sockets).
As stated in your other question, How to maintain the TCP connection using Ruby?, you can manually force the buffer to flush using IO#flush, or make the stream sync after every write/read by setting IO#sync = true.
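The same EOF rule holds for OS-level pipes in general, not just Ruby's IO.pipe. Here is a small Go sketch of it (Go rather than Ruby, purely as an illustration): the read only returns once every copy of the write end has been closed.

package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    r, w, err := os.Pipe()
    if err != nil {
        panic(err)
    }

    go func() {
        fmt.Fprintln(w, "foobar")
        w.Close() // without this close, io.ReadAll below would block forever
    }()

    data, err := io.ReadAll(r) // returns only after EOF, i.e. after w is closed
    if err != nil {
        panic(err)
    }
    fmt.Print(string(data))
}

In the fork examples above there are two copies of the write end, one in the parent and one in the child, so both have to be closed before the reader sees EOF; that is why the parent's writer.close matters.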

FFmpeg progress tracking in Visual C++

In my main process, I create an ffmpeg child process using CreateProcess(...).
I need to track the status of the conversion progress to update a progress bar. To do this, I read text from ffmpeg's output and extract the progress status from it.
I made a sample program like this:
SECURITY_ATTRIBUTES secattr = { sizeof(secattr) };
secattr.bInheritHandle = TRUE;   // the child must be able to inherit the pipe handles

HANDLE rPipe, wPipe;
CreatePipe(&rPipe, &wPipe, &secattr, 0);

STARTUPINFO sInfo;
ZeroMemory(&sInfo, sizeof(sInfo));
PROCESS_INFORMATION pInfo;
ZeroMemory(&pInfo, sizeof(pInfo));
sInfo.cb = sizeof(sInfo);
sInfo.dwFlags = STARTF_USESTDHANDLES;
sInfo.hStdInput = NULL;
sInfo.hStdOutput = wPipe;
sInfo.hStdError = wPipe;

// pStr contains the ffmpeg command line
CreateProcess(0, (LPTSTR)pStr, 0, 0, TRUE,
              NORMAL_PRIORITY_CLASS | CREATE_NO_WINDOW, 0, 0, &sInfo, &pInfo);
CloseHandle(wPipe);   // close the parent's copy of the write end

const DWORD bufsize = 100;
char buf[bufsize + 1];   // +1 keeps the buffer NUL-terminated after a full read
DWORD reDword;
std::string result;
BOOL ok;
do
{
    memset(buf, 0, sizeof(buf));
    ok = ::ReadFile(rPipe, buf, bufsize, &reDword, 0);
    result += buf;
} while (ok);
But I couldn't get "result" updated interactively. My app hangs during the conversion, and the "result" string is updated only after the ffmpeg process finishes.
How can I have my main process and ffmpeg run simultaneously, and interactively read from/write to the ffmpeg process's output/input?
Thanks for your time!
LRs
If ffmpeg just writes to stdout without explicitly flushing the output, then the output may not get sent to the calling process until ffmpeg exits:
Child processes that use such C run-time functions as printf() and
fprintf() can behave poorly when redirected. The C run-time functions
maintain separate IO buffers. When redirected, these buffers might not
be flushed immediately after each IO call. As a result, the output to
the redirection pipe of a printf() call or the input from a getch()
call is not flushed immediately and delays, sometimes-infinite delays
occur. This problem is avoided if the child process flushes the IO
buffers after each call to a C run-time IO function. Only the child
process can flush its C run-time IO buffers. A process can flush its C
run-time IO buffers by calling the fflush() function.
http://support.microsoft.com/kb/190351
In order to track the progress of your child process while it is running (and after its completion), you need to check the status of the child process.
After the process has been launched, check its status periodically using the following code.
pi is the PROCESS_INFORMATION:
PROCESS_INFORMATION pi;
and the code:
DWORD exitCode = 0;
BOOL success = GetExitCodeProcess(pi.hProcess, &exitCode);
exitCode will hold the value STILL_ACTIVE if the process is still running.
If the function succeeds, the return value of success is nonzero.
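As a cross-reference to the Go questions above, here is a hedged Go sketch of the same overall approach: read the child's progress output incrementally from a pipe while it runs, instead of waiting for it to exit. The ffmpeg arguments and file names are placeholders; -progress pipe:1 asks ffmpeg to print newline-terminated key=value progress lines on stdout, which are easier to parse than the carriage-return-terminated stats it writes to stderr.

package main

import (
    "bufio"
    "fmt"
    "os/exec"
)

func main() {
    // Placeholder input/output names; -nostats silences the stderr status line.
    cmd := exec.Command("ffmpeg", "-i", "in.mp4", "-progress", "pipe:1", "-nostats", "out.avi")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        panic(err)
    }
    if err := cmd.Start(); err != nil {
        panic(err)
    }

    scanner := bufio.NewScanner(stdout)
    for scanner.Scan() {
        fmt.Println(scanner.Text()) // e.g. out_time=..., progress=...; feed this into the progress bar
    }
    cmd.Wait()
}

The equivalent in the Win32 code above would be to run the ReadFile loop on a separate thread so the UI thread is never blocked.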

file_operations question: how do I know if a process that opened a file for writing has decided to close it?

I'm currently writing a simple "multicaster" module.
Only one process can open a proc filesystem file for writing, and the rest can open it for reading.
To do so I use the inode_operations .permission callback: I check the operation and, when I detect someone opening a file for writing, I set a flag ON.
I need a way to detect whether a process that opened a file for writing has decided to close the file, so I can set the flag OFF and someone else can open it for writing.
Currently, when someone opens the file for writing, I save the current->pid of that process, and when the .close callback is called I check whether that process is the one I saved earlier.
Is there a better way to do that? Without saving the pid, perhaps by checking the files that the current process has opened and their permissions...
Thanks!
No, it's not safe. Consider a few scenarios:
Process A opens the file for writing, and then fork()s, creating process B. Now both A and B have the file open for writing. When Process A closes it, you set the flag to 0 but process B still has it open for writing.
Process A has multiple threads. Thread X opens the file for writing, but Thread Y closes it. Now the flag is stuck at 1. (Remember that ->pid in kernel space is actually the userspace thread ID).
Rather than doing things at the inode level, you should be doing things in the .open and .release methods of your file_operations struct.
Your inode's private data should contain a struct file *current_writer;, initialised to NULL. In the file_operations.open method, if it's being opened for write then check the current_writer; if it's NULL, set it to the struct file * being opened, otherwise fail the open with EPERM. In the file_operations.release method, check if the struct file * being released is equal to the inode's current_writer - if so, set current_writer back to NULL.
PS: Bandan is also correct that you need locking, but using the inode's existing i_mutex should suffice to protect current_writer.
I hope I understood your question correctly: when someone wants to write to your proc file, you set a variable called flag to 1 and also save current->pid in a global variable. Then, when any close() entry point is called, you check the current->pid of the close() instance and compare it with your saved value. If it matches, you turn flag off. Right?
Consider this situation: process A wants to write to your proc resource, so the permission callback runs. It sees that flag is 0, so it is about to set it to 1 for process A. But at that moment the scheduler finds that process A has used up its time slice and picks a different process to run (flag is still 0!). After some time, process B comes along wanting to write to your proc resource as well, sees that flag is 0, sets it to 1, and goes about writing to the file. Unfortunately, at this moment process A gets scheduled to run again; since it still thinks that flag is 0 (remember, before the scheduler preempted it, flag was 0), it sets it to 1 and also goes about writing to the file. End result: the data in your proc resource gets corrupted.
You should use a proper locking mechanism provided by the kernel for this type of operation; based on your requirement, I think RCU is the best fit: have a look at the RCU locking mechanism.
