The Go bytes.Buffer isn't thread-safe. Yet, when I read the source code, I noticed that os/exec's CombinedOutput() uses the same buffer for both c.Stdout and c.Stderr. Reading further into the package's implementation, it looks like there is no synchronisation when writing to c.Stdout/c.Stderr there.
Am I missing something or have I found a possible synchronisation issue? AFAIK stderr and stdout can be written to concurrently by the child process.
Stderr and Stdout are not written to concurrently if they are the same writer, as stated in the Cmd documentation:
If Stdout and Stderr are the same writer, at most one goroutine at a time will call Write.
This behaviour is implemented in the Cmd.stderr method. If Stdout and Stderr are the same writer, the same fd is passed to the child process for both its stdout and its stderr. In the case where the fd is a pipe with a goroutine to pump it, there is only one goroutine writing to Stdout/Stderr.
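For illustration, here is a minimal sketch of roughly what CombinedOutput does under the hood: a single bytes.Buffer is assigned to both Stdout and Stderr, and because the exec package serialises the Write calls when both streams share one writer, the non-thread-safe buffer is safe to use. The shell command is only a stand-in.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Example command that writes to both stdout and stderr.
	cmd := exec.Command("sh", "-c", "echo to-stdout; echo to-stderr 1>&2")

	var buf bytes.Buffer
	cmd.Stdout = &buf // same writer for both streams: exec guarantees
	cmd.Stderr = &buf // at most one goroutine calls Write at a time

	if err := cmd.Run(); err != nil {
		fmt.Println("run error:", err)
	}
	fmt.Print(buf.String())
}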
Related
I'm reading the Go os/exec source code: https://cs.opensource.google/go/go/+/refs/tags/go1.17.3:src/os/exec/exec.go
When StdinPipe is called, the reader end of the pipe is appended to the closeAfterStart slice. When Start() is called, that reader is closed. I don't understand why they close the reader just after starting the process.
To mirror what Penélope Stevens is saying: os.Pipe maps to an underlying operating-system pipe. By the time the *os.File returned by os.Pipe is closed, it has already been passed to the newly spawned process. The close only closes the file descriptor in this process; the spawned process can still read/write from that pipe.
The file descriptor is grabbed here: https://cs.opensource.google/go/go/+/refs/tags/go1.17.3:src/os/exec/exec.go;l=404-415;drc=refs%2Ftags%2Fgo1.17.3
And then passed to the spawned process with ProcAttr: https://pkg.go.dev/os#ProcAttr
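A small sketch of the same idea, using os.Pipe directly instead of StdinPipe: once Start has spawned the child, the parent can close its copy of the read end, because the child already holds its own descriptor for that end of the pipe. The "cat" command is only an example.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	pr, pw, err := os.Pipe()
	if err != nil {
		panic(err)
	}

	cmd := exec.Command("cat")
	cmd.Stdin = pr
	cmd.Stdout = os.Stdout

	if err := cmd.Start(); err != nil {
		panic(err)
	}
	pr.Close() // parent's copy of the read end; the child keeps reading fine

	fmt.Fprintln(pw, "hello through the pipe")
	pw.Close() // the child sees EOF on its stdin and exits

	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}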
My program normally uses the controlling terminal to read input from the user.
// GetCtty gets the file descriptor of the controlling terminal.
func GetCtty() (*os.File, error) {
return os.OpenFile("/dev/tty", os.O_RDONLY, 0)
}
I currently construct s := bufio.NewScanner(GetCtty()) several times during the program and read the input with s.Scan() and s.Text(), which works nicely.
However, for testing I am simulating the following input on stdin to my CLI Go program:
echo -e "yes\nno\nyes\n" | app
This does not work correctly, because the first constructed s will, after s.Scan(), already have buffered further test input, which is then no longer available to a scanner constructed later with bufio.NewScanner and its subsequent Scan.
I am wondering how I can make sure that s *bufio.Scanner reads only one line from the stdin stream, or how I can mock the input to the controlling terminal.
I had several guesses, but I am not sure whether they work:
Using only one bufio.Scanner in the whole program would be a solution, but I did not want to go this way...
Writing the buffered data back to GetCtty() with s.WriteTo(GetCtty())? That probably won't work, as the data would get appended instead of prepended on stdin.
Somehow reading only a single line without consuming any further bytes; does that ultimately mean reading not in chunks but byte by byte?
Use iotest.OneByteReader to keep the scanner from buffering past the line it is reading:
s := bufio.NewScanner(iotest.OneByteReader(GetCtty()))
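A sketch of how this can look; reading from os.Stdin instead of GetCtty() is an assumption made here so it matches the piped test input from above. Each scanner wraps the file in iotest.OneByteReader, so a Scan consumes exactly the bytes of its own line and leaves the rest for the next scanner.

package main

import (
	"bufio"
	"fmt"
	"os"
	"testing/iotest"
)

// readLine consumes a single line from f without reading past it.
func readLine(f *os.File) string {
	s := bufio.NewScanner(iotest.OneByteReader(f))
	s.Scan()
	return s.Text()
}

func main() {
	fmt.Println(readLine(os.Stdin)) // "yes"
	fmt.Println(readLine(os.Stdin)) // "no"
	fmt.Println(readLine(os.Stdin)) // "yes"
}

Run as: echo -e "yes\nno\nyes\n" | app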
My situation is this: on macOS, I've called AuthorizationExecuteWithPrivileges to spawn a privileged child process, and the only way I have to communicate with the child process is by calling fread() and/or fwrite() on the FILE * file-handle returned via the final argument of that call.
What I want to do is indicate to the child process that it should go away, which I can do by calling fclose() on the file-handle -- the child process sees that its STDIN_FILENO has closed and responds by exiting.
However, I also want to be able to read any text that the child process printed to its stdout stream before exiting, but calling fclose() on the file-handle precludes doing that.
So my question is: is there any way to "half-close" a FILE *, such that it becomes closed-for-writing but still-open-for-reading? I'm imagining something analogous to the shutdown(SHUT_WR) that can be used on a socket descriptor.
I'm trying to run a console app and read/write its standard I/O. When the app writes to its output via WriteFile(GetStdHandle(...)), I successfully read that output with ReadFile on the pipe.
The problem is that when the target app uses fprintf instead, ReadFile blocks until the target app exits, at which point it returns the entire output at once. And when the target app blocks (say, in fgets()), ReadFile blocks as well.
I'm using standard pipe redirection: http://msdn.microsoft.com/en-us/library/windows/desktop/ms682499(v=vs.85).aspx
What causes this strange behaviour, and how do I get around it?
It is likely due to the fact that fprintf is buffered while WriteFile is not. Can you call fflush after the fprintf and try the same?
As I understand it, fork() creates a child process by copying the image of the parent process.
My question is about how do child and parent processes share the stdout stream?
Can the printf() of one process be interrupted by the other, which would cause mixed output, or is printf() output atomic?
For example:
The first case:
parent: printf("Hello");
child: printf("World\n");
Console has: HeWollorld
The second case:
parent: printf("Hello");
child: printf("World\n");
Console has: HelloWorld
printf() is not guaranteed to be atomic. If you need atomicity, use write() with a string preformatted using s*printf() etc., if needed. Even then, you should make sure that the amount of data written with a single write() is not too big:
Write requests of {PIPE_BUF} bytes or less shall not be interleaved with data from other processes doing writes on the same pipe. Writes of greater than {PIPE_BUF} bytes may have data interleaved, on arbitrary boundaries, with writes by other processes, whether or not the O_NONBLOCK flag of the file status flags is set.
stdout is usually line-buffered. stderr is usually unbuffered.
The behavior of printf() may vary (depending on the exact details of your OS, C compiler, etc.). However, in general printf() is not atomic, so interleaving (as in your 1st case) can occur.