stdout and stderr are out-of-order when a bash script is run from Atom - bash

I tried to write a little script, in a .sh file, that lists a nonexistent directory and then echoes done:
#!/bin/bash
ls notexist
echo 'done'
But my console prints done on the first line, before the error message about the nonexistent directory:
done
ls: notexist: No such file or directory
I don't think bash automatically creates a thread for each line of code, does it? I'm using the terminal in macOS Big Sur.
Edit: I'm accessing the terminal indirectly, through the script package of the Atom text editor on macOS Big Sur. The error goes away if I run the script directly in the console via ./file.sh.

If we look at the source code to the Atom script plugin, the problem becomes clear:
It creates a BufferedProcess with separate stdout and stderr callbacks (using them, among other things, to determine whether any output has been written to each of these streams).
Implementing this requires stdout and stderr to be directed to different FIFOs. Unlike a typical terminal, where both streams share a single channel and writes have an absolute ordering, there is no strict guarantee that content arriving on two separate FIFOs will be processed by those callbacks in the same order it was written.
As a workaround, you can exec 2>&1 in your script to put all content on stdout, or exec >&2 to put all content on stderr. Ideally, if the script plugin doesn't need to track the two streams separately, it would do this itself, and register a callback only on the single stream to which all content has been redirected.
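For the script in the question, that workaround is a one-line change; a minimal sketch, assuming the plugin only needs the combined output:
#!/bin/bash
# Merge stderr into stdout for everything below, so a consumer reading
# the two streams through separate callbacks sees a single ordered stream.
exec 2>&1

ls notexist
echo 'done'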

Related

Why go programs print output to terminal screen but not /dev/stderr?

As I see in the Go source,
the log package prints output to os.Stderr, which is
Stderr = NewFile(uintptr(syscall.Stderr), "/dev/stderr")
So why, when I run this program in my terminal with the command go run main.go,
is the output printed to the terminal screen and not to /dev/stderr?
// main.go
package main

import "log"

func main() {
	log.Println("this is my first log")
}
In standard Unix/Linux terminals, both stdout and stderr are connected to the terminal so the output goes there.
Here's a shell snippet to clarify this:
$ echo "joe" >> /dev/stderr
joe
Even though we echoed "joe" to something that looks like a file, it gets emitted to the screen. Replace /dev/stderr with /tmp/foo and you won't see the output on the screen (though it will be appended to the file /tmp/foo).
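To see the contrast concretely (assuming /tmp/foo is writable):
$ echo "joe" >> /tmp/foo    # nothing appears on the screen
$ cat /tmp/foo
joe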
In Go you can specifically select which stream to output to by passing it to functions like fmt.Fprintf in its first argument.
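You can also confirm from the shell which stream log.Println used; a quick check with the main.go above is to discard each stream in turn:
$ go run main.go 1>/dev/null   # the log line still appears: it went to stderr
$ go run main.go 2>/dev/null   # the log line disappears: stdout carried nothing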
Well, several things are going on here.
First, on a UNIX-like system (and you appear to be on a Linux-based one), the environment in which each user-space program runs includes the concept of the so-called "standard I/O streams": each program started by the OS and given control automatically has three file descriptors opened and available, representing the standard input stream, the standard output stream, and the standard error stream.
Second, typically (but not always) the spawned program inherits these streams from its parent program. For the case of an interactive shell running in a terminal (or a terminal emulator), that parent program is the shell, and so the standard I/O streams of the spawned program are inherited from the shell.
The shell's standard I/O streams, in turn, are naturally connected to the terminal it runs in: that's why it's possible to input data to the shell and read what it prints back. You actually type into the terminal, not the shell; it's the terminal which delivers that data to the shell, and the shell's output takes the reverse path.
Third, /dev/stderr is a Linux-specific "hack": a virtual device meaning "whatever my stderr is connected to".
That is, when a process opens that special file, it gets back a file descriptor connected to whatever the process' stderr is already connected to.
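On many Linux systems you can see this indirection directly (the exact symlink target may vary by system):
$ ls -l /dev/stderr          # typically a symlink to /proc/self/fd/2
$ readlink -f /dev/stderr    # resolves through the calling process's own fd 2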
Fourth, let's grok the code example you cited:
NewFile(uintptr(syscall.Stderr), "/dev/stderr")
Here, a call to os.NewFile is made, receiving two arguments.
To cite its documentation:
$ go doc os.NewFile
func NewFile(fd uintptr, name string) *File
NewFile returns a new File with the given file descriptor and name. The returned value will be nil if fd is not a valid file descriptor.
<…>
OK, so this function takes a raw kernel-level file descriptor and the name of the file it is supposed to have been opened for.
That latter bit is crucial: the OS kernel itself is (almost) oblivious to what sort of stream a file descriptor actually represents, at least as far as its public API is concerned.
So, when NewFile is called by the log package to obtain an instance of *os.File for the program's standard error stream,
it does not open the file "/dev/stderr" (even though it exists);
it merely uses that name, since os.NewFile requires one.
It could have passed "" there to much the same effect, except for error reporting: if something failed while using the resulting *os.File, the error would not include the name "/dev/stderr".
The syscall.Stderr value is merely the number of the file descriptor connected to the standard error stream.
On UNIX-compatible kernels it's always 2; you can run go doc syscall.Stderr and see for yourself.
To recap,
The call NewFile(...) you referred to does not open any files;
it merely wraps an already open file descriptor connected to the standard error stream of the current process into a value of type os.File which is used throughout the os package for I/O on files.
On Linux, the special virtual device file /dev/stderr does really exist but it has nothing to do with what's happening here.
When you run a program in an interactive shell without using any I/O redirection, the standard streams of the created process are connected to the same "sinks and sources" as those of the shell. And they, in turn, are most of the time connected to the terminal which hosts the shell.
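Redirection changes the child's streams without affecting the shell's; a quick illustration, using a hypothetical ./myprog that writes to both streams:
$ ./myprog              # both streams appear on the terminal
$ ./myprog 2>err.log    # stderr goes to err.log; stdout still reaches the terminal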
Now I urge you to fetch an introductory book on the design of UNIX-like operating systems and read it.

Shell IO redirection order, pipe version

I have seen this question:
Shell redirection i/o order.
But I have another question. If this line fails to redirect stderr to file:
ls -xy 2>&1 1>file
Then why this line can redirect stderr to grep?
ls -xy 2>&1 | grep ls
I want to know how it is actually being run underneath.
It is said that 2>&1 redirects stderr to a copy of stdout. What does "a copy of stdout" mean? What is actually being copied?
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Do the other redirections go through the OS as well (I don't think so, as the terminal can handle this itself)?
The pipe redirection (connecting standard output of one command to the stdin of the next) happens before the redirection performed by the command.
That means by the time 2>&1 happens, the stdout of ls is already setup to connect to stdin of grep.
See the man page of bash:
Pipelines
The standard output of command is connected via a pipe to the standard input of command2. This connection is performed before any redirections specified by the command (see REDIRECTION below). If |& is used, command's standard error, in addition to its standard output, is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
(emphasis mine).
Whereas in the former case (ls -xy 2>&1 1>file), nothing like that happens: when 2>&1 is performed, the stdout of ls is still connected to the terminal (it hasn't yet been redirected to the file).
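A minimal sketch of this left-to-right processing, using echo in place of ls:
{ echo out; echo err >&2; } 2>&1 1>file   # "err" stays on the terminal, "out" goes to file
{ echo out; echo err >&2; } 1>file 2>&1   # both lines end up in file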
That answers my first question. What about the others?
Well, your second question has already been answered in the comments. (What is being duplicated is a file descriptor).
As to your last question(s),
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Do the other redirections go through the OS as well (I don't think so, as the terminal can handle this itself)?
it is the shell which attaches the standard streams of the processes it creates (pipes first, then <>’s, as you have just learned). In the default case, it attaches them to its own streams, which might be attached to a tty, with which you can interact in a number of ways, usually a terminal emulation window, or a serial console, whatever. Terminal is a very ambiguous word.

fastest command, which accepts pipe, but generates no text on StdOut & StdErr

Which command in a Windows command script (.cmd) accepts a pipe (so that no "The Process tried to write to a nonexistent pipe." error is generated) but produces no output itself, including output to StdErr? I need to leave the normal StdErr output untouched (keep in mind, the pipe transports only StdOut). I can't use the null device, because it's not installed on the system.
For example, command|rem generates the mentioned error. But I want no error output except that generated by command, so rem is not a suitable command for the purpose.
The main aim is script speed, so please don't suggest elaborate constructions.
It should be break (or maybe set /p= ?), as it is an internal command prompt command (i.e. no external process is started) and generally does nothing. Though I suppose if you are searching for an executable shipped with Windows, the answer will be different.
The cd . command is what you are looking for. I used it to create empty files. This way, command | cd . works, and echo Hello >&2 | cd . shows "Hello" on the screen! This works as a filter that just blocks Stdout (interesting).

Does stdout get stored somewhere in the filesystem or in memory? [duplicate]

This question already has answers here: Send output of last command to a file automatically in bash? (closed 8 years ago)
I know I can save the result of a command to a variable using last_output=$(my_cmd) but what I'd really want is for $last_output to get updated every time I run a command. Is there a variable, zsh module, or plugin that I could install?
I guess my question is: does stdout get written somewhere permanent (at least until the next command)? That way I could manipulate the results of the previous command without having to re-run it. This would be really useful for commands that take a long time to run.
If you run the following:
exec > >(tee save.txt)
# ... stuff here...
exec >/dev/tty
...then your stdout for everything run between the two commands will go both to the terminal and to save.txt.
You could, of course, write a shell function which does this for you:
with_saved_output() {
  "$@" \
    2> >(tee "$HOME/.last-command.err" >&2) \
    | tee "$HOME/.last-command.out"
}
...and then use it at will:
with_saved_output some-command-here
...and zsh almost certainly will provide a mechanism to wrap interactively-entered commands. (In bash, which I can speak to more directly, you could do the same thing with a DEBUG trap).
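A minimal sketch of the DEBUG-trap idea (note: this records only the text of each command, not its output; combining it with a wrapper like the one above is left to the reader):
# bash: stash the command line about to be executed
trap 'last_cmd=$BASH_COMMAND' DEBUG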
However, even though you can, you shouldn't do this: When you split stdout and stderr into two streams, information about the exact ordering of writes is lost, even if those streams are recombined later.
Thus, the output
O: this is written to stdout first
E: this is written to stderr second
could become:
E: this is written to stderr second
O: this is written to stdout first
when these streams are individually passed through tee subprocesses to have copies written to disk. There are also buffering concerns created, and differences in behavior caused by software which checks whether it's outputting to a TTY and changes its behavior (for instance, software which turns color-coded output on when writing directly to console, and off when writing to a file or pipeline).
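That last behavior is easy to check from a script; the standard [ -t 1 ] test reports whether stdout is a terminal:
if [ -t 1 ]; then echo "stdout is a TTY"; else echo "stdout is redirected"; fi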
stdout is just a file handle that by default is connected to the console, but could be redirected.
yourcommand > save.txt
If you want to display the output on the console and save it to a file at the same time, you could pipe the output to tee, a command that writes everything it receives on stdin to stdout and to a file of your choice:
yourcommand | tee save.txt

How to get error output and store it in a variable or file

I'm having a little trouble figuring out how to get error output and store it in a variable or file in ksh. So in my script I have cp -p source.file destination inside a while loop.
When I get the below error
cp: source.file: The file access permissions do not allow the specified action.
I want to grab it and store it in a variable or file.
Thanks
You can redirect the error output of the command like so:
cp -p source.file destination 2>> my_log.txt
It will append the error message to the my_log.txt file.
In case you want a variable you can redirect stderr to stdout and assign the command output to a variable:
my_error_var=$(cp -p source.file destination 2>&1)
In ksh (as per the question), as in bash and other sh derivatives, you can get all of (or just) the stderr output from cp using redirection, then grab it in a variable (using $(), which is better than backticks in any vaguely recent version):
output=$(cp -p source.file destination 2>&1)
cp doesn't normally output anything, though this captures both stdout and stderr; to capture just stderr this way, add 1>/dev/null as well. The other solutions that redirect to a file could use cat or various other commands to output/process the logfile.
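Order matters in that stderr-only capture; a sketch:
# 2>&1 first points stderr at the capture pipe; 1>/dev/null then
# discards stdout, so only the error text lands in the variable
err_only=$(cp -p source.file destination 2>&1 1>/dev/null)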
Reason why I don't suggest using outputting to temporary files:
Redirecting to a file and then reading it back in (via the read command, or less efficiently via $(cat file)), particularly for just a single line, is slower and less efficient; though it's not so bad if you want to append to the file across multiple operations before displaying the errors. You'll also leave the temporary file around unless you ALWAYS clean it up; don't forget that people can interrupt (i.e. Ctrl-C) or kill the script.
Using temporary files can also be a problem if the script is run multiple times at once (e.g. via cron, if filesystem or other delays cause massive overruns, or simply from multiple users), unless the temporary filename is unique.
Generating temporary files is also a security risk unless done very carefully, especially if the file data is processed again or the contents could be rewritten before display by something else to confuse/phish the user or break the script. Don't get into the habit of doing it too casually; read up on temporary files (e.g. mktemp) first via other questions here or Google.
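If you do end up needing a temporary file, a minimal safer pattern (assuming mktemp is available, as on most modern systems):
# unique temp file, removed even on Ctrl-C or early exit
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
cp -p source.file destination 2>"$tmpfile"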
You can do STDERR redirects by doing:
command 2> /path/to/file.txt

Resources