Redirection of standard error & output streams is postponed - Windows

I need to redirect the output & error streams from one Windows process (GNU make.exe executing the armcc toolchain) to a filter written in Perl. The command I am running is:
Make Release 2>&1 | c:\cygwin\bin\perl ../tools/armfilt.pl
The compilation process emits various messages, which should be written to STDOUT after some modifications. But I encountered a problem: all output generated by make is actually postponed until the end of the make process and only then shown to the user. So, my questions are:
Why does this happen? I tried changing the priority of the second process (perl.exe) from "Normal" to "Above normal", but it didn't help...
How to overcome this problem?
I think one possible workaround may be to send only the STDERR output to Perl (which is all I actually need), not STDOUT+STDERR. But I don't know how to do that in Windows.
The Microsoft explanation concerning pipe operator usage says:
The pipe operator (|) takes the output (by default, STDOUT) of one
command and directs it into the input (by default, STDIN) of another
command.
But how to change this default piping of STDOUT is not explained. Is it possible at all?
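For reference, cmd.exe does let you choose what reaches the pipe by shuffling the handles before the | is applied. A commonly cited sketch (untested here, and assuming make writes its diagnostics to STDERR) uses the otherwise unused handle 3 to swap STDOUT and STDERR, so that only STDERR reaches the Perl filter while normal output stays on the console:

Make Release 3>&1 1>&2 2>&3 | c:\cygwin\bin\perl ../tools/armfilt.pl

Read left to right: 3 becomes a copy of 1 (the pipe), 1 becomes a copy of 2 (the console), and 2 becomes a copy of 3 (the pipe again), so the two streams end up swapped.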

Related

How to pass stderr to a command stream, then back to the terminal?

I'm using bash, but perhaps most shells behave similarly in this regard. If not, then my question pertains to bash.
There's a regularly used command that always issues a spurious error message (to stderr), but MAY sometimes issue error messages that are important. I figured I could pipe stderr to grep, then use the -v option to filter out the offending line that's otherwise noise. Whatever passes through the filter on stderr should go right back to the original destination (presumably the user's terminal). How do I do this?
(Getting the source and editing it to make a custom version that doesn't spit out that error is obviously possible but out of the question for practical reasons.)
Send grep's output back to stderr:
thecommand 2> >(grep -v 'something' >&2)
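A self-contained way to convince yourself this does what you want (the noisy function below is just a stand-in for the real command):

# stand-in for the real command: one spurious line and one real error, both on stderr
noisy() { echo "spurious: ignore me" >&2; echo "real problem" >&2; }
noisy 2> >(grep -v 'spurious' >&2)
# only "real problem" comes back out on the terminal's stderr; stdout is untouched

The 2> >(...) process substitution sends the command's stderr into grep, and the trailing >&2 pushes whatever grep keeps back onto stderr.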

Shell IO redirection order, pipe version

I have seen this question:
Shell redirection i/o order.
But I have another question. If this line fails to redirect stderr to file:
ls -xy 2>&1 1>file
Then why can this line redirect stderr to grep?
ls -xy 2>&1 | grep ls
I want to know how it is actually being run underneath.
It is said that 2>&1 redirects stderr to a copy of stdout. What does "a copy of stdout" mean? What is actually being copied?
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Do the other redirections go through the OS as well (I don't think so, as the terminal can handle this itself)?
The pipe redirection (connecting the standard output of one command to the stdin of the next) happens before the redirections performed by the command.
That means that by the time 2>&1 happens, the stdout of ls is already set up to connect to the stdin of grep.
See the man page of bash:
Pipelines
The standard output of command is connected via a pipe to the standard input of command2. This connection is performed before any redirections specified by the command (see REDIRECTION below). If |& is used, command's standard error, in addition to its standard output, is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
(emphasis mine).
Whereas in the former case (ls -xy 2>&1 1>file), nothing like that happens: when 2>&1 is performed, the stdout of ls is still connected to the terminal (and has not yet been redirected to the file).
That answers my first question. What about the others?
Well, your second question has already been answered in the comments. (What is being duplicated is a file descriptor).
As to your last question(s),
The terminal registers itself (through the OS) for sending and receiving through the standard streams of the processes it creates, right? Do the other redirections go through the OS as well (I don't think so, as the terminal can handle this itself)?
it is the shell which attaches the standard streams of the processes it creates (pipes first, then <>’s, as you have just learned). In the default case, it attaches them to its own streams, which might be attached to a tty, with which you can interact in a number of ways, usually a terminal emulation window, or a serial console, whatever. Terminal is a very ambiguous word.
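A quick way to see the ordering in action at a prompt (ls -xy is just any command that prints an error; the file name is arbitrary):

ls -xy 2>&1 1>file      # 2>&1 copies the terminal, then stdout moves to file: the error stays on screen
ls -xy 1>file 2>&1      # stdout moves to file first, then stderr copies it: the error lands in file
ls -xy 2>&1 | grep ls   # the pipe is wired up before 2>&1, so the copy is the pipe: grep sees the error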

fastest command, which accepts pipe, but generates no text on StdOut & StdErr

Which command in a Windows command script (.cmd) accepts a pipe (so that no "The process tried to write to a nonexistent pipe." error is generated), but generates no output itself, including output to StdErr? I need to leave the normal StdErr output untouched (keep in mind that the pipe transports only StdOut). I can't use the null device, because it is not installed on the system.
For example, command|rem generates the mentioned error. But I want no error output except what is generated by command itself, so rem is not a suitable command for this purpose.
The main aim is script speed, so please don't suggest elaborate constructions.
It should be break (or set /p= ?), as it is an internal command prompt command (i.e. no external process is started) and it generally does nothing. Though I suppose if you are searching for an executable shipped with Windows, the answer will be different.
The cd . command is what you are looking for. I used it to create empty files. This way, command | cd . works, and echo Hello >&2 | cd . shows "Hello" on the screen! It works as a filter that simply blocks StdOut (interesting).

Bash script - run process & send to background if good, or else

I need to start up a Golang web server and leave it running in the background from a bash script. If the script in question is syntactically correct (as it will be most of the time), this is simply a matter of issuing a
go run /path/to/index.go &
However, I have to allow for the possibility that index.go is somehow erroneous. I should explain that in Golang this can happen for something as "trivial" as importing a package that you then fail to use. In that case the go run /path/to/index.go bit will return an error message. In the terminal this would be something along the lines of
index.go:4:10: expected...
What I need to be able to do is somehow change the command above so that I can funnel any error messages into a file for examination at a later stage. I tried variants of go run /path/to/index.go >> errors.txt with the terminating & in different positions, but to no avail.
I suspect that there is a bash way to do this by altering the priority of evaluation of the command via some judiciously used braces/brackets etc. However, that is way beyond my bash capabilities. I would be most obliged to anyone who might be able to help.
Update
A few minutes later... After a few more experiments I have found that this works
go run /path/to/index.go &> errors.txt &
Quite apart from the fact that I don't actually understand why it works, there remains the issue that it produces a 0-byte errors.txt file when the command runs to completion without Golang throwing up any error messages. Can someone shed light on what is going on and how it might be improved?
Taken from man bash.
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word.
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
Appending Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be appended to the file whose name is the expansion of word.
The format for appending standard output and standard error is:
&>>word
This is semantically equivalent to
>>word 2>&1
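Restating the excerpt with the command from the question (paths as in the original), these pairs behave identically:

go run /path/to/index.go &> errors.txt &       # truncate errors.txt, capture both streams, background the job
go run /path/to/index.go > errors.txt 2>&1 &
go run /path/to/index.go &>> errors.txt &      # same, but append instead of truncating
go run /path/to/index.go >> errors.txt 2>&1 &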
Narūnas K's answer covers why the &> redirection works.
The file is created anyway because the shell creates (or truncates) the file before it even runs the command in question.
You can see this by trying no-such-command > file.out and observing that, even though the shell reports an error because no-such-command doesn't exist, the file still gets created (using &> on that test will put the shell's error message into the file).
This is why you can't do things like sed 'pattern' file > file to edit a file in place.
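If the empty errors.txt bothers you, one workable pattern (a sketch, assuming the server itself doesn't routinely log to stderr and that a couple of seconds is enough for go run to fail at compile time) is to capture only stderr and then test whether anything actually landed in the file:

go run /path/to/index.go 2> errors.txt &     # stdout still goes to the terminal; only stderr is captured
sleep 2                                      # give the compile step a moment to fail, if it is going to
if [ -s errors.txt ]; then                   # -s: true when the file exists and is non-empty
    echo "go run reported problems, see errors.txt:" >&2
    cat errors.txt >&2
fi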

Bash stderr and stdout redirection failing

I have a Fortran program whose output I want to redirect to a file. I've done this before and use
$myprog.out>>out.txt 2>&1
and for some reason this is not working. I tested it with another simple test program
$myprog.test>>out.txt 2>&1
and it works
I run myprog.out and the output goes to the screen as usual, but redirecting it seems to fail! It was working and now seems to have stopped working. It's very strange. I commented out a few print statements that I no longer wanted, recompiled, and then the redirect stopped working.
There is clearly something different going on with my output, but how do I diagnose where it is going?
You probably need to flush your output; see for example this SO topic. How to do that depends on your compiler, I guess, because only the Fortran 2003 standard includes the FLUSH statement and the ability to determine the unit numbers that correspond to stdout/stderr.
However, in gfortran (for example) you can use the flush() intrinsic procedure with the equivalents of the Unix file descriptors: UNIT=5 for stdin, UNIT=6 for stdout and UNIT=0 for stderr.
PROGRAM main
  PRINT *, "Hello!"
  CALL flush(6)   ! flush stdout (preconnected as unit 6 in gfortran)
  CALL flush(0)   ! flush stderr (unit 0 in gfortran)
END PROGRAM main
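For completeness, one way to try it out (file and executable names assumed, gfortran as in the answer):

gfortran main.f90 -o myprog.out      # build the test program above
./myprog.out >> out.txt 2>&1         # same redirection as in the question; the flushed output should show up in out.txt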
With >> you are appending the output of your program to out.txt every time you run it.
Can you just try scrolling to the end of out.txt and see if your output is there?
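In the same spirit, a quick check from the shell (out.txt as above):

wc -c out.txt        # is the file growing between runs?
tail -n 20 out.txt   # what is actually at the end of it?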
