How to detect if there was an error in ghostscript - ghostscript

I have a ghostscript command; when the input file is bad, ghostscript fails and prints an error message.
So far so good. I have a simple script that applies the command to multiple files and tells me how many of them failed.
But ghostscript prints its errors on stdout, and nothing on stderr.
While searching, I found the -sstdout flag, but with it everything goes to stderr and nothing to stdout.
Is there a way to simply and programmatically tell whether ghostscript encountered an error?
(An ugly workaround would be to search for 'error' in stdout, but that's just plain bad.)
Is there a way to tell ghostscript to use stdout and stderr the way they are supposed to be used, i.e. with standard and error output kept separate?

I've found another workaround: since I don't need stdout, I added the flags -q -sstdout=%stderr.
-q suppresses all non-error messages (in my case),
-sstdout=%stderr redirects stdout to stderr, which in my case means the error messages.
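For completeness, here is a minimal sketch of how that workaround could drive a batch check over many files. The gs invocation below (device, options, file pattern) is an assumption and will likely need adjusting; the idea is simply that with -q -sstdout=%stderr, any stderr output signals a failure.

failed=0
for f in *.ps; do
    # -q suppresses routine messages; -sstdout=%stderr sends what remains to stderr
    gs -q -sstdout=%stderr -dBATCH -dNOPAUSE -sDEVICE=nullpage "$f" 2> "$f.err"
    if [ -s "$f.err" ]; then        # -s: file exists and is non-empty, i.e. gs complained
        echo "error processing $f"
        failed=$((failed + 1))
    fi
done
echo "$failed file(s) failed"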

Related

How to pass stderr to a command stream, then back to the terminal?

I'm using bash, but perhaps most shells behave similarly in this regard. If not, then my question pertains to bash.
There's a regularly used command that always issues a spurious error message (to stderr), but MAY sometimes issue error messages that are important. I figured I could pipe stderr to grep, then use the -v option to filter out the offending line that's otherwise noise. Whatever passes through the filter on stderr should go right back to the original destination (presumably the user's terminal). How do I do this?
(Getting the source and editing it to make a custom version that doesn't spit out that error is obviously possible but out of the question for practical reasons.)
Redirect stderr into grep, then send grep's output back to stderr:
thecommand 2> >(grep -v 'something' >&2)
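A quick way to convince yourself the filter behaves as intended, using printf as a stand-in for the real command (the 'something' pattern and the messages are made up for the test):

{ printf 'real error\nsomething spurious\n' >&2; } 2> >(grep -v 'something' >&2)
# only "real error" should come back out on the terminal's stderr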

Bash script - run process & send to background if good, or else

I need to start up a Golang web server and leave it running in the background from a bash script. If the script in question is syntactically correct (as it will be most of the time), this is simply a matter of issuing a
go run /path/to/index.go &
However, I have to allow for the possibility that index.go is somehow erroneous. I should explain that in Golang this can happen for something as "trivial" as importing a package that you then fail to use. In that case the go run /path/to/index.go bit will return an error message. In the terminal this would be something along the lines of
index.go:4:10: expected...
What I need to be able to do is to somehow change that command above so I can funnel any error messages into a file for examination at a later stage. I tried variants on go run /path/to/index.go >> errors.txt with the terminating & in different positions but to no avail.
I suspect that there is a bash way to do this by altering the priority of evaluation of the command via some judiciously used braces/brackets etc. However, that is way beyond my bash capabilities. I would be most obliged to anyone who might be able to help.
Update
A few minutes later... After a few more experiments I have found that this works
go run /path/to/index.go &> errors.txt &
Quite apart from the fact that I don't in fact understand why it works, there remains the issue that it produces a 0-byte errors.txt file when the command runs to completion without Golang throwing up any error messages. Can someone shed light on what is going on and how it might be improved?
Taken from man bash.
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word.
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
Appending Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be appended to the file whose name is the expansion of word.
The format for appending standard output and standard error is:
&>>word
This is semantically equivalent to
>>word 2>&1
Narūnas K's answer covers why the &> redirection works.
The file is created anyway because the shell opens it before it even runs the command in question.
You can see this by trying no-such-command > file.out and observing that, even though the shell errors because no-such-command doesn't exist, the file still gets created (using &> on that test will put the shell's error in the file).
This is why you can't do things like sed 'pattern' file > file to edit a file in place.
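Building on that, one way to improve the original command is to capture only stderr (so the file holds nothing but errors) and then test whether anything was actually written, since the shell creates the file either way. This is just a sketch; the file name and the placement of the check are assumptions:

go run /path/to/index.go 2> errors.txt &
# ... later, once the compile step has had a chance to fail ...
if [ -s errors.txt ]; then       # -s: true only if the file is non-empty
    echo "go run reported errors; see errors.txt"
fi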

Redirection of standard error & output streams is postponed

I need to redirect the output & error streams from one Windows process (GNU make.exe executing the armcc toolchain) to a filter written in Perl. The command I am running is:
Make Release 2>&1 | c:\cygwin\bin\perl ../tools/armfilt.pl
The compilation process emits some output which should then be written to STDOUT after some modifications. But I ran into a problem: all output generated by make is actually held back until the end of the make process and only then shown to the user. So, my questions are:
Why does this happen? I have tried changing the second process's (perl.exe) priority from "Normal" to "Above normal", but it didn't help...
How can I overcome this problem?
I think one possible workaround may be to send only STDERR to the Perl filter (which is what I actually need), not STDOUT+STDERR. But I don't know how to do that on Windows.
The Microsoft explanation concerning pipe operator usage says:
The pipe operator (|) takes the output (by default, STDOUT) of one
command and directs it into the input (by default, STDIN) of another
command.
But how to change this default STDOUT piping is not explained. Is it possible at all?
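One commonly cited cmd.exe idiom for piping only STDERR is to swap the handles before the pipe. It relies only on cmd.exe's numbered handles and >& duplication, but I have not verified it against this exact toolchain, so treat it as a sketch:

REM STDOUT stays on the console; STDERR goes down the pipe to the filter
Make Release 3>&1 1>&2 2>&3 | c:\cygwin\bin\perl ../tools/armfilt.pl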

Error log of make command in Linux

I am compiling a kernel module and it has many compilation errors in it. After running "make", the errors thrown out are too many to fit in the screen. Scrolling up doesn't reach the first error. I tried capturing the errors by doing make &2 > log which didn't work (log file was empty and the error messages were still dumped on screen).
Can someone please tell me how to go about logging all the messages generated during compilation/make into a logfile?
If you want to watch it scroll past, too:
make 2>&1 | tee log
(/bin/sh, bash and related) This sends the standard error to the same place as the standard output, then pipes them through tee to capture the result and still get screen action.
Try doing:
make >&log
the & after the > tells the shell to dump both stdout and stderr to the log. This can also be used with pipes.
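If only the errors are of interest (and the normal output should stay on screen), redirecting just stderr works too; this is the same stderr-only idea used elsewhere on this page, with an illustrative log file name:

make 2> errors.log     # normal output still scrolls past; only errors land in the file
less errors.log        # review the captured errors afterwards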

'app --help' should go to stdout or stderr?

I think stdout, so you can easily grep, what do you think?
Only errors go to stderr. This is in no way an error; it does exactly what the user had in mind, which is to print usage information.
Always stdout, makes it easier to pipe to less, grep it etc.
If you are showing the help text because there was a problem with parsing the command line arguments, then you might use stderr.
Well, it's an explicit request for help so it's output. If for some reason you can't output the help or the user mis-spells "help" then, by all means, send that to error :-)
Users that know what they're doing can use the infamous "2>&1" if they want errors on standard output.
It's not an error, so I'd say stdout....
netcat is the only application I can think of that would redirect -h to stderr, and I can't for the life of me fathom why.
I suppose if you're outputting the help information because someone used improper arguments, you might want to redirect it to stderr, but personally even then I wouldn't use stderr, because I don't think spamming error logs with full-blown help text is useful - I'd rather just send a single error to stderr pointing out that the arguments were malformed. If someone is explicitly calling your application with -h or --help, then you really shouldn't redirect it to stderr.
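As a small illustration of the convention most answers here agree on, a bash sketch (all names are made up): print the full help on stdout when it is explicitly requested, and send only a short diagnostic to stderr when the arguments are bad.

usage() {
    echo "usage: myapp [-h|--help] FILE"
}

case "$1" in
    -h|--help)
        usage                                  # explicit request for help: goes to stdout
        exit 0
        ;;
    -*)
        echo "myapp: unknown option: $1" >&2   # short error to stderr, not the full help
        exit 1
        ;;
esac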
