Conceal a segmentation fault while keeping following commands aware of this incident - bash

This question asked how to conceal a segmentation fault in a bash script, and #yellowantphil provided a solution: pipe the output anywhere.
Now I am looking through plenty of repositories handed in from my students. I need to check whether source codes in each repository could be compiled, and if so, whether the executable could work properly.
And I've observed that some of their executables end in failure with the output 'segmentation fault'. Since I want to hide most details in my script, I prefer not to show any of this annoying output (and thus I found the question mentioned above). However, I still need to be aware when that happens (so I can skip a loop iteration). What should I do now?
A minimum reproduction of this problem:
Create any executable that causes 'segmentation fault'
Place it in a Bash script:
#!/bin/bash
./segfaultgen >/dev/null 2>&1 | :
echo $?
With that | : (mentioned in #yellowantphil's answer), the following echo shows the output 0, which does not tell the truth. However, error messages appear if | : is commented out. I've also tried appending || echo 1 before the | :. That doesn't work either :(

By default, a pipeline's exit status is that of its last (rightmost) command, so a failure on the left-hand side is hidden. Enable pipefail so the pipeline fails if any command in it fails.
(It's a good option in general. I enable it by default in all of my scripts.)
#!/bin/bash
set -o pipefail
./segfaultgen &>/dev/null | :
echo $?
Also, since you're using bash, &>/dev/null is shorter.
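Applied to the grading loop from the question, a minimal sketch (the repos/*/ layout and the a.out binary name are hypothetical placeholders for your students' repositories and executables):

```shell
#!/bin/bash
set -o pipefail

for repo in repos/*/; do              # placeholder layout
    # Run quietly; | : also hides the shell's "Segmentation fault" message,
    # and pipefail lets the binary's real exit status survive the pipe.
    "$repo/a.out" &>/dev/null | :
    if [ $? -ne 0 ]; then
        echo "$repo: crashed or failed, skipping"
        continue
    fi
    echo "$repo: ok"
done
```

A process killed by SIGSEGV reports exit status 139 (128 + signal 11), so the `-ne 0` test catches segfaults as well as ordinary failures.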

Related

How to ignore failure of command called through command builtin?

I have a shell script (running on macOS with GNU bash, version 3.2.57(1)-release) where I set -e at the beginning, but I also want to ignore some of the potential failures, so that they don't end the execution of the script. I'm doing that by appending || ... to the relevant commands:
#!/bin/sh
set -e
false || echo ignore failure
The above works and outputs ignore failure, as expected.
However, if I call the false command through the command builtin, this strategy doesn't work -- the following version of the script exits as soon as false fails, without printing anything:
#!/bin/sh
set -e
command false || echo ignore failure
Why is that? How can I get the desired behavior of ignoring the failure even in the second case?
(In this simplified example, I could of course just delete the command builtin, but in my actual use case, it's part of a function that I don't control.)
Why does command false || echo fail?
Seems like this is a bug in bash versions below 4.0.
I downloaded the old versions 3.2.57 and 4.0, compiled them on Linux, and ran your script. I could reproduce your problem in 3.2.57. In 4.0 everything worked as expected.
Strangely, I couldn't find a corresponding note in bash's list of changes, but if you search for set -e you find multiple other bugfixes regarding the behavior of set -e in other versions, for instance:
This document details the changes between this version, bash-4.4-rc1, and
the previous version, bash-4.4-beta.
[...]
o. Fixed a bug that caused set -e to be honored in cases of builtins invoking other builtins when it should be ignored.
How to fix the problem?
The best way would be to use a more recent version of bash. Even on macOS this shouldn't be a problem: you can compile it yourself or install it via something like Homebrew.
Other than that, you can use workarounds like leaving out command, or adding a subshell: ( command false; ) || echo ignore failure (courtesy of Nate Eldredge). In either case, things get quite cumbersome. Since you don't know exactly when the bug strikes, you can't be sure you've correctly worked around it every time.
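For reference, a sketch of the subshell workaround (on a fixed bash both variants behave the same; on an affected 3.2, only the subshell form prints the message):

```shell
#!/bin/sh
set -e
# Wrapping the command in a subshell gives set -e a plain compound
# command to evaluate, sidestepping the builtin-invokes-builtin bug:
( command false; ) || echo ignore failure
echo script continues
```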

Can I stop later parts of a pipeline from running if an earlier part failed?

I have a piped command such as:
set -euxo pipefail
echo 'hello' | foo | touch example.sh
This is the output:
+ set -euxo pipefail
+ echo hello
+ foo
+ touch example.sh
pipefail.sh: line 4: foo: command not found
I thought set -e would cause the script to exit however. But even though foo is unrecognized, the script is still executing the touch command. How do I get it to exit if foo fails?
You can't really think of a pipeline as having "earlier" or "later" parts, except insofar as data moves through it from one end to the other: all parts of a pipeline run at the same time.
Consequently, you can't prevent later parts from starting if an earlier part failed, because the later part started at the same time the earlier part did.
The above being said, there are mechanisms to allow a pipeline to shut down early in the event of a failure -- mechanisms which work the same way without needing to set any non-default shell flags at all:
If you're using a tool designed to be used on the right-hand side of a pipeline (unlike touch), it will be reading from stdin -- and will thus see an early EOF should the components to the left of it fail.
If you're using a tool designed to be used on the left-hand side of a pipeline, it will receive a SIGPIPE when it attempts to write if the thing to the right of it is no longer running.
Of course, these mechanisms don't work if you're piping from a program that doesn't write to stdout, or into a program that doesn't read from stdin -- but such programs don't make much sense to use in pipelines anyhow.
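The SIGPIPE mechanism is easy to observe with two standard tools: `yes` writes lines forever, and `head` exits after reading one, at which point `yes` is killed the next time it writes (a sketch):

```shell
#!/bin/bash
set -o pipefail
yes | head -n 1              # prints one "y"; yes then dies of SIGPIPE
echo "pipeline status: $?"   # 141 = 128 + 13 (SIGPIPE), visible thanks to pipefail
```

Without pipefail the same pipeline reports status 0, since `head` (the last command) succeeded.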

Bash: How to create a test mode that displays commands instead of executing them

I have a bash script that executes a series of commands, some involving redirection. See cyrus-mark-ham-spam.
I want the script to have a test mode, where all the commands run are printed instead of executing them. As you can see, I have tried to do that by just putting "echo" on the front of each command in test mode.
Unfortunately this doesn't deal with redirection - any redirections are still done, so the program leaves lots of temp files littered about the place when run in test mode.
I have tried various ways to get round this, like quoting the whole command and passing it to a function that either prints it or runs it, but either the redirections work in test mode, or they don't work in run mode.
I thought this must have come up before, and wonder if there is a known solution which does not involve every command being repeated with an if TEST round the pair?
Please note, this is NOT a duplicate of show commands without executing them because neither that question, nor its answers, covers redirection (which is the essence of this question).
I see that it is not a duplicate, but there is no general solution to this. You need to look at each command separately.
As long as the command doesn't use arguments containing spaces, like
cmd -a -b -c > filename
you can quote the whole thing:
echo 'cmd -a -b -c > filename'
But real-life code is more complex, of course.
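One common pattern along those lines is a wrapper that either prints or `eval`s the quoted command. The `run` helper and `TEST_MODE` variable below are hypothetical names, and `eval` carries the usual quoting caveats, but it does defer redirections until run mode:

```shell
#!/bin/bash
TEST_MODE=1    # set to 0 (or unset) to actually execute

run() {
    if [ "$TEST_MODE" = 1 ]; then
        printf '%s\n' "$1"   # test mode: show the command, redirections and all
    else
        eval "$1"            # run mode: redirections take effect here
    fi
}

run 'cmd -a -b -c > filename'
```

Because the command is passed as a single quoted string, the `> filename` is inert in test mode, so no stray temp files are created.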

use "!" to execute commands with same parameter in a script

In a shell, I run following commands without problem,
ls -al
!ls
the second invocation of ls also lists files with the -al flag. However, when I put the above in a bash script, a complaint is thrown:
!ls: command not found
How can I achieve the same effect in a script?
You would need to turn on both command history and !-style history expansion in your script (both are off by default in non-interactive shells):
set -o history
set -o histexpand
The expanded command is also echoed to standard error, just like in an interactive shell. You can prevent that by turning on the histverify shell option (shopt -s histverify), but in a non-interactive shell, that seems to make the history expansion a no-op.
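A minimal script using those two options (a sketch; the expansion of !ls is echoed to stderr as noted above):

```shell
#!/bin/bash
set -o history     # record commands in the history list
set -o histexpand  # enable !-style expansion (normally interactive-only)
ls -al
!ls                # expands to the previous "ls -al"
```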
Well, I wanted to have this working as well, and I have to tell everybody that the set -o history ; set -o histexpand method will not work in bash 4.x. It's not meant to be used there, anyway, since there are better ways to accomplish this.
First of all, a rather trivial example, just wanting to execute history in a script:
(bash 4.x or higher ONLY)
#!/bin/bash -i
history
Short answer: it works!!
The spanking new -i option stands for interactive, and history will work. But for what purpose?
Quoting Michael H.'s comment from the OP:
"Although you can enable this, this is bad programming practice. It will make your scripts (...) hard to understand. There is a reason it is disabled by default. Why do you want to do this?"
Yes, why? What is the deeper sense of this?
Well, THERE IS, which I'm going to demonstrate in the follow-up section.
My history buffer has grown HUGE, while some of those lines are script one-liners, which I really would not want to retype every time. But sometimes, I also want to alter these lines a little, because I probably want to give a third parameter, whereas I had only needed two in total before.
So here's an ideal way of using the bash 4.0+ feature to invoke history:
$ history
(...)
<lots of lines>
(...)
1234 while IFS='whatever' read [[ $whatever -lt max ]]; do ... ; done < <(workfile.fil)
<25 more lines>
So 1234 from history is exactly the line we want. Surely, we could take the mouse and move there, chucking the whole line in the primary buffer? But we're on *NIX, so why can't we make our life a bit easier?
This is why I wrote the little script below. Again, this is for bash 4.0+ ONLY (but might be adapted for bash 3.x and older with the aforementioned set -o ... stuff...)
#!/bin/bash -i
[[ $1 == "" ]] || history | grep "^\s*$1" |
awk '{for (i=2; i<=NF; i++) printf $i" "}' | tr '\n' '\0' | xsel
If you save this as xselauto.sh for example, you may invoke
$ ./xselauto.sh 1234
and the contents of history line #1234 will be in your primary buffer, ready for re-use!
Now if anyone still says "this has no purpose AFAICS" or "who'd ever be needing this feature?" - OK, I won't care. But I would no longer want to live without this feature, as I'm just too lazy to retype complex lines every time. And I wouldn't want to touch the mouse for each marked line from history either, TBH. This is what xsel was written for.
BTW, the tr part of the pipe is a dirty hack which will prevent the command from being executed. For "dangerous" commands, it is extremely important to always leave the user a way to look before he/she hits the Enter key to execute it. You may omit it, but ... you have been warned.
P.S. This scriptlet is in fact a workaround, simulating !1234 typed on a bash shell. As I could never make the ! work directly in a script (echo would never let me reveal the contents of history line 1234), I worked around the problem by simply grepping for the line I wanted to copy.
History expansion is part of the interactive command-line editing features of a shell, not part of the scripting language. It's not generally available in the context of a script, only when interacting with a (pseudo-)human operator. (pseudo meaning that it can be made to work with things like expect or other keystroke repeating automation tools that generally try to play act a human, not implying that any particular operator might be sub-human or anything).

Log invoked commands of make

Is there a way to log the commands make invokes to compile a program? I know of the parameters -n and -p, but they either just print out if-conditions without resolving them, or they don't work when the Makefile contains recursive calls to make itself.
This
make SHELL="sh -x -e"
will cause the shell (which make invokes to evaluate shell constructs) to print information about what it's doing, letting you see how any conditionals in shell commands are being evaluated.
The -e is necessary to ensure that errors in a Makefile target will be properly detected and a non-zero process exit code will be returned.
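The effect is easy to see in isolation: with -x, the shell prints each command it actually executes to stderr with a "+" prefix, so conditional branches in a recipe become visible (a sketch; the MYDEBUG variable is a hypothetical stand-in for whatever your recipe tests):

```shell
# make runs each recipe line via $(SHELL); with -x that shell traces
# every command it actually executes, so you see which branch ran.
sh -x -e -c 'if [ -n "$MYDEBUG" ]; then echo debug build; else echo release build; fi'
# stdout: release build
# stderr includes: + echo release build
```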
You could try to log execve calls with strace
strace -f -e execve make ...
Make writes each command it executes to the console, so
make 2>&1 | tee build.log
will create a log file named build.log as a side effect which contains the same stuff written to the screen. (man tee for more details.)
2>&1 combines standard output and errors into one stream. If you didn't include that, regular output would go into the log file but errors would only go to the console. (make only writes to stderr when a command returns an error code.)
If you want to suppress output entirely in favor of logging to a file, it's even simpler:
make > build.log 2>&1
Because these just capture console output they work just fine with recursive make.
You might find what you're looking for in the annotated build logs produced by SparkBuild. That includes the commands of every rule executed in the build, whether or not "@" was used to prevent make from printing the command line.
Your comment about if-conditions is a bit confusing though: are you talking about shell constructs, or make constructs? If you mean shell constructs, I don't think there's any way for you to get exactly what you're after except by using strace as others described. If you mean make constructs, then the output you see is the result of the resolved conditional expression.
Have you tried with the -d parameter (debug)?
Note that you can control the amount of information with --debug instead. For instance, --debug=a (same as -d), or --debug=b to show only basic information...
