How to write bash script to test if program crashes? - bash

I have a program x which sometimes crashes on certain input files.
How do I write a bash script that returns the following?
0 if the program x terminates fine or runs for longer than 1/20th of a second
1 if the program x segfaults
Note that the program will either segfault or run forever, so I need to stop it somehow from within the script. Can you show me how, please?
Thank you for any ideas

Most programs return 0 when they terminate correctly and a non-zero status when they do not. That information can be gleaned from the bash variable $?. So, after you run the program, check whether $? is 0. If it is, the program ran successfully. Otherwise, there was a problem.
This is, of course, assuming that the program is following proper conventions.
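For the original question, one way to combine that status check with a time limit is the minimal sketch below. It assumes GNU coreutils' timeout is available, that x takes the input file as its first argument, and that a segfault shows up as exit status 139 (128 plus SIGSEGV's signal number 11), while a run killed after the 1/20 s limit shows up as timeout's own status 124:

#!/usr/bin/env bash
# Run x on the given input file, but kill it after 1/20th of a second.
timeout 0.05 ./x "$1"
status=$?

if [ "$status" -eq 139 ]; then
    # 139 = 128 + 11 (SIGSEGV): the program segfaulted
    exit 1
else
    # Normal termination, or killed by timeout (status 124) after 1/20 s.
    exit 0
fi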

echo $? should let you know whether or not the program succeeded.
http://www.devshed.com/c/a/BrainDump/Executing-Commands-with-bash/1/

Related

start bash with last exit code of 0

When I start a new bash shell, if I run the command echo $? as the first thing, I get 1. How can I run bash with the "default" exit code being 0?
Context: I am running msys2 in a terminal window in VS Code. If I start msys2, and then realize I didn't need a shell now and just type exit, bash exits with 1, causing VS Code to pop up an annoying warning.
Most likely something in your profile is failing and setting the status code to 1. Since status codes are overwritten by each process that runs, it'll probably be something towards the end.
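Two things that may help, assuming the failing command lives in one of the usual startup files (~/.bash_profile or ~/.bashrc): trace the startup to see which command exits non-zero, and as a workaround end ~/.bashrc with a command that always succeeds:

# Trace the startup files; the last traced command before exit is the one
# leaving a non-zero status behind.
bash --login -i -x -c true 2>&1 | less

# Workaround: append `true` as the last line of ~/.bashrc so that a fresh
# shell starts with $? = 0.
echo 'true' >> ~/.bashrc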

BASH: error in C program that runs in subshell breaks the main shell

I have a bash script that runs a list of small programs, mostly written in C and Python. Since the programs themselves are NOT bug-free and might crash or run into an infinite loop, the bash script runs each program in a subshell so it won't break the main shell. Here is what it looks like:
#!/usr/bin/env bash
set -e
for py_p in "${py_program_list[@]}"; do
    (python "$py_p") || echo "terminated!"
done
for c_p in "${c_program_list[@]}"; do
    ("$c_p") || echo "terminated!"
done
The problem is: when looping over the Python programs, the bash script is not affected by any error in a Python program, which is what I expect. However, the bash script exits immediately if any C program exits with an error.
UPDATE:
I am using BASH 3.2 in OSX 10.9.5
UPDATE 2:
Updated the question to make it clearer, sorry for the confusion. The problem is with the C programs: the Python part confirms that an error in a subshell won't affect the main shell, but the C programs break that rule.
The Python scripts are fine: whether I press Ctrl + C or they crash for some other reason, they won't stop the main shell from running, which is what I expect. But the C programs are different: pressing Ctrl + C while a C program is running exits the bash script.
Python handles the interrupt signal itself (outputting Traceback …KeyboardInterrupt) and then terminates normally, returning the exit status 1 to bash.
Your C programs evidently don't handle the signal, so the default action is taken, to terminate the process; bash is informed that the program was terminated by signal SIGINT.
Now bash behaves differently depending on the kind of the child program's termination (normal or signaled): In the first case, it continues execution with || echo "terminated!", in the second case, it terminates itself, as you observed.
You can change that behavior by trapping the signal in your script, e.g. by inserting
trap "echo interrupted" INT
somewhere before the for c_p loop.
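Put together with the loop from the question, a minimal sketch might look like this:

trap "echo interrupted" INT
for c_p in "${c_program_list[@]}"; do
    ("$c_p") || echo "terminated!"
done

With the trap in place, bash no longer terminates itself when a child is killed by SIGINT, so the loop keeps running just as it does for the Python programs.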
Everything depends on the Python programs' exit status. Maybe they return the same value regardless of whether their execution was successful or not. So, basically, you cannot rely on their exit status.

fish shell evaluate make return code

I'm trying to write a script in Fish that runs a Make recipe and then executes all of the resultant binaries. The problem I'm having is that I would like to have the script exit with an error code if the make command encounters an error. Whenever I try to capture Make's return value, I end up with its output log instead.
For example:
if test (make allUnitTests) -eq 0
    echo "success"
else
    echo "fail"
end
returns an error because test sees the build output, not the exit status.
I wrote the script so that I could easily make Jenkins run all my unit tests whenever I trigger a build. Since I haven't been able to get the above section of the script working correctly, I've instead instructed Jenkins to run the make command as a separate command, which does exactly what I want: halting the entire build process without executing any binaries if anything fails to compile. Thus, at this point my question is more of an academic exercise, but I would like to add building the unit test binaries into the script (and have it cleanly terminate on a build error) for the benefit of any humans who might check out the code and would like to run the unit tests.
I played a little with something like:
if test (count (make allUnitTests | grep "Stop")) -eq 0
but this has two problems:
I'm apparently piping stdout when I need to pipe stderr. (Come to think of it, if I could just check to see if anything was written to stderr, then I wouldn't need grep at all.)
Grep is swallowing all the log data piped to it, which I really want to be visible on the console.
You are misunderstanding the parentheses - these run a command substitution. What this does is capture the output of the process running in the substitution, which it will then use as arguments (separated by newlines by default) to the process outside.
This means your test will receive the full output of make.
What you instead want to do is just run if make allUnitTests without any parens, since you are just interested in the return value.
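A minimal sketch of that form, reusing the success/fail reporting from the question:

if make allUnitTests
    echo "success"
else
    echo "fail"
end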
If you would like to do something between running make and checking its return value, the "$status" variable always contains the return value of the last command, so you can save that:
make allUnitTests
set -l makestatus $status
# Do something else
if test $makestatus -eq 0
    # Do the if-thing
else
    # Do the else-thing
end

Using exec to relaunch crashing binary

Sorry, the question is pretty vague, but I hope someone still can help.
As I understand the exec bash command, it replaces the code segment with what is specified by its argument; practically, it replaces the running script with something else.
But I am pretty sure I saw people using exec (not fork) in a loop to relaunch an executable if it crashes or just exits with a non-zero exit code. Unfortunately I can't find that piece of code now. Is that at all possible, or am I imagining things?
I don't know specifically what you saw, but there are conceivable ways of using exec in a loop to launch and relaunch a process, e.g.
while true
do
    ( unset DISPLAY && exec ./myfile )
done
The ( .. ) here is an explicit subshell, so there is a fork even if it's not obvious.
Other conceivable reasons for putting exec in a loop include trying to exec different files or different paths, until you find one that works or the file is created or becomes available.
However, there is no way to successfully exec a process without any kind of implicit or explicit fork, and then loop around to exec itself again (unless the script ends up execing itself in a recursive way).
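A sketch of the "try different paths" idea mentioned above (the candidate paths and the myfile name are only placeholders):

# Try a list of candidate locations; the first one that is executable
# replaces this script via exec, so the loop never runs past it.
for candidate in ./myfile /usr/local/bin/myfile; do
    [ -x "$candidate" ] && exec "$candidate"
done
echo "no runnable myfile found" >&2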
This is actually a more common problem than you'd think.
In the past, I've always implemented a bash script to monitor whether the process is there, and if it's not, restart it.
Here are some solutions that could work for you:
https://serverfault.com/questions/52976/simple-way-of-restarting-crashed-processes
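A minimal sketch of such a monitor loop (the program name myprogram and the 5-second poll interval are placeholders; it assumes pgrep is available):

while true; do
    # Restart the program if it is no longer running.
    if ! pgrep -x myprogram > /dev/null; then
        ./myprogram &
    fi
    sleep 5
done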

Exiting a program in ruby

I'm writing some code in ruby, and I want to test for the presence of a command before the launch of the program. If the command isn't installed, I want to display an error message and quit the program. So right now, I'm doing this.
puts `type -P spark &>/dev/null && continue || { echo "You must install spark"; exit 0; } `
So, everything works fine, BUT the "exit 0" isn't working, and I can't figure out why.
Do you have any idea to fix this? Or even better, is there another way to do it?
The reason you're not exiting your script is that the call to exit is within the backticks. It's exiting the subshell called to run spark, but that's not the process interpreting your ruby script.
You could check the contents of the $? variable, which returns a Process::Status for the backtick command after it has been run.
As Daniel Pittman has suggested, however, it would be easier to check that the executable was available using something like FileTest. However, you probably want to couple that with a test of the return value, in case some other, more complex, failure occurs.
The much better way to do that is:
ENV["PATH"].split(':').any? {|x| FileTest.executable? "#{x}/spark" }
Season to taste for getting the full path, or using File.join to build the path, or platform path separators, or whatever.
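For example, a small helper along those lines might look like the following; command_available? is just an illustrative name, and File::PATH_SEPARATOR handles the platform path separator mentioned above:

# Check every PATH entry for an executable with the given name.
def command_available?(cmd)
  ENV["PATH"].split(File::PATH_SEPARATOR).any? do |dir|
    File.executable?(File.join(dir, cmd))
  end
end

abort "You must install spark" unless command_available?("spark")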
