I'm writing some code in ruby, and I want to test for the presence of a command before the launch of the program. If the command isn't installed, I want to display an error message and quit the program. So right now, I'm doing this.
puts `type -P spark &>/dev/null && continue || { echo "You must install spark"; exit 0; } `
So, everything works fine, BUT, the "exit 0" isn't, and I can't figure out why.
Do you have any idea to fix this? Or even better, is there another way to do it?
The reason you're not exiting your script is that the call to exit is within the backticks. It's exiting the subshell called to run spark, but that's not the process interpreting your ruby script.
You could check the contents of the $? variable, which holds a Process::Status object for the backtick command after it has run.
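For example, a minimal sketch of that check (using command -v instead of the bash-only type -P, since Ruby backticks go through /bin/sh):

`command -v spark`
unless $?.success?
  warn "You must install spark"
  exit 1
end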
As Daniel Pittman has suggested, however, it would be easier to check that the executable was available using something like FileTest. However, you probably want to couple that with a test of the return value, in case some other, more complex, failure occurs.
A much better way to do that is:
ENV["PATH"].split(':').any? {|x| FileTest.executable? "#{x}/spark" }
Season to taste for getting the full path, or using File.join to build the path, or platform path separators, or whatever.
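Putting those pieces together, here is one hedged sketch (the helper name command_available? is just for illustration; File.join and File::PATH_SEPARATOR cover the path-building and platform-separator concerns mentioned above):

def command_available?(cmd)
  ENV["PATH"].split(File::PATH_SEPARATOR).any? do |dir|
    FileTest.executable?(File.join(dir, cmd))
  end
end

unless command_available?("spark")
  warn "You must install spark"
  exit 1
end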
This question is motivated by Jenkins jobs and their Execute shell build step. Jenkins by default calls sh with the -x switch, which results in echoing the commands it executes. This is definitely good and desired behaviour. However, it would be very nice to also be able to print messages nicely, in addition to having set -x in effect. An example follows:
If there is echo Next we fix a SNAFU in the build step script, console output of the build will have
+ echo Next we fix a SNAFU
Next we fix a SNAFU
It would be much nicer to have just single line,
Next we fix a SNAFU
How can this be achieved? Ideally the solution would be a general sh solution, but a Jenkins-specific solution is fine too. The solution should also look reasonably nice in the shell script source, as the dual purpose of the echoes is to both document the script and make the output clearer.
So just surrounding every echo like that with
set +x
echo Message
set -x
is not very good, because it will print + set +x before every message, and it also takes up 3 lines in the script.
set +x
<command>
set -x
This will disable the printing of <command>.
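If the + set +x line that this still prints bothers you (as noted in the question), one variation that is sometimes used, in Bash at least, is to redirect the stderr of the set command itself, since the trace output goes to stderr. A sketch:

{ set +x; } 2>/dev/null
echo Message
set -x

With this, only Message appears in the output.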
I found two solutions, neither is ideal but both are kind of ok.
I like this first one better. Instead of having echo Message, have
true Message
It will display
+ true Message
This works because true ignores its command-line arguments (mostly). The downsides are the + true clutter before the message, and the possibly confusing use of the true command for others who read the script later. Also, if command echoing is turned off, this will not display the message at all.
Another way is to do echo like this:
( set +x;echo Message )
which will print
+ set +x
Message
This works because commands in () are executed in a subshell, so changes like set don't affect the parent shell. The downsides are that it makes the script a bit ugly and more tedious to write, and there's an extra line of output. It also spawns an extra subshell, which might affect build performance slightly (especially if building under Cygwin on Windows). On the positive side, it will print the message even if command echoing is off, and the intention is probably immediately clear to anyone who knows shell scripting.
I'm trying to write a script in Fish that runs a Make recipe and then executes all of the resultant binaries. The problem I'm having is that I would like to have the script exit with an error code if the make command encounters an error. Whenever I try to capture Make's return value, I end up with its output log instead.
For example:
if test (make allUnitTests) -eq 0
    echo "success"
else
    echo "fail"
end
returns an error because "test" is seeing the output of the build process, not its exit status.
I wrote the script so that I could easily make Jenkins run all my unit tests whenever I trigger a build. Since I haven't been able to get the above section of the script working correctly, I've instead instructed Jenkins to run the make command as a separate command, which does exactly what I want: halting the entire build process without executing any binaries if anything fails to compile. Thus, at this point my question is more of an academic exercise, but I would like to add building the unit test binaries into the script (and have it cleanly terminate on a build error) for the benefit of any humans who might check out the code and would like to run the unit tests.
I played a little with something like:
if test (count (make allUnitTests | grep "Stop")) -eq 0
but this has two problems:
I'm apparently piping stdout when I need to pipe stderr. (Come to think of it, if I could just check to see if anything was written to stderr, then I wouldn't need grep at all.)
Grep is swallowing all the log data piped to it, which I really want to be visible on the console.
You are misunderstanding the parentheses - these run a command substitution. What this does is capture the output of the process running in the substitution, which it will then use as arguments (separated by newlines by default) to the process outside.
This means your test will receive the full output of make.
What you instead want to do is just run if make allUnitTests without any parens, since you are just interested in the return value.
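For example, a minimal sketch of the original snippet with the command substitution removed:

if make allUnitTests
    echo "success"
else
    echo "fail"
end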
If you would like to do something between running make and checking its return value, the "$status" variable always contains the return value of the last command, so you can save that:
make allUnitTests
set -l makestatus $status
# Do something else
if test $makestatus -eq 0
    # Do the if-thing
else
    # Do the else-thing
end
Sorry, the question is pretty vague, but I hope someone can still help.
As I understand the exec bash command, it replaces the current process image with whatever is specified by its argument; practically, it replaces the running script with something else.
But I am pretty sure I have seen people use exec (not fork) in a loop to relaunch an executable if it crashes or just exits with a non-zero exit code. Unfortunately I can't find that piece of code now. Is it at all possible, or am I imagining things?
I don't know specifically what you saw, but there are conceivable ways of using exec in a loop to launch and relaunch a process, e.g.
while true
do
( unset DISPLAY && exec ./myfile )
done
The ( .. ) here is an explicit subshell, so there is a fork even if it's not obvious.
Other conceivable reasons for putting exec in a loop include trying to exec different files or different paths, until you find one that works or the file is created or becomes available.
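A sketch of that idea (the candidate paths here are purely illustrative):

for candidate in ./myfile /usr/local/bin/myfile; do
    if [ -x "$candidate" ]; then
        exec "$candidate"   # on success this replaces the shell, so the loop never continues
    fi
done
echo "no executable candidate found" >&2
exit 1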
However, there is no way to successfully exec a process without any kind of implicit or explicit fork, and then loop around to exec itself again (unless the script ends up execing itself in a recursive way).
This is actually a more common problem than you'd think.
In the past, I've always implemented a bash script to monitor whether the process is there and, if it's not, restart it.
Here are some solutions that could work for you:
https://serverfault.com/questions/52976/simple-way-of-restarting-crashed-processes
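For reference, such a monitor typically looks something like this (a rough sketch only, not the code from the linked answer; the program name and poll interval are illustrative):

while true; do
    if ! pgrep -x myprog >/dev/null; then
        ./myprog &    # relaunch it if it is no longer running
    fi
    sleep 5
done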
I don't think that running a process in the foreground is in any way useful, so I'd like to run all processes in the background. Is that possible?
Also tell me if there is any problem associated with doing so.
You can adapt the code from this question: https://superuser.com/questions/175799/does-bash-have-a-hook-that-is-run-before-executing-a-command
Basically this uses the DEBUG trap to run a command before whatever you've typed on the command line. So, this:
preexec () { :; }

preexec_invoke_exec () {
    [ -n "$COMP_LINE" ] && return  # do nothing if completing
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return  # don't cause a preexec for $PROMPT_COMMAND
    local this_command=$(HISTTIMEFORMAT= history 1)
    preexec "$this_command" &
}

trap 'preexec_invoke_exec' DEBUG
Runs the command, but with & afterwards, backgrounding the process.
Note that this will have other rather weird effects on your terminal, and anything supposed to run in the foreground (command line browsers, mail readers, interactive commands, anything requiring input, etc.) will have issues.
You can try this out by just typing bash, which will execute another shell. Paste the above code, and if things start getting weird, just exit out of the shell and things will reset.
Do you mean a bash script? Just add & at the end. Example:
$ ./myscript &
While it might be possible to do something clever like what's suggested by #pgl, it's not a good idea. Processes running in the background don't show you their output in a useful way. So if all processes are automatically sent to the background, your terminal will be flooded with their various standard output and standard error messages, you will have no way of knowing what came from what, your terminal will be next to useless, and confusion will ensue.
So yes, there is a very good reason to keep processes in the foreground: to see what they're doing and to be able to control them easily. To give an even more concrete example, any program that requires you to interact with it can't be run in the background. This includes things that ask Continue [Y/N]? or things like sudo that ask for your password. If you just blindly make everything run in the background, such commands will simply hang silently.
I'm writing some scripts in Ruby, and I need to interface with some non-Ruby code via shell commands. I know there are at least 6 different ways of executing shell commands from Ruby; unfortunately, none of these seems to stop execution when a shell command fails.
Basically, I'm looking for something that does the equivalent of:
set -o errexit
...in a Bash script. Ideally, the solution would raise an exception when the command fails (i.e., by checking for a non-zero return value), maybe with stderr as a message. This wouldn't be too hard to write, but it seems like this should exist already. Is there an option that I'm just not finding?
Ruby 2.6 adds an exception: argument:
system('ctat nonexistent.txt', exception: true) # Errno::ENOENT (No such file or directory - ctat)
The easiest way would be to create a new function (or redefine an existing one) that calls system() and checks the error code.
Something like:
# Wrap Kernel#system so a failing command raises instead of returning false.
module Kernel
  alias_method :old_system, :system
  def system(*args)
    result = old_system(*args)
    raise "command failed: #{args.join(' ')}" unless $?.success?
    result
  end
end
This should do what you want.
You can use one of Ruby's special variables, $? (analogous to the same shell-script variable).
`ls`
if $? == 0
# Ok to go
else
# Not ok
end
Almost every program exits with status 0 if everything went fine, and that's what this variable will hold.
Tiny bit simpler: you don't need to check $? with system, and since the command you ran will write its own errors to stderr, you can usually just exit non-zero rather than raising an exception with an ugly stack trace:
system("<command>") || exit(1)
So you could take that a step further and do:
(system("<command 1>") &&
system("<command 2>") &&
system("<command 3>")) || exit(1)
...which would short-circuit and fail on error (in addition to being hard to read).
Ref: From Ruby 2.0 doc for system (although true of 1.8.7 as well):
system returns true if the command gives zero exit status, false for non zero exit status.
http://www.ruby-doc.org/core-2.0.0/Kernel.html#method-i-system
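If you do want the exception-with-stderr behaviour the question asks about, here is a hedged sketch using the standard library's Open3 (the helper name run! is just for illustration):

require 'open3'

def run!(cmd)
  stdout, stderr, status = Open3.capture3(cmd)
  raise "#{cmd} failed: #{stderr}" unless status.success?
  stdout
end

run!("ls /nonexistent")   # raises RuntimeError with ls's error message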