Both of these work. The first is shorter, but I'm guessing the second is safer or better, since it runs the command inside $().
If so, could someone explain why the second is safer, better, etc.?
if my_function "something" ; then ...
if $(my_function "something"); then ...
The first is usually what you want. It executes the function, checks its exit status, and runs the then section if the status indicates the function succeeded.
The second (with $( )) runs the function, captures whatever your function prints to stdout, and then tries to execute that output as another command. If it doesn't output anything, nothing gets run, so the then section gets executed if your function succeeds (just like the first version). On the other hand, if it does output anything, that gets run as another command, and the then section gets run if that succeeds. If what it printed wasn't a valid command, that counts as failure.
So, the question is: does your function print a command you want executed? If so, use the second form. If not, use the first.
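As a toy illustration of the difference (both function names here are invented):

returns_status() { [ -e "$1" ]; }   # succeeds or fails; prints nothing
prints_command() { echo "true"; }   # prints a command on stdout

if returns_status /etc/passwd; then   # first form: tests the exit status
    echo "file exists"
fi

if $(prints_command); then   # second form: executes the printed "true"
    echo "the printed command succeeded"
fi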
I've built a little command interpreter (in C++) which can be invoked either directly, or in a script via shebang (#!). It can take arguments on the command line (which appear as argc/argv in my code).
Trouble is, when invoked via shebang, the script itself gets passed to my program as argument 1. That's problematic; I don't want my command interpreter trying to process the script that it was invoked from. But I can't see any easy way to tell when this is the case.
EDIT: As an example, if I have a script called "test" which starts with #!/usr/local/bin/miniscript, and then invoke it as ./test --help -c -foo, I get five arguments in my C code: /usr/local/bin/miniscript, ./test, --help, -c, and -foo. If I invoke it directly, then I get four arguments: /usr/local/bin/miniscript, --help, -c, and -foo.
How can I tell when my program was invoked via a shebang, or otherwise know to skip the argument that represents the script it was invoked by?
My question was based on a wrong assumption. I believed that two things were happening when a program was invoked via shebang:
Path to that program was passed as the first argument.
Contents of that program were piped to stdin.
So I was essentially worried about processing the content twice. But only item 1 is true; item 2 does not happen (as pointed out by helpful commenters on my question). So if the C code accepts the name of a file to process as a first argument, and ignores any initial line starting with a shebang, then all is right with the world.
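For instance, here is a minimal shell sketch of that approach (the logic is illustrative; the asker's real interpreter is C++):

#!/bin/sh
# treat $1 as the file to process, skipping an initial #! line
file=$1
{
    IFS= read -r first
    case $first in
        '#!'*) ;;                      # shebang line: ignore it
        *) printf '%s\n' "$first" ;;   # ordinary first line: keep it
    esac
    cat    # "process" the rest of the file (here: just print it)
} < "$file"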
I'm trying to write a script in Fish that runs a Make recipe and then executes all of the resultant binaries. The problem I'm having is that I would like to have the script exit with an error code if the make command encounters an error. Whenever I try to capture Make's return value, I end up with its output log instead.
For example:
if test (make allUnitTests) -eq 0
    echo "success"
else
    echo "fail"
end
returns an error because "test" is seeing the build output, not the exit status.
I wrote the script so that I could easily make Jenkins run all my unit tests whenever I trigger a build. Since I haven't been able to get the above section of the script working correctly, I've instead instructed Jenkins to run the make command as a separate command, which does exactly what I want: halting the entire build process without executing any binaries if anything fails to compile. Thus, at this point my question is more of an academic exercise, but I would like to add building the unit test binaries into the script (and have it cleanly terminate on a build error) for the benefit of any humans who might check out the code and would like to run the unit tests.
I played a little with something like:
if test (count (make allUnitTests | grep "Stop")) -eq 0
but this has two problems:
I'm apparently piping stdout when I need to pipe stderr. (Come to think of it, if I could just check to see if anything was written to stderr, then I wouldn't need grep at all.)
Grep is swallowing all the log data piped to it, which I really want to be visible on the console.
You are misunderstanding the parentheses - these run a command substitution. What this does is capture the output of the process running in the substitution, which it will then use as arguments (separated by newlines by default) to the process outside.
This means your test will receive the full output of make.
What you instead want to do is just run if make allUnitTests without any parens, since you are just interested in the return value.
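In the asker's terms, that is simply:

if make allUnitTests
    echo "success"
else
    echo "fail"
end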
If you would like to do something between running make and checking its return value, the "$status" variable always contains the return value of the last command, so you can save that:
make allUnitTests
set -l makestatus $status
# Do something else
if test $makestatus -eq 0
    # Do the if-thing
else
    # Do the else-thing
end
I have a simple bash script that, so far, just reads each line of a file and prints it. Simple enough:
while read i
do
echo $i
#otherViewDef=`grep -i $currentView $viewssqlfile`
done <$viewsdeffile
This script works as expected, unless the commented line is uncommented. If that is the case, the loop exits after echoing the first line of the file. I understand that this should not work, as both currentView and viewssqlfile are unset, but what is the justification for this behavior, as opposed to reporting an error and returning a non-zero status?
I think there's something different; this can't be the actual script, because the errors would be different. Assuming $currentView is set but $viewssqlfile is not, the assignment executes
grep -i $currentView
which reads from stdin, which means it greps the contents of $viewsdeffile. It finds no matches, so prints nothing. After that, the read i has nothing to read, returns false, and the loop exits.
In other words, if the controlling read of a loop reads from a redirected stdin, make sure no program in the loop body attempts to read from stdin as well; they all share the same stdin.
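For example, one way to apply that here is to feed the loop from a separate file descriptor, so the loop body keeps the normal stdin (assuming the two variables are set as intended):

while read -r i <&3
do
    echo "$i"
    otherViewDef=$(grep -i "$currentView" "$viewssqlfile")
done 3< "$viewsdeffile"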
Placing set -x near the top is likely to provide some insight.
I'm using a bash script to automatically run a simulation program. This program periodically prints the current status of the simulation in the console, like "Iteration step 42 ended normally".
Is it possible to abort the script if the console output contains something like "warning: parameter xyz outside range of validity"?
And what can I do if the console output is piped to a text file?
Sorry if this sounds stupid, I'm new to this :-)
Thanks in advance
This isn't an ideal job for Bash. However, you can certainly capture and test STDOUT inside a Bash iteration loop using an admixture of conditionals, grep-like tools, and command substitution.
On the other hand, if Bash isn't doing the looping (e.g. it's just waiting for an external command to finish) then you need to use something like expect. Expect is purpose-built to monitor output streams for regular expressions, and perform branching based on expression matches.
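If Bash is driving the loop, a minimal sketch looks like this (./simulate and the exact warning text are placeholders; the tee also covers the log-file part of the question):

./simulate 2>&1 | tee sim.log | while IFS= read -r line; do
    printf '%s\n' "$line"    # keep the output visible on the console
    case $line in
        *'warning: parameter'*'outside range of validity'*)
            echo 'aborting: simulation reported an invalid parameter' >&2
            exit 1    # exits the subshell running this loop
            ;;
    esac
done || exit 1    # propagate the abort to the script itself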
I wrote a script that retrieves the currently running command using $BASH_COMMAND. The script basically does some logic to figure out the current command and file being opened for each tmux session. Everything works great, except when the user runs a piped command (e.g. cat file | less), in which case $BASH_COMMAND only seems to store the first command before the pipe. As a result, instead of showing the command as less[file] (which is the actual program that has the file open), the script outputs it as cat[file].
One alternative I tried is relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this as well. First, it does not auto-expand aliases the way $BASH_COMMAND does, which in some cases could confuse the script: for example, if I tell it to ignore ls but use ll instead (mapped to ls -l), the script will not ignore the command and processes it anyway. Including extra conditionals for each alias doesn't seem like a clean solution. The second problem is that I'm using HISTIGNORE to filter out some common commands that I still want the script to be aware of; using history would make the script miss the last command unless history tracks it.
I also tried using ${#PIPESTATUS[@]} to see if the array length is 1 (no pipes) or higher (pipes used, in which case I would retrieve the history instead), but it seems to always be aware of only one command as well.
Is anyone aware of other alternatives that could work for me (such as another variable that would store $BASH_COMMAND for the other subcalls that are to be executed after the current subcall is complete, or some way to be aware if the pipe was used in the last command)?
I think you will need to change your implementation a bit and use the history command to get it to work. Also, use the alias command to check all of the configured aliases, and the which command to check whether a command is actually stored in any PATH directory. Good luck.
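As a sketch of the alias part: bash exposes the alias table as BASH_ALIASES, so a script that has sourced the user's aliases could expand a leading alias itself (the helper name and the command line passed to it are hypothetical):

# expand a leading alias in a command line, if one is defined
expand_alias() {
    local cmd=$1 first rest
    first=${cmd%% *}
    if [ -n "${BASH_ALIASES[$first]-}" ]; then
        rest=${cmd#"$first"}
        printf '%s%s\n' "${BASH_ALIASES[$first]}" "$rest"
    else
        printf '%s\n' "$cmd"
    fi
}

expand_alias 'll -a'   # with ll mapped to 'ls -l', prints: ls -l -a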