How can I check if a program exists within a fish script?
I know there is no foolproof solution in Bash, but using if type PROGRAM >/dev/null 2>&1; then ... gave good results.
Is there something similar with fish?
There is type -q, as in
if type -q $program
    # do stuff
end
which returns 0 if something is a function, builtin, or external program (i.e. if it is something fish will execute).
There is also command -sq, which returns 0 only if it is an external program.
For both of these, the -q flag silences all output. For command, the -s flag makes it merely look the command up instead of executing it directly.
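For example, a minimal fish sketch (rsync is just a stand-in for whatever program you are checking for):
if command -sq rsync
    # an external rsync binary exists, so it is safe to call
    rsync --version
else
    echo "rsync is not installed"
end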
In the same script, I want to use some csh commands and some bash commands.
Invoking one after the other gives me problems, even though I am following the correct syntax for each shell. I want to know where the mistake in my code is.
Your suggestions are appreciated!
I am a beginner at shell scripting, especially csh, but the code I was given is written entirely in csh. Since I have some familiarity with csh, I wanted to tweak the existing csh code by adding bash commands, which I am more comfortable with. When I tried running bash commands after the csh ones by inserting #!/bin/bash, I got errors. I want to know if I am missing something.
#!/bin/csh
----
----
----
#!/bin/bash
dir2in="/nethome/achandra/NCEI/CCSM4_Historical/Forecasts"
filin2 ="ccsm4_0_cfsrr_Fcst.${ENS}.cam2.h1.${yyear[${iimonth}]}-${mmon[${iimonth}]}-${ssday}-00000.nc"
cp $dirin/$filin /nethome/achandra/NCEI/CCSM4_Historical_Forecasts/
ln -s /nethome/achandra/NCEI/CCSM4_Historical/Forecasts/$filin /nethome/achandra/NCEI/CCSM4_Historical_Forecasts/"${$filin%.nc.cdo}.nc"
#!/bin/csh
I am getting errors such as
"dirin: Undefined variable."
You are asking here for "embedding one language into another", which, as @Bayou already explained, is not supported directly. Maybe you were spoiled by the HTML world, where you can squeeze in CSS and JavaScript, and maybe some server-side PHP or Ruby too.
The closest you can get to this is a here-document. If, inside your bash script, you write
csh <<CSH_END
your ...
csh ....
commands ...
go here ...
CSH_END
those commands are executed in a child process run by csh. It works the same way in the other direction, embedding bash in a csh script. Make sure that the terminating delimiter (CSH_END in my example) starts in column 1.
Whether this will work for your application I can't say, because things that ran in the same process in your original script now run in different processes.
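For concreteness, here is a minimal, runnable sketch of the idea (the variable and messages are invented for illustration). Quoting the delimiter ('CSH_END') keeps bash from expanding $name before csh ever sees it:
#!/bin/bash
echo "this line runs in bash"
csh <<'CSH_END'
# everything down to CSH_END runs in a child csh process
set name = "world"
echo "hello $name from csh"
CSH_END
echo "back in bash again"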
You can't mix them up the way you're suggesting. It's like asking "can I use PHP code in a Python script?". However, most shells have an option (-c) to run commands passed as a string, just as csh does. To use Bash within an sh script:
#! /bin/sh
CONDITION=$(/bin/bash -c "[[ 1 > 2 ]] || echo no")
echo $CONDITION
exit 0
Otherwise you could create separate files and execute them.
#! /bin/sh
CONDITION=$(./bash-script.sh)
echo $CONDITION
exit 0
In your case you would, of course, use csh instead of sh as the outer script; a sketch of that follows the output below. Both of my scripts will output the following text.
$ ./test.sh
no
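For the csh case from the question, the same idea might look like this (an untested sketch; csh quoting inside backquotes can be finicky):
#!/bin/csh
# capture the output of a bash one-liner via command substitution
set condition = `/bin/bash -c '[[ 1 > 2 ]] || echo no'`
echo $condition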
I have a C program which uses argv[0] inside the program. I understand that argv[0] is the path of the program being executed. I want to pass a custom string as argv[0] to the program instead of its program name. Is there a way to do this in shell?
I read about the exec command, but I am unsure about its usage. help exec says I have to pass exec -a <string>.
Is there any other way of doing this?
Is there any escaping I need to apply if I pass special characters, or the path of another file, using exec?
To clarify the problem:
I am running a program, prog1. To enter a particular section of the program I have to send it a SIGALRM. This step itself was difficult, as I had to create a race condition to deliver the signal right when the program starts.
while true;do ./prog1 2; done & while true; do killall -14 prog1; done
The while loops above get me into that part of the program, and that part uses argv[0] in a system call of the form system("echo something " argv[0]).
Is there a way to modify the above while loop to put ; /bin/myprogram in place of argv[0]?
Bottom line: I need /bin/myprogram to be executed with the privileges of prog1, and I need its output.
exec -a is precisely the way to solve this problem.
There are no restrictions that I know of on the string passed as an argument to exec. Normal shell quoting should be sufficient to pass anything you want (as long as it doesn't contain embedded NUL bytes, of course).
The problem with exec is that it replaces the current shell with the named command. If you just want to run a command, you need to spawn a new shell to be replaced; that is as simple as surrounding the command with parentheses:
$ ( exec -a '; /bin/myprogram' bash -c 'echo "$0"'; )
; /bin/myprogram
The brute-force method would be to create your own symlink and run the command that way.
ln -s /path/to/mycommand /tmp/newname
/tmp/newname arg1
rm /tmp/newname
The main problem with this is finding a secure, race-condition-free way to create the symlink that guarantees you run the command you intend to, which is why bash adds a non-standard -a extension to exec so that you don't need such file-system-based workarounds.
Typically, though, commands restrict their behavioral changes to a small, fixed set of possible names. This means that any such links can be created when the program is first installed, and don't need to be created on the fly. In this scenario, there is no need for exec -a, since all possible "virtual" executables already exist.
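For illustration, a program that adapts to its invocation name typically branches on argv[0], which in a shell script is $0 (a sketch with made-up names; multi-call binaries such as busybox use the same pattern):
#!/bin/bash
# Behave differently depending on the name (or symlink) used to invoke us.
case "$(basename "$0")" in
    compress)   echo "running in compress mode" ;;
    decompress) echo "running in decompress mode" ;;
    *)          echo "unknown invocation name: $0" >&2; exit 1 ;;
esac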
Is there a generic way in a bash script to "try" something but continue if it fails? The analogue in other languages would be wrapping it in a try/catch and ignoring the exception.
Specifically I am trying to source an optional satellite script file:
. $OPTIONAL_PATH
But when executing this, if $OPTIONAL_PATH doesn't exist, the whole script screeches to a halt.
I realize I could check to see if the file exists before sourcing it, but I'm curious if there is a generic reusable mechanism I can use that will ignore the error without halting.
Update: Apparently this is not normal behavior. I'm not sure why this is happening. I'm not explicitly calling set -e anywhere ($- is hB), yet it halts on the error. Here is the output I see:
./script.sh: line 36: projects/mobile.sh: No such file or directory
I added an echo "test" immediately after the source line, but it never prints, so it's not anything after that line that is exiting. I am running Mac OS 10.9.
Update 2: Never mind, it was indeed shebanged as #!/bin/sh instead of #!/bin/bash. Thanks for the informative answer, Kaz.
Failed commands do not abort the script unless you explicitly configure that mode with set -e.
With regard to Bash's dot command, things are tricky. If we invoke bash as /bin/sh, it aborts the script when the . command does not find the file. If we invoke bash as /bin/bash, it doesn't fail!
$ cat source.sh
#!/bin/sh
. nonexistent
echo here
$ ./source.sh
./source.sh: 3: .: nonexistent: not found
$ ed source.sh
35
1s/sh/bash/
wq
37
$ ./source.sh
./source.sh: line 3: nonexistent: No such file or directory
here
It does respond to set -e; if we have #!/bin/bash, and use set -e, then the echo is not reached. So one solution is to invoke bash this way.
If you want to keep the script maximally portable, it looks like you have to do the test.
The behavior of the dot command aborting the script is required by POSIX; see the entry for the dot special built-in in the POSIX Shell Command Language specification. Quote:
If no readable file is found, a non-interactive shell shall abort; an interactive shell shall write a diagnostic message to standard error, but this condition shall not be considered a syntax error.
Arguably, this is the right thing to do, because dot is used for including pieces of the script. How can the script continue when a whole chunk of it has not been found?
Or, arguably, this is brain-damaged behavior, inconsistent with the treatment of other commands, and so Bash makes it consistent in its non-POSIX-conforming mode. If programmers want a command to fail, they can use set -e.
I tend to agree with Bash. The POSIX behavior is actually more broken than initially meets the eye, because this also doesn't work the way you want:
if . nonexistent ; then
    echo loaded
fi
Even if the command is tested, it still aborts the script when it bails.
Thank GNU-deness we have alternative utilities, with source code.
You have several options:
Make sure set -e wasn't used, or turn it off with set +e. Your bash script should not exit by default simply because the . command failed.
Test that the file exists prior to sourcing.
[ -f "$OPTIONAL_PATH" ] && . "$OPTIONAL_PATH"
This option is complicated by the fact that if $OPTIONAL_PATH does not contain any slashes, . will search for the file along your $PATH instead of in the current directory; a workaround is sketched below.
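One way to defeat that PATH search is to force a slash into a bare filename before sourcing it (a sketch):
# Prefix a relative name with ./ so . reads the file itself
# instead of searching $PATH for it.
case "$OPTIONAL_PATH" in
    */*) ;;
    *) OPTIONAL_PATH="./$OPTIONAL_PATH" ;;
esac
[ -f "$OPTIONAL_PATH" ] && . "$OPTIONAL_PATH"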
If you want to keep set -e on, "hide" the failure like this:
. "$OPTIONAL_PATH" || true
Even if the source fails, the exit status of the command list as a whole will be 0, due to the || true.
(Much of this is covered [better] by Kaz's answer, especially the references to the POSIX standard, but I wasn't sure when or if he would undelete his answer.)
This is not the default behavior. Did you set -e or use #!/bin/bash -e anywhere in your script, to make it automatically exit on failure?
If so, you can use
. "$OPTIONAL_PATH" || true
to continue anyway.
I have certain critical bash scripts that are invoked by code I don't control, and where I can't see their console output. I want a complete trace of what these scripts did for later analysis. To do this I want to make each script self-tracing. Here is what I am currently doing:
#!/bin/bash
# if last arg is not '_worker_', relaunch with stdout and stderr
# redirected to my log file...
if [[ "$BASH_ARGV" != "_worker_" ]]; then
$0 "$#" _worker_ >>/some_log_file 2>&1 # add tee if console output wanted
exit $?
fi
# rest of script follows...
Is there a better, cleaner way to do this?
#!/bin/bash
exec >>log_file 2>&1
echo Hello world
date
exec has a magic behavior regarding redirections: “If command is not specified, any redirections take effect in the current shell, and the return status is 0. If there is a redirection error, the return status is 1.”
Also, regarding your original solution, exec "$0" is better than "$0"; exit $?, because the former doesn't leave an extra shell process around until the subprocess exits.
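Putting both points together, the wrapper from the question might be reduced to something like this sketch (keeping its _worker_ sentinel and log path):
#!/bin/bash
# If we are not yet the relaunched worker, re-exec ourselves with
# output redirected; exec means no parent shell process lingers.
if [[ "$BASH_ARGV" != "_worker_" ]]; then
    exec "$0" "$@" _worker_ >>/some_log_file 2>&1
fi
# rest of script follows...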
Maybe you are looking for set -x?
You may also check out an open-source trace library with support for bash:
http://sourceforge.net/projects/utalm/
http://www.unifiedsessionsmanager.org/en/downloads.html
The currently available component is for bash scripting; Python and C++ versions are coming soon, with Ruby, Java, JavaScript, SQL, PowerShell, and more to follow.
The license is Apache-2.0.
I have a program (grabface) that takes a picture of the face of a person using a webcam, and I also have a shell script wrapper that works like this:
On the command line the user gives the script the name of a program to run and its command line arguments. The script then executes the given command and checks the exit code. If there was an error the program grabface is run to capture the surprised face of the user.
This all works quite well. But the problem is that the wrapper script must always be used. Is there some way to automatically run this script whenever a command is entered in the shell? Or is there some other way to automatically run a given program after any program is run?
Preferably the solution should work in bash, but any other shell is also OK. I realize this could be accomplished by simply making some adjustments in the source code of the shell, but that's kind of a last measure.
Something that is probably even trickier would be to extend this to work with programs launched outside of the shell as well (e.g. from a desktop environment) but this may be too difficult.
Edit: Awesome! Since bash was so easy, what about other shells?
In Bash, you can use the trap command with an argument of ERR to execute a specified command whenever an executed command returns non-zero.
$ trap "echo 'there was an error'" ERR
$ touch ./can_touch
$ touch ./asfdsafds/fdsafsdaf/fdsafdsa/fdsafdasfdsa/fdsa
touch: cannot touch `./asfdsafds/fdsafsdaf/fdsafdsa/fdsafdasfdsa/fdsa': No such file or directory
there was an error
trap affects the whole session, so you'll need to make sure that trap is called at the beginning of the session by putting it in .bashrc or .profile.
Other special trap signals that Bash understands are: DEBUG, RETURN and EXIT as well as all the system signals (which can be listed using trap -l).
The Korn shell has a similar facility, while the Z shell has a more extensive trap capability.
By the way, in some cases for the command line, it can be useful in Bash to set the PROMPT_COMMAND variable to a script or command that will be run each time the prompt is issued.
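For example, a minimal sketch for ~/.bashrc (assuming grabface is on the PATH):
# Before each prompt, check the previous command's exit status
# and run grabface if it failed.
PROMPT_COMMAND='last=$?; if [ "$last" -ne 0 ]; then grabface; fi'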
Just substitute your command where I have false.
false || echo "It failed"
If you want to do the opposite, i.e. react when it succeeds, just put your command in place of true:
true && echo "It succeeded"
In the user's .profile, add:
trap grabface ERR
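If grabface accepts an argument (an assumption here), the trap can also pass along the command that failed:
# BASH_COMMAND holds the command that was executing when the
# ERR trap fired (assumes grabface takes an argument to log).
trap 'grabface "$BASH_COMMAND"' ERR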