Running CSH executable in bash throws syntax errors - bash

I need to run the program NACCESS on macOS High Sierra.
The most prominent error is if: Expression Syntax.
This error is probably because the NACCESS executable is a CSH script with #!/bin/csh on its first line.
I changed the first line to #!/bin/bash and began getting errors such as
./naccess: line 14: syntax error near unexpected token `1'
./naccess: line 14: ` exit(1)'
Again, I think this is happening because the executable needs to be executed in a CSH shell, not bash. exit(1) is formatted for the CSH shell, whereas bash would want exit 1, according to this resource.
I tried typing csh into Terminal to switch shells, but an echo $SHELL command just tells me I'm still in bash.
I looked at my shell choices with grep '^#!' /usr/bin/* and cannot find any CSH shells. I attempted to install tcsh with Homebrew, but I really have no idea what I'm doing at this point.
I know CSH shells aren't encouraged, but how do I get a CSH shell up and running on my Mac, or how do I otherwise get NACCESS to run?
[EDIT] Here are the first few lines of the NACCESS code:
(Note that I am running it with the .pdb file argument and also that I have set path_to_naccess_repository appropriately. I do get the usage naccess pdb_file [-p probe_size] [-r vdw_file] [-s stdfile] [-z zslice] -[hwyfaclqb] readout if I run only ./naccess)
#!/bin/csh
set EXE_PATH = /path/to/repo
#naccess_start
set nargs = $#argv
if ( $nargs < 1 ) then
    echo "usage naccess pdb_file [-p probe_size] [-r vdw_file] [-s stdfile] [-z zslice] -[hwyfaclqb]"
    exit(1)
endif
set PDBFILE = 0
set VDWFILE = 0
set STDFILE = 0
set probe = 1.40
set zslice = 0.05
set hets = 0
set wats = 0
set hyds = 0
set full = 0
set asao = 0
set cont = 0
set oldr = 0
set nbac = 0
while ( $#argv )
    switch ($argv[1])
    case -[qQ]:
        echo "Naccess2.1 S.J.Hubbard June 1996"
        echo "Usage: naccess pdb_file [-p probe_size] [-r vdw_file] [-s stdfile] [-z zslice] -[hwyfaclq]"
        echo " "
        echo "Options:"
        echo " -p = probe size (next arg probe size)"
        echo " -z = accuracy (next arg is accuracy)"
        echo " -r = vdw radii file (next arg is filename)"
        echo " -s = standard accessibilities file (next arg is filename)"
        echo " -h = hetatoms ?"
        echo " -w = waters ?"
        echo " -y = hydrogens ?"
        echo " -f = full asa output format ?"
        echo " -a = produce asa file only, no rsa file ?"
        echo " -c = output atomic contact areas in asa file instead of accessible areas"
        echo " -l = old RSA output format (long)"
        echo " -b = consider alpha carbons as backbone not sidechain"
        echo " -q = print the usage line and options list"
        echo " "
        exit
        breaksw
    case -[pP]:
        shift
        set probe = $argv[1]
        breaksw
    case -[zZ]:
        shift
        set zslice = $argv[1]
        breaksw
    case -[hH]:
        set hets = 1
        breaksw
    case -[wW]:
        set wats = 1
        breaksw
    case -[yY]:
        set hyds = 1
        breaksw
    case -[rR]:
        shift
        set VDWFILE = $argv[1]
        breaksw
    case -[sS]:
        shift
        set STDFILE = $argv[1]
        breaksw
    case -[fF]:
        set full = 1
        breaksw
    case -[aA]:
        set asao = 1
        breaksw
    case -[cC]:
        set cont = 1
        breaksw
    case -[lL]:
        set oldr = 1
        breaksw
    case -[bB]:
        set nbac = 1
        breaksw
    default:
        if ( -e $argv[1] && $PDBFILE == 0 ) then
            set PDBFILE = $argv[1]
        endif
        breaksw
    endsw
    shift
end
#
if ( $PDBFILE == 0 ) then
    echo "usage: you must supply a pdb format file"
    exit(1)
endif
[EDIT] I did NOT set the path appropriately >.< So this question is mainly about how to get a csh script running in bash. There are no problems with this csh script.

The SHELL variable simply holds your default login shell; it does not change when you run another shell. echo $0 will show that you are in fact running csh. Your options are either to stay in bash and change the shebang to tcsh:
#!/usr/bin/tcsh -f
or remove the shebang completely and run
csh ./naccess
If you are going for the first option and are not sure where tcsh is, just run
which tcsh
to find the path. As @chepner adds, since the file already comes with a csh shebang, you don't need to touch it; just run it with csh as above.
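For example, either route might look like this (illustrative paths; the my_file.pdb argument is just a placeholder):
which csh     # e.g. /bin/csh on a stock macOS install
which tcsh    # e.g. /usr/local/bin/tcsh if installed via Homebrew
csh ./naccess my_file.pdb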

If the first line of a script starts with #! (called a "shebang"), it specifies how the script is executed when you invoke it as a command. It gives the full path of the interpreter to invoke and, optionally, an argument to pass to the interpreter, along with the script name.
If you have a script called foo whose first line is
#!/bin/csh
then typing ./foo is equivalent to /bin/csh foo. (The file foo has to be executable; use chmod +x if it isn't already.)
This is completely independent of (a) your login shell, (b) the shell you're currently running, and (c) the shell specified by the $SHELL environment variable.
You can invoke a script by explicitly typing the name of the shell used to run it, but the whole point of the #! is that you don't have to do that.
It's possible that /bin/csh doesn't exist on your system, or that it's something other than the C-shell.
tcsh is an updated version of csh. You should be able to use it to execute any csh script.
Find out where csh or tcsh is installed on your system and update the path in the #! line to refer to it. You might need to install either csh or tcsh yourself -- but you said you typed csh and it worked, so that shouldn't be necessary.
From a bash shell, the command type csh will tell you where csh is installed.
Incidentally, the #! line for a csh or tcsh script should include a -f option:
#!/bin/csh -f
This tells the shell not to source the user's startup script ($HOME/.cshrc) when running the script. It saves time and ensures that the script is portable, not dependent on one user's environment. (This does not apply to sh or bash scripts; sh and bash have a -f option, but it has a completely different meaning.)
I tried typing csh into Terminal to switch shells, but an echo $SHELL command just tells me I'm still in bash.
Invoking a shell doesn't change the value of your $SHELL environment variable. It's normally set to the path to your default login shell (but it can be changed).
You can tell if you're running tcsh by typing echo $tcsh or echo $version. (csh doesn't have these variables). You can tell if you're running bash by typing echo $BASH_VERSION.
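For instance, a quick check might look like this (the version numbers are just examples):
$ echo $BASH_VERSION
3.2.57(1)-release
$ tcsh
% echo $tcsh
6.21.00
A non-empty $BASH_VERSION means you are in bash; once inside tcsh, $tcsh is set.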

Related

bash: `set -e` does not work when used in if-expression?

Have a look at this little script:
#!/bin/bash
function do_something() {(
    set -e
    mkdir "/opt/some_folder"  # <== returns 1 -> abort?
    echo "mkdir returned $?"  # <== sets $? to 0 again
    rsync $( readlink -f "${BASH_SOURCE[0]}" ) /opt/some_folder/  # <== returns 23 -> abort?
    echo "rsync returned $?"  # <== sets $? to 0 again
)}
# here every command inside `do_something` will be executed - regardless of errors
echo "run do_something in if-context.."
if ! do_something ; then
    echo "running do_something did not work"
fi
# here `do_something` aborts on first error
echo "run do_something standalone.."
do_something
echo $?
I was trying to do what was suggested here (don't miss the extra parentheses introducing a sub-shell), but I didn't execute the function (do_something in my case) separately; I called it as part of the if-expression.
Now when I run if ! do_something, the set -e command seems to have no effect.
Can someone explain this to me?
This is expected and described in the Bash Reference Manual.
-e
[...]
The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, [...].
[...]
If a compound command or shell function executes in a context where -e is being ignored, none of the commands executed within the compound command or function body will be affected by the -e setting, even if -e is set and a command returns a failure status. If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes.
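A minimal sketch of that rule (using a hypothetical check function, not the one from the question):
#!/bin/bash
check() { set -e; false; echo "reached the end of check"; }

if ! check; then echo "check reported failure"; fi
# prints "reached the end of check"; -e is ignored inside the if test, and check
# returns the status of its last command (0), so the then-branch never runs

check              # here -e takes effect: the shell exits right after `false`
echo "never reached"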
Using a function to change settings and traps overcomes this limitation, at least in Homebrew Bash 5.2.15(1).
If I start with this:
errexit_ignore() {
    set +e
    trap - ERR
}
errexit_fail() {
    set -e
    trap failed ERR
}
errexit_fail
I can later do horrible useful things like this:
for customization in "${customizations[@]}"; do
    log_mark 2 "Checking ${customization} ..."
    errexit_ignore
    diff -qs "${expected}/${customization}" "${tmpdir}/${customization}"
    diff_exitcode="${?}"
    errexit_fail
    if [ "${diff_exitcode}" != "0" ]; then ## If it's not the expected file, the customization may have already been applied
        errexit_ignore
        diff -qs "${tmpdir}/${customization}" "${customized}/${customization}"
        diff_exitcode="${?}"
        errexit_fail
        if [ "${diff_exitcode}" != "0" ]; then ## If it's neither the expected file nor the customized file, either the default has changed or the customization has changed
            errexit_ignore
            git ls-files --error-unmatch "${customized}/${customization}"
            track_exitcode="${?}" ## Detect untracked file
            git diff --exit-code "${customized}/${customization}"
            diff_exitcode="${?}" ## Detect modified tracked file
            errexit_fail
            if [ "${track_exitcode}" != "0" -o "${diff_exitcode}" != "0" ]; then ## If the customization has uncommitted changes, assume the default hasn't changed
                log_mark 1 "Customized ${customization} will be updated"
            else ## If the default has changed, manual review is needed (which may result in an updated customization)
                diff -u "${expected}/${customization}" "${tmpdir}/${customization}" || :
                diff -u "${tmpdir}/${customization}" "${customized}/${customization}" || :
                abort "Default version of ${customization} has changed, expected version must be updated and customization must be checked for compatibility"
            fi
        else
            log_mark 1 "Customized ${customization} already in place"
        fi
    else
        log_mark 1 "Default ${customization} has not changed"
    fi
done
Notes:
After setting a function trap on ERR (trap failed ERR), set +e on its own doesn't work unless you also run trap - ERR
Neither errexit_ignore nor errexit_fail can be defined on a single line (I'm not sure why not)
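The first note can be seen with a minimal sketch (a hypothetical failed handler that only prints, unlike the real one):
failed() { echo "ERR trap fired"; }
trap failed ERR
set +e
false          # still prints "ERR trap fired": set +e alone does not silence the ERR trap
trap - ERR
false          # nothing printed now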

Why does eval exit subshell mid-&& with set -e?

Why does bash do what I'd expect here with a compound command in a subshell:
$ bash -x -c 'set -e; (false && true; echo hi); echo here'
+ set -e
+ false
+ echo hi
hi
+ echo here
here
But NOT do what I'd expect here:
$ bash -x -c 'set -e; (eval "false && true"; echo hi); echo here'
+ set -e
+ eval 'false && true'
++ false
Basically, the difference is between eval-uating a compound command and just executing it. When the shell executes the && list directly, the failing non-final command does not trigger set -e; it merely short-circuits the list. But when eval runs the same list, eval itself is a simple command whose exit status is that of the list (1 here), so set -e treats eval as a failed command and exits the subshell.
I guess I need to format my eval statement like this:
eval "false && true" || :
so that the eval command doesn't exit my subshell with an error, because this works as I'd expect it to:
$ bash -x -c 'set -e; (eval "false && true" || :; echo hi); echo here'
+ set -e
+ false
+ echo hi
hi
+ echo here
here
The problem I have with this is that I've written a function:
function execute() {
    local command="$1"
    local remote="$2"
    if [ ! -z "$remote" ]; then
        $SSH $remote "$command" || :
    else
        eval "$command" || :
    fi
}
I'm using set -e in my script. The same problem occurs with ssh in this function - if the last command in the ssh script is a compound command that terminates early, the entire command terminates with an error. I want commands like this to behave as if they were executing locally - early terminating compound commands should not cause ssh or eval to return 1, failing the entire command. If I tack || : on the end of my eval statement or my ssh statement, then all such commands will succeed, even if they shouldn't because the last command in the eval'd or ssh'd command failed.
Any ideas would be much appreciated.
I should also mention that set -e is terribly error-prone; see http://mywiki.wooledge.org/BashFAQ/105 for a bunch of examples. So the best solution might be to dispense with it, and write your own logic to detect errors and abort.
That out of the way . . .
The problem here is that eval "false && true" is a single command, and evaluates to false (nonzero), so set -e aborts after that command runs.
If you were instead to run eval "false && true; true", you would not see this behavior, because then eval evaluates to true (zero). (Note that, although eval does implement the set -e behavior, it obeys the rule that false && true is non-aborting.)
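Under the same set -e subshell, that variant should trace roughly like this:
$ bash -x -c 'set -e; (eval "false && true; true"; echo hi); echo here'
+ set -e
+ eval 'false && true; true'
++ false
++ true
+ echo hi
hi
+ echo here
here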
This is not actually specific to eval, by the way. A subshell would give the same result, for the same reason:
$ bash -x -c 'set -e; (false && true); echo here'
+ set -e
+ false
The simplest fix for your problem is probably just to run an extra true if the end is reached:
$SSH $remote "set -e; $command; true"
eval "$command; true"
eval counts as its own command with its own exit code.
Since eval "false && true" returns an exit code of 1, it triggers set -e.

How to execute a file that is located in $PATH

I am trying to execute hallo_word.sh, which is stored in ~/bin, from this script stored on my ~/Desktop. I have made both scripts executable, but all the time I get the "Problem" message. Any ideas?
#!/bin/sh
clear
dir="$PATH"
read -p "which file you want to execute" fl
echo ""
for fl in $dir
do
    if [ -x "$fl" ]
    then
        echo "executing=====>"
        ./$fl
    else
        echo "Problem"
    fi
done
This line has two problems:
for fl in $dir
$PATH is colon-separated, but for expects whitespace-separated values. You can change that by setting the IFS variable, which is the field separator the shell uses when splitting unquoted expansions (and which builtins such as read honour).
$fl contains the name of the file you want to execute, but you overwrite its value with the contents of $dir.
Fixed:
#!/bin/sh
clear
read -p "which file you want to execute" file
echo
IFS=:
for dir in $PATH ; do
    if [ -x "$dir/$file" ]
    then
        echo "executing $dir/$file"
        exec "$dir/$file"
    fi
done
echo "Problem"
You could also be lazy and let a subshell handle it.
PATH=(whatever) bash command -v my_command
if [ $? -ne 0 ]; then
    : # Problem, could not be found.
else
    : # No problem
fi
There is no need to over-complicate things.
command(1) is a builtin command that allows you to check if a command exists.
The PATH value contains all the directories in which executable files can be run without explicit qualification. So you can just call the command directly.
#!/bin/sh
clear
# r for raw input, e to use readline, add a space for clarity
read -rep "Which file you want to execute? " fl || exit 1
echo ""
"$fl" || { echo "Problem" ; exit 1 ; }
I quote the name as it could have spaces.
To test if the command exists before execution, use type -p:
#!/bin/sh
clear
# r for raw input, e to use readline, add a space for clarity
read -rep "Which file you want to execute? " fl || exit 1
echo ""
type -p "$fq" >/dev/null || exit 1
"$fl" || { echo "Problem" ; exit 1 ; }

Setting a variable within a bash PS1

I'm trying to customize my bash prompt and I'm having trouble with a few conditionals.
My current PS1 looks like this.
export PS1="\
$PS1USERCOLOR\u\
$COLOR_WHITE#\
$COLOR_GREEN\h\
$COLOR_WHITE:\
$COLOR_YELLOW\W\
\`if type parse_git_branch > /dev/null 2>&1; then parse_git_branch; fi\`\
\`if [ \$? = 0 ]; then echo -e '$COLOR_WHITE'; else echo -e '$COLOR_RED'; fi\`\$\
$COLOR_WHITE"
The first 6 lines just set regular PS1 stuff.
Line 7 then calls a function to display the current git branch and status if applicable.
Line 8 then tests the return code of the previous command and changes the colour of the $ on the end.
Line 9 sets the prompt back to white ready for the user's command.
However line 8 is responding to the return code from line 7's function and not the previous command as I first expected.
I've tried moving line 8 before line 7 and everything works as it should. But I don't want line 8 before line 7; the $ must be on the end.
I've tried setting a variable earlier on to be the value of $? and then testing that variable like so
export PS1="\
\`RETURN=\$?\`\
$PS1USERCOLOR\u\
$COLOR_WHITE#\
$COLOR_GREEN\h\
$COLOR_WHITE:\
$COLOR_YELLOW\W\
\`if type parse_git_branch > /dev/null 2>&1; then parse_git_branch; fi\`\
\`if [ \$RETURN = 0 ]; then echo -e '$COLOR_WHITE'; else echo -e '$COLOR_RED'; fi\`\$\
$COLOR_WHITE"
But this doesn't work.
Does anybody have any idea how to solve my problem?
The proper way is to use PROMPT_COMMAND like so:
prompt_cmd () {
    LAST_STATUS=$?
    PS1="$PS1USERCOLOR\u"
    PS1+="$COLOR_WHITE#"
    PS1+="$COLOR_GREEN\h"
    PS1+="$COLOR_WHITE:"
    PS1+="$COLOR_YELLOW\W"
    if type parse_git_branch > /dev/null 2>&1; then
        PS1+=$(parse_git_branch)
    fi
    if [[ $LAST_STATUS = 0 ]]; then
        PS1+="$COLOR_WHITE"
    else
        PS1+="$COLOR_RED"
    fi
    PS1+='\$'
    PS1+="$COLOR_WHITE"
}
Since PROMPT_COMMAND is evaluated prior to each prompt, you simply execute code that sets PS1 they way you like for each prompt instance, rather than trying to embed deferred logic in the string itself.
A couple of notes:
You must save $? in the first line of the code, before the value you want is overwritten.
I use double quotes for most of the steps, except for \$; you could use PS1+="\\\$" if you like.
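One more note: for prompt_cmd to run at all, PROMPT_COMMAND has to name it, for example:
PROMPT_COMMAND=prompt_cmd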
The standard solution to this problem is to make use of the bash environment variable PROMPT_COMMAND. If you set this variable to the name of a shell function, said function will be executed before each bash prompt is shown. Then, inside said function, you can set up whatever variables you want. Here's how I do almost exactly what you're looking for in my .bashrc:
titlebar_str='\[\e]0;\u#\h: \w\a\]'
time_str='\[\e[0;36m\]\t'
host_str='\[\e[1;32m\]\h'
cwd_str='\[\e[0;33m\]$MYDIR'
git_str='\[\e[1;37m\]`/usr/bin/git branch --no-color 2> /dev/null | /bin/grep -m 1 ^\* | /bin/sed -e "s/\* \(.*\)/ [\1]/"`\[\e[0m\]'
dolr_str='\[\e[0;`[ $lastStatus -eq 0 ] && echo 32 || echo 31`m\]\$ \[\e[0m\]'
export PS1="$titlebar_str$time_str $host_str $cwd_str$git_str$dolr_str"
function prompt_func {
    # Capture the exit status currently in existence so we don't overwrite it with
    # any operations performed here.
    lastStatus=$?
    # ... run some other commands (which will have their own return codes) to set MYDIR
}
export PROMPT_COMMAND=prompt_func
Now bash will run prompt_func before displaying each new prompt. The exit status of the preceding command is captured in lastStatus. Because git_str, dolr_str, etc. are defined with single quotes, the variables (including lastStatus) and commands inside them are then re-evaluated when bash dereferences PS1.
Solved it!
I need to use the PROMPT_COMMAND variable to set the RETURN variable. PROMPT_COMMAND is a command that is evaluated just before each prompt (PS1) is displayed.
My script now looks like
PROMPT_COMMAND='RETURN=$?'
export PS1="\
$PS1USERCOLOR\u\
$COLOR_WHITE#\
$COLOR_GREEN\h\
$COLOR_WHITE:\
$COLOR_YELLOW\W\
\`if type parse_git_branch > /dev/null 2>&1; then parse_git_branch; fi\`\
\`if [[ \$RETURN = 0 ]]; then echo -e '$COLOR_WHITE'; else echo -e '$COLOR_RED'; fi\`\$\
$COLOR_WHITE"

Bash variables expansion (possible use of eval) in for-do loop

I am studying the book "Beginning Linux Programming 4th ed" and chapter 2 is about shell programming. I was impressed by the example on page 53 and tried to develop a script to explore it further. Here is my code:
#!/bin/bash
var1=10
var2=20
var3=30
var4=40

for i in 1 2 3 4    # This works as intended!
do
    x=var$i
    y=$(($x))
    echo $x = $y    # But we can avoid declaring extra parameters x and y, see next line
    printf " %s \n" "var$i = $(($x))"
done

for j in 1 2 3 4    # This has problems!
do
    psword=PS$j
    #eval psval='$'PS$i    # Produces the same output as the next line
    eval psval='$'$psword
    echo '$'$psword = $psval
    #echo "\$$psword = $psval"    # The same as previous line
    #echo $(eval '$'PS${i})    # Futile attempts
    #echo PS$i = $(($PS${i}))
    #echo PS$i = $(($PS{i}))
done
#I can not make it work as I want : the output I expect is
#PS1 = \[\e]0;\u#\h: \w\a\]${debian_chroot:+($debian_chroot)}\u#\h:\w\$
#PS2 = >
#PS3 =
#PS4 = +
How can I get the intended output? When I run it as it is I only get
PS1 =
PS2 =
PS3 =
PS4 = +
What happened to PS1 and PS2?
Why don't I get the same values that I get with
echo $PS1
echo $PS2
echo $PS3
echo $PS4
because that is what I am trying to get.
A shell running a script is a non-interactive shell. You can force the script to run in interactive mode using the -i option:
Try to change:
#!/bin/bash
to:
#!/bin/bash -i
See the INVOCATION section in man bash (bash.bashrc is where your PS1 is defined):
When an interactive shell that is not a login shell is started, bash reads and executes commands from /etc/bash.bashrc and ~/.bashrc, if these files exist. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of /etc/bash.bashrc and ~/.bashrc.
When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the file name.
you can also read: http://tldp.org/LDP/abs/html/intandnonint.html
simple test:
$ cat > test.sh
echo "PS1: $PS1"
$ ./test.sh
PS1:
$ cat > test.sh
#!/bin/bash -i
echo "PS1: $PS1"
$ ./test.sh
PS1: ${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u#\h\[\033[01;34m\] \w \$\[\033[00m\]
Use indirect expansion:
for j in 0 1 2 3 4; do
    psword="PS$j"
    echo "$psword = ${!psword}"
done
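For example, in an interactive shell (where PS1 through PS4 are actually set) this prints something like:
$ psword=PS4
$ echo "$psword = ${!psword}"
PS4 = +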
