Different error handling between RHEL6 and RHEL7 when sourcing files in tcsh

Consider the following files in a tcsh shell:
in ./foo:
echo $UNKNOWN_VARIABLE
in ./bar:
echo ok
in ./baz:
source ./foo ;
source ./bar
If I launch the command...
eval `cat baz`
...on RHEL6, I obtain the error 'UNKNOWN_VARIABLE: Undefined variable.' and the command exits with a non-zero status.
If I launch the same command on RHEL7, I obtain:
UNKNOWN_VARIABLE: Undefined variable.
ok
(terminal killed)
After further investigation of tcsh versions (thanks to Martin for the idea), I noticed a difference in behavior between tcsh 6.17.04 and 6.17.05.
With 6.17.04, tcsh behaves as if it were invoked with the -e option (the command stops at the first error in foo and bar is not sourced); with 6.17.05, bar is sourced (ok is printed).
But there is still a difference between a tcsh >= 6.17.05 that I downloaded and compiled myself and the tcsh binary installed on my machine: the former does not crash the shell at the end, while the latter is killed.
Could the signal handling be different in the 2 cases?
(I also tried to play with the onintr, hup and nohup commands without success)
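(A sketch of how one could experiment a bit more safely, not verified against the exact RHEL7 build: check which tcsh is really running, and run the eval in a throwaway child tcsh so that only the child dies if the shell gets killed.)
# in tcsh, $version holds the version string of the running shell
echo $version
# run the eval in a child tcsh started without startup files (-f);
# single quotes keep the parent shell from expanding the backquotes itself
tcsh -f -c 'eval `cat baz`'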
Notes:
I must use tcsh (I know it is not the best shell)
It seems that replacing ';' with '&&' in baz works fine, but I would prefer not to change this file.
Thanks a lot for any help.

Related

Detect if a script has been sourced in "/bin/sh"

Most answers I found here only seem to work for /bin/bash.
Tricks like $BASH_SOURCE and $SHLVL don't seem to be working with sh.
There was an answer which suggested using return (because it only works within functions and sourced scripts) and checking whether it generated an error, but I didn't understand why executing return on the command line logged me out of the shell. If I executed or sourced a script containing return, it just exited that script. This was happening on FreeBSD; I don't use any desktop environment there.
Simply typing on command line,
return
result: logged out
Executing or sourcing a script containing return:
$ cat testscript
#! /bin/sh
echo hello
return
echo hello
$ ./testscript
hello
$ . testscript
hello
$
This wasn't the case when I did the same on macOS (after executing /bin/sh first). It worked perfectly fine there; it just said
sh: return: can only `return' from a function or sourced script
just as expected.
I am looking for a solution to detect if a script is sourced in case of /bin/sh.
I am using FreeBSD, where my default shell is currently set to sh. I know I can install bash, but I still want to know how I can do the same for /bin/sh.
UPDATE:
I would like to mention a little more detail.
MacOS
On macOS I tried starting /bin/sh from the command line, and I realised later that it is a non-login shell. So, when I typed logout there, the result was:
sh: logout: not login shell: use `exit'
So I made /bin/sh my default shell, and I am sure that /bin/sh was the shell being executed. When I typed return there, the output I got was:
sh: return: can only `return' from a function or sourced script
Again, just as expected. But when I typed echo $SHELL, the output was:
/bin/bash
And I checked the /bin directory of my machine, and /bin/sh and /bin/bash don't seem to be linked.
FreeBSD
Now I tried executing /bin/sh there as well. The results were as follows:
$ /bin/sh
$ return
$ return
logged out on 2nd return
So, put simply, return shows no output when /bin/sh is a non-login shell and simply exits that shell.
@user1934428 gave a good amount of information in the comments on @CharlesDuffy's answer; it's worth a read.
There he mentions that the FreeBSD manual has no documentation for the return statement.
sh man page, FreeBSD
I checked whether the OpenBSD man page has the same issue, but it does define return:
return [n] Exit the current function or . script with exit status n, or that of the last command executed.
sh man page, OpenBSD
One other issue is that most man pages show the bash manual when you ask for man sh. I don't know if it's supposed to be like that or not.
Also, can someone suggest whether I should start a new question for the undefined behaviour of return? I think this question has gone really off-topic, and I'm not sure if it would be a good idea to do so.
#!/bin/sh
if (return 2>/dev/null); then
  echo "We were sourced"
else
  echo "We were executed"
fi
$ sh ./detect-sourcing.sh
We were executed
$ sh -c '. ./detect-sourcing.sh'
We were sourced
I haven't analyzed whether this is strictly required by the POSIX sh standard, but it works with /bin/sh on MacOS, the OP's stated platform.
I found the answer to this through FreeBSD mailing lists.
The man page where the entry for return was missing was the wrong man page.
Looking at the correct man page, the complete behaviour of the return statement is documented:
The syntax of the return command is
return [exitstatus]
It terminates the current executional scope, returning from the closest nested function or sourced script; if no function or sourced script is being executed, it exits the shell instance. The return command is implemented as a special built-in command.
As suggested by Ian on the mailing lists, for /bin/sh a good approach seems to be to keep a fixed name for your script, expand $0 with:
${0##*/}
and match it against that name. If the expansion produces anything else, it means the script has been sourced. Another possibility is that the user renamed the file, so it's not completely foolproof, but it should still get my job done.
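A minimal sketch of that idea, borrowing the detect-sourcing.sh name from the example above (it obviously breaks if the file is renamed):
#!/bin/sh
# When the script is executed, $0 is the script's path; when it is sourced,
# $0 is normally the invoking shell's own name (e.g. sh or -sh).
case "${0##*/}" in
  detect-sourcing.sh) echo "We were executed" ;;
  *)                  echo "We were sourced (or the file was renamed)" ;;
esac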
If you plan to use the Bourne shell (/bin/sh) only, testing $0 works nicely.
$ cat t
#!/bin/sh
if [ "$0" = "sh" ]; then
  echo "sourced"
else
  echo "executed"
fi
$ . t
sourced
$ . ./t
sourced
$ ./t
executed
$ sh t
executed
$ sh ./t
executed
If you want to call or source the script from other shells, test $0 against a list of shell names.
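A rough sketch of that idea (the list of names here is only an example and would need to match the shells you care about; login shells typically prepend a dash to $0):
case "${0##*/}" in
  sh|-sh|bash|-bash|ksh|-ksh|dash|-dash) echo "sourced" ;;
  *)                                     echo "executed" ;;
esac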
As @Mihir pointed out, the FreeBSD shell works as described in its manual page, sh(1).
On macOS, /bin/sh is basically bash, although the files /bin/sh and /bin/bash differ slightly.
Note that the command man sh on a Mac brings up the manual page for bash.

How can I invoke both BASH and CSH shells in the same script

In the same script, I want to use some CSH commands and some BASH commands.
Invoking one after the other gives me problems even though I am following the correct syntax for the respective shells. I want to know where the mistake is in my code.
Your suggestions are appreciated!!
I am a beginner to shell scripting, especially CSH, but the code I have was written entirely in CSH. Since I have some familiarity with CSH, I wanted to tweak the existing CSH code by adding BASH commands, which I am more comfortable using. When I tried BASH commands after the CSH ones by adding #!/bin/bash, I got some errors. I want to know if I am missing any options!!
#!/bin/csh
----
----
----
#!/bin/bash
dir2in="/nethome/achandra/NCEI/CCSM4_Historical/Forecasts"
filin2 ="ccsm4_0_cfsrr_Fcst.${ENS}.cam2.h1.${yyear[${iimonth}]}-${mmon[${iimonth}]}-${ssday}-00000.nc"
cp $dirin/$filin /nethome/achandra/NCEI/CCSM4_Historical_Forecasts/
ln -s /nethome/achandra/NCEI/CCSM4_Historical/Forecasts/$filin /nethome/achandra/NCEI/CCSM4_Historical_Forecasts/"${$filin%.nc.cdo}.nc"
#!/bin/csh
I am getting errors such as
"dirin: Undefined variable."
You are asking here for "embedding one language into another", which, as @Bayou already explained, is not supported directly. Maybe you were spoiled by the HTML world, where you can squeeze CSS and JavaScript in between and maybe use some server-side PHP or Ruby stuff too.
The closest to this are HERE-documents. If you write inside your bash script a
csh <<CSH_END
your ...
csh ....
commands ...
go here ...
CSH_END
these commands are executed in a child process driven by csh. It works the other way around (bash inside a csh script) in the same way. Make sure that the terminator symbol (CSH_END in my example) starts in column 1.
Whether this will work for your application, I can't say, because things that ran in the same process in your original script now run in different processes.
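For example, a bash script can capture the output of an embedded csh snippet like this (just a sketch; the variable and the greeting are made up):
#!/bin/bash
# Run a csh snippet in a child process and capture its output in a bash variable.
# The quoted delimiter ('CSH_END') keeps bash from expanding $greeting itself.
result=$(csh -f <<'CSH_END'
set greeting = "hello from csh"
echo $greeting
CSH_END
)
echo "bash received: $result"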
You can't mix them up like you're suggesting. It's like asking "can I use PHP code in a Python script?". However, most shells have an option to run commands (-c), just as csh does. For using Bash within an sh script:
#! /bin/sh
CONDITION=$(/bin/bash -c "[[ 1 > 2 ]] || echo no")
echo $CONDITION
exit 0
Otherwise you could create separate files and execute them.
#! /bin/sh
CONDITION=$(./bash-script.sh)
echo $CONDITION
exit 0
You, of course, should use csh instead of sh. Both of my scripts will output the following text.
$ ./test.sh
no

Bash syntax error executing command in Ruby, but it works in shell

Here is the command (the one I'm using is a slight variation of it, but this produces the same error)
HTTP_STATUS=$(curl -w "%{http_code}" -o >(cat >&3) 'http://example.org')
As for what this does, it's mostly copied from https://superuser.com/a/862395/334171 ... the point is to print the output of an HTTP request to the terminal, but store the status code in a bash variable. This works fine if I run it in a terminal.
However, I get sh: 1: Syntax error: "(" unexpected when I run it from Ruby:
cmd = <<-SH
HTTP_STATUS=$(curl -w "%{http_code}" -o >(cat >&3) 'http://example.org')
SH
system cmd
`#{cmd}`
Both of these fail with the aforementioned error.
I suppose as a workaround I could put it in a shell script and call that from Ruby. But I'm curious why it's not working inline.
bash behaves differently when it is run under the name sh, so /bin/bash and /bin/sh will behave differently even when /bin/sh really is /bin/bash. In particular, when bash is run as sh, it conforms as closely as possible to the POSIX specification, so bash-specific extensions (such as your >(cat >&3)) won't work. Furthermore, backticks and related methods in Ruby always use the system shell (i.e. /bin/sh). In summary: if you're using a shell from within Ruby, you'll almost always end up using a strictly POSIX shell.
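You can see the same difference from a plain terminal, without Ruby involved. On a system where /bin/sh is dash (which the error message in the question suggests), something like this happens, while invoking bash directly works (a sketch; the exact output depends on what /bin/sh points to):
$ bash -c 'cat <(echo hi)'
hi
$ sh -c 'cat <(echo hi)'
sh: 1: Syntax error: "(" unexpected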
You could explicitly invoke /bin/bash and use -c to feed it commands. This will probably involve a nightmare of escaping though.
Better would be to bypass the shell (all of them) by using Open3 from the Ruby standard library. There are various methods in Open3 for capturing and piping the standard output and standard error and you won't have to worry about shell-quoting anything because there won't be a shell involved.
BTW, if you're really trying to set an environment variable through backticks or system, it won't work as environment variables are local to the child process so the parent (your Ruby script or whatever invoked your Ruby script) will never see them.

Why "echo ${ARRAY[0]}" behavours different between run script directory vs source it?

I have a bash script (array_test.sh) as below:
ARRAY=()
v="FOO"
ARRAY+=(${v})
v="BAR"
ARRAY+=(${v})
echo ${ARRAY[@]}
echo ${#ARRAY[@]}
echo ${ARRAY[0]}
When I run the script directly (./array_test.sh), I get the following result:
FOO BAR
2
FOO
But when I source it (source ./array_test.sh), the last FOO is missing:
FOO BAR
2
Is that a bug or something wrong in my tiny script?
In zsh, and perhaps some other shells, arrays are indexed from 1 rather than from 0.
So the problem is most likely that your command-line shell is not Bash. When you're running your script as an executable in its own process, it's running in Bash (or a shell that behaves as Bash does); when you're sourcing it inside your command-line shell, it's running in zsh (or a shell that behaves as zsh does).
(Hat-tip to Barmar's comment above, now deleted, that set me on this line of thought.)
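A quick way to confirm this on a given machine is to run the same assignment in each shell explicitly (a sketch; the output shown is what stock bash and zsh produce with default options):
$ bash -c 'a=(FOO BAR); echo "index 0: ${a[0]}"'
index 0: FOO
$ zsh -c 'a=(FOO BAR); echo "index 0: ${a[0]}"; echo "index 1: ${a[1]}"'
index 0:
index 1: FOO
$ echo $0    # shows what your interactive (sourcing) shell actually is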

getting last executed command from script

I'm trying to get the last command executed on the command line from within a script, so it can be saved for later reference:
Example:
# echo "Hello World!!!"
> Hello World!!!
# my_script.sh
> echo "Hello World!!!"
and the content of the script would be:
#!/usr/bin/ksh
fc -nl -1 | sed -n 1p
As you can see, I'm using ksh here; fc is a built-in command which, if I understand correctly, should be implemented by any POSIX-compatible shell. [I understand that this feature is interactive and that calling fc again later will give a different result, but that is not the concern here.]
The above works only if my_script.sh is called from a shell that is also ksh; likewise, if I call it from bash and change the first line of the script to #!/bin/bash, it works too. It does not work when the shells are different.
I would like to know if there is a good way to achieve the above without being constrained by the shell the script is called from. I understand that fc is a built-in command and works per shell, so my approach is probably not well suited to what I want to achieve. Any better suggestions?
I actually attempted this, but it cannot be done between different shells consistently.
While fc -l is the POSIX standard command for showing shell history, implementation details may differ.
At least bash and ksh93 both will report the last command with
fc -n -l -1 -1
However, POSIX does not guarantee that shell history will be carried over to a new instance of the shell, as this requires the presence of a $HISTFILE. If none is
present, the shell may default to $HOME/.sh_history.
However, this history file or Command History List is not portable between different shells.
The POSIX Shell description of the
Command History List says:
When the sh utility is being used interactively, it shall maintain a list of commands
previously entered from the terminal in the file named by the HISTFILE environment
variable. The type, size, and internal format of this file are unspecified.
Emphasis mine
What this means is that your script needs to know which shell wrote that history.
I tried to use $SHELL -c 'fc -nl -1 -1', but this did not appear to work when $SHELL refers to bash. Calling ksh -c ... from bash actually worked.
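If both the interactive shell and the script really are ksh, a sketch of combining those two observations (an explicit HISTFILE plus the ksh -c call) could look like this; the path is just the common default mentioned above:
# in the interactive ksh (e.g. in ~/.profile), make the history file explicit
export HISTFILE="$HOME/.sh_history"
# a child ksh started later by a script can then list the last command from that history
ksh -c 'fc -n -l -1 -1'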
The only way I could get this to work is by creating a function.
last_command() { (fc -n -l -1 -1); }
In both ksh and bash, this will give the expected result. Variations of this function can be used to write the result elsewhere. However, it will break whenever it's called from a process other than the current one.
The best you can do is to create these functions and source them into your
interactive shell.
fc is designed to be used interactively. I tested your example on Cygwin/bash and the result was different; even with bash everywhere, the fc command didn't work in my case.
I think fc displays the last command of the current shell (and here I don't mean the shell interpreter, but the shell as a "process box"). So the question is rather why it works for you.
I don't think there is a clean way to achieve what you want because (maybe I'm missing something) you want two different processes (bash and your magic command, my_script.sh), and by default the OS ensures isolation between them.
You can rely on what you observe (not portable, depends on the shell interpreter, etc.).
You cannot rely on the Bash history file because history is kept in memory (the file is updated only on exit).
You can use an alias or a function (edited: @Charles Duffy is right). In this case you won't be able to use your "magic command" from another terminal, but for interactive use it does the job.
Edited:
Or you can provide two commands: one to save and another to look up. In this case you manage your own history, but you have to save each command explicitly, which is painful...
So I looked for a hook, and I found this other thread: https://superuser.com/questions/175799/does-bash-have-a-hook-that-is-run-before-executing-a-command
# At the beginning of the Shell (.bashrc for example)
save(){ history 1 >>"$HOME"/myHistory ; }
trap 'save' DEBUG
# An example of use
rm -f "$HOME"/myHistory
echo "1 2 3"
cat "$HOME"/myHistory
14 echo "1 2 3"
15 cat "$HOME"/myHistory
But I observe that it slows down the interpreter...
It's a little convoluted, but I was able to use this command to get the most recent command in zsh, bash, ksh, and tcsh on Linux:
history | tail -2 | head -1 | sed -r 's/^[ \t]*[0-9]*[ \t]+([^ \t].*$)/\1/'
Caveats: this uses GNU sed, so you'll need to install that if you're using BSD, OS X, etc.; also, tcsh will display the time of the command before the command itself. Regular csh doesn't seem to have a functioning history command when I tried it.
