Is there a way to make bash warn if a variable is undefined but prevent script execution from aborting?
I'm looking for something similar to set -u, except that set -u aborts execution and I would like the script to warn, but continue execution when it finds undefined variables.
I know I can check whether a variable is set, but my scripts have hundreds of variables and I'm looking for a solution that avoids checking variables one by one.
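For reference, the per-variable check I'd like to avoid repeating hundreds of times looks like this (MYVAR is an illustrative name):

```shell
# ${MYVAR+x} expands to "x" only when MYVAR is set (even to the empty
# string), so this warns on truly unset variables without aborting
if [ -z "${MYVAR+x}" ]; then
  echo "warning: MYVAR is unset" >&2
fi
```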
You can do it like this:
$ cat /tmp/q.sh
#!/bin/bash
set -u
echo 1
echo $2
echo 3
$ bash --norc --noprofile --noediting -i /tmp/q.sh
1
bash: $2: unbound variable
3
That is, force interactive mode (-i) when running the script, which prevents bash from aborting on set -u errors. You also have to pass --norc --noprofile --noediting (and maybe more options) to make it behave more like a non-interactive shell.
I can't tell whether this is expected or unexpected behavior, though (it doesn't work when using -c). It works on versions 4.2.46(2)-release (Oracle Linux 7), 4.1.2(1)-release (Oracle Linux 6) and 5.1.16(1)-release (Arch Linux).
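If relying on interactive mode feels too fragile, a hedged alternative is to list the variable names once and check them in a loop with bash indirect expansion. This is a minimal sketch (HOST, PORT and USER_NAME are illustrative names), not a drop-in replacement for set -u, since you still have to supply the list:

```shell
#!/bin/bash
# warn about any unset variable from a list, but keep running;
# ${!v+x} is indirect expansion: it tests the variable NAMED by $v
check_vars() {
  local v
  for v in "$@"; do
    if [ -z "${!v+x}" ]; then
      echo "warning: $v is unset" >&2
    fi
  done
}

HOST=localhost
check_vars HOST PORT USER_NAME   # warns about PORT and USER_NAME only
```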
I have written the following code:
#!/bin/bash
#Simple array
array=(1 2 3 4 5)
echo ${array[*]}
And I am getting error:
array.sh: 3: array.sh: Syntax error: "(" unexpected
From what I gathered from Google, this might be because Ubuntu no longer uses "#!/bin/bash" by default... but I added that line and the error still appears.
I have also tried executing it with bash array.sh, but no luck! It prints a blank line.
My Ubuntu version is: Ubuntu 14.04
Given that script:
#!/bin/bash
#Simple array
array=(1 2 3 4 5)
echo ${array[*]}
and assuming:
It's in a file in your current directory named array.sh;
You've done chmod +x array.sh;
You have a sufficiently new version of bash installed in /bin/bash (you report that you have 4.3.8, which is certainly new enough); and
You execute it correctly
then that should work without any problem.
If you execute the script by typing
./array.sh
the system will pay attention to the #!/bin/bash line and execute the script using /bin/bash.
If you execute it by typing something like:
sh ./array.sh
then it will execute it using /bin/sh. On Ubuntu, /bin/sh is typically a symbolic link to /bin/dash, a Bourne-like shell that doesn't support arrays. That will give you exactly the error message that you report.
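You can reproduce the difference directly (assuming dash is installed, as it is by default on Ubuntu):

```shell
# bash understands the array syntax:
bash -c 'array=(1 2 3 4 5); echo "${array[*]}"'    # prints: 1 2 3 4 5

# dash (what /bin/sh points to on Ubuntu) does not:
if command -v dash >/dev/null 2>&1; then
  dash -c 'array=(1 2 3 4 5); echo "${array[*]}"' 2>&1 || true
  # -> Syntax error: "(" unexpected
fi
```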
The shell used to execute a script is not affected by which shell you're currently using or by which shell is configured as your login shell in /etc/passwd or equivalent (unless you use the source or . command).
In your own answer, you say you fixed the problem by using chsh to change your default login shell to /bin/bash. That by itself should not have any effect. (And /bin/bash is the default login shell on Ubuntu anyway; had you changed it to something else previously?)
What must have happened is that you changed the command you use from sh ./array.sh to ./array.sh without realizing it.
Try running sh ./array.sh and see if you get the same error.
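To confirm what /bin/sh resolves to on your system (on Ubuntu it is usually a symlink to dash):

```shell
# show the symlink itself and its final target
ls -l /bin/sh
readlink -f /bin/sh
```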
Instead of using sh to run the script,
try the following command:
bash ./array.sh
I solved the problem, somewhat miraculously. I found a link describing how to make the error go away using the following commands; after executing them, the issue was resolved.
chsh -s /bin/bash adhikarisubir
grep ^adhikarisubir /etc/passwd
FYI, "adhikarisubir" is my username.
After executing these commands, bash array.sh produced the desired result.
This question already has an answer here:
How can you ask bash for the current options?
(1 answer)
Closed 6 years ago.
Today I found that the variable $- holds a seemingly random string, but I don't know what it stands for.
➜ ~ echo $-
569JNRTXZghikms
And I can't change the value:
➜ ~ -='sss'
zsh: command not found: -=sss
➜ ~
And, in a docker it was:
➜ ~ docker run --rm -ti ubuntu
root@7084255fd54e:/# echo $-
himBH
Or:
➜ ~ docker run --rm -ti alpine ash
/ # echo $-
smi
Is its value related to the system?
$- is the current set of options for the shell.
From the Bash Reference Manual:
Using ‘+’ rather than ‘-’ causes these options to be turned off. The
options can also be used upon invocation of the shell. The current set
of options may be found in $-.
The remaining N arguments are positional parameters and are assigned,
in order, to $1, $2, … $N. The special parameter # is set to N.
The return status is always zero unless an invalid option is supplied.
$- gives you the current options set for the shell.
See the accepted answer from this question for the various other special dollar variables available:
What are the special dollar sign shell variables?
As you are using zsh, run this command:
LESS=+/'PARAMETERS SET BY THE SHELL' man zshparam
to find:
- <S> Flags supplied to the shell on invocation or by the set or setopt commands.
For bash (from Docker), run this command:
LESS=+/'^ *Special Parameters' man bash
To read:
- Expands to the current option flags as specified upon invocation, by the set builtin command, or those set by the shell itself (such as the -i option).
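You can watch a flag appear in $- by toggling an option with set (shown here in bash; -f disables filename globbing):

```shell
#!/bin/bash
before=$-
set -f                 # turn off globbing; this adds 'f' to $-
after=$-
set +f                 # restore globbing
echo "before: $before"
echo "after:  $after"
case $after in *f*) echo "'f' is now in \$-" ;; esac
```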
My script works if I run it interactively on command shell:
$ cat ndmpcopy_cron_parallel_svlinf05.bash
#!/usr/software/bin/bash
ndmpcopy_cron_parallel() {
timestamp=`date +%Y%m%d-%H%M`
LOG=/x/eng/itarchives/ndmpcopylogs/05_$1/ndmpcopy_status
TSLOG=${LOG}_$timestamp
src_filer='svlinf05'
src_account='ndmp'
src_passwd='src_passwd'
dst_svm='svlinfsrc'
dst_account='vsadmin-backup'
dst_passwd='dst_passwd'
host=`hostname`
echo $host
ssh -l root $src_filer "priv set -q diag ; ndmpcopy -sa $src_account:$src_passwd -da $dst_account:$dst_passwd -i $src_filer.eng.netapp.com:/vol/$1 10.56.10.161:/$dst_svm/$1" | tee -a $TSLOG
echo "ndmpcopy Completed: `date` "
}
export -f ndmpcopy_cron_parallel
/u/jsung/bin/parallel -j 0 --wd . --env ndmpcopy_cron_parallel --eta ndmpcopy_cron_parallel ::: local
But, the script failed and complained the exported function, ndmpcopy_cron_parallel, cannot be found:
$ crontab -l
40 0,2,4,6,8,10,12,14,16,18,20,22 * * * /u/jsung/bin/ndmpcopy_cron_parallel_svlinf05.bash
Error:
Subject: Cron <jsung@cycrh6svl18> /u/jsung/bin/ndmpcopy_cron_parallel_svlinf05.bash
Computers / CPU cores / Max jobs to run
1:local / 2 / 1
Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s Left: 1 AVG: 0.00s local:1/0/100%/0.0s **/bin/bash: ndmpcopy_cron_parallel: command not found**
ETA: 0s Left: 0 AVG: 0.00s local:0/1/100%/0.0s
I've been searching around and trying different things for a while. I even tweaked $PATH. Not sure what I missed. Can we embed GNU Parallel in a bash script and put it in crontab at all?
Congratulations. You've been shell-shocked.
You have two versions of bash installed on your system:
/bin/bash v4.1.2 An older unpatched bash
/usr/software/bin/bash v4.2.53 A middle-aged bash, patched against Shellshock
The last number in the bash version triple is the patch-level. The Shellshock bug involved a number of patches, but the relevant ones are 4.1.14, 4.2.50 and 4.3.27. That patch changes the format of exported functions, with the consequence that:
If you export a function from a pre-shellshock bash to a post-shellshock bash, you will see a warning and the exported function will be rejected.
If you export a function from a post-shellshock bash to a pre-shellshock bash, the function export format won't be recognized so it will be silently ignored.
In both cases, the function will not be exported. In other words, you can only export a function between two bash versions if both have been shellshock-patched, or if neither has been.
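A quick way to check whether function export works between two bash binaries is to export a function from one and call it from the other. This sketch uses plain `bash` on both sides (so it will print "ok"); substitute the specific paths, e.g. /bin/bash and /usr/software/bin/bash, to test a mismatched pair:

```shell
#!/bin/bash
# define and export a function, then ask a child bash to run it;
# the child prints "ok" only if it accepted the exported function
f() { echo ok; }
export -f f
bash -c f
# to test a specific binary, replace "bash" above with its full path
```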
Your script clearly indicates which bash to use to run it: the one in /usr/software/bin/bash, which has been patched. The script invokes GNU parallel, and GNU parallel then has to start up one or more subshells in order to run the commands. GNU parallel uses the value of the SHELL environment variable to find the shell it should use.
I suppose that in your user command shell environment, SHELL is set to /usr/software/bin/bash, and that in the environment in which cron executes, it is set to /bin/bash. If that's the case, you'll have no problems exporting the function when you try it from a bash prompt, but in the cron environment you will end up trying to export a function from a post-shellshock bash to a pre-shellshock bash, and as described above the result is that the export is silently ignored. Hence the error.
To get around the problem, you need to ensure that the bash used to run the script is the same as the bash used by GNU Parallel. You could, for example, explicitly set SHELL prior to invoking GNU Parallel:
export SHELL=/usr/software/bin/bash
# ...
/u/jsung/bin/parallel -j 0 --wd . --env ndmpcopy_cron_parallel --eta ndmpcopy_cron_parallel ::: local
Or you could just set it for the parallel command itself:
SHELL=/usr/software/bin/bash /u/jsung/bin/parallel -j 0 --wd . --env ndmpcopy_cron_parallel --eta ndmpcopy_cron_parallel ::: local
As rici says, the problem is most likely due to shellshock. Shellshock did not affect GNU Parallel, but the patches to fix shellshock broke transferring of functions using '--env'.
GNU Parallel is catching up with the shellshock patches in Bash: Bash has used BASH_FUNC_myfunc() as the variable name for exporting functions, but more recent versions use BASH_FUNC_myfunc%%. So GNU Parallel needs to know this when transferring a function.
The '()' version is fixed in 20141022, and the '%%' version is expected to be fixed in 20150122. They should work in any combination. So your remote Bash does not need to be patched the same way as your local Bash: GNU Parallel will "do the right thing", and there is no need to change your own code.
You should feel free to test out the git version in which both are fixed: git clone git://git.savannah.gnu.org/parallel.git
My default shell is bash. I have set some environment variables in my .bashrc file.
I installed a program which uses a .cshrc file. It contains the paths to several cshell scripts.
When I run the following commands in a shell window, it works perfectly:
exec csh
source .cshrc
exec bash
I have tried putting these commands in a bash script, but unfortunately it didn't work.
Is there another way to write a script that gets the same result as running the commands from a shell window?
I hope my question is now clear
Many thanks for any help
WARNING: don't put the following script in your .bashrc; it will reload bash and so reload .bashrc again and again (stoppable with C-c anyway).
Preferably use this script in your kit/CDS startup script (Cadence, presumably).
WARNING 2: if anything in your file2source fails, the whole 'trick' stops.
Call this script : cshWrapper.csh
#! /bin/csh
# to launch using
# exec cshWrapper.csh file2source.sh
source $1
exec $SHELL -i
and launch it using
exec ./cshWrapper.csh file2source.sh
it will: launch csh, source your file, and come back to the same parent bash shell.
Example :
$> ps
PID TTY TIME CMD
7065 pts/0 00:00:02 bash
$>exec ./cshWrapper.csh toggle.csh
file sourced
1
$> echo $$
7065
where in my case I use the file toggle.csh
#! /bin/csh
# source ./toggle.csh
if (! $?TOGGLE) then
setenv TOGGLE 0
endif
if ($?TOGGLE) then
echo 'file sourced'
if ($TOGGLE == 0) then
setenv TOGGLE 1
else
setenv TOGGLE 0
endif
endif
echo $TOGGLE
Hope it helps
New proposal, since I faced another problem with exec.
exec kills whatever remains in the script, unless you force a fork by putting a pipe after it (exec script | cat). In that case, though, environment variables set in the script are not propagated back to the parent script, which is not what we want. The only solution I found is to use three files (for the example, call them main.bash, which calls first.cshrc and second.sh).
#! /bin/bash
#_main.bash_
exec /bin/csh -c "source /path_to_file/cshrc; exec /bin/bash -i -c /path_to_file/second.sh"
# after exec nothing remains (like Attila the Hun)
# the rest of the script is in 'second.sh'
With that approach, I can launch, in a single script call, an old cshrc design kit, still process some bash commands afterwards, and finally launch the main program in bash (say, virtuoso).
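The point about exec replacing the shell, so that nothing after it ever runs, can be seen with a one-liner:

```shell
# only "before" is printed; exec replaces the shell process with
# "true" before the final echo is ever reached
bash -c 'echo before; exec true; echo after'
```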
Is it possible to source a .bashrc file from .cshrc in a non-interactive session?
I'm asking because tcsh is our default shell at work and the .cshrc has to be used to set up the environment initially.
However, I am not really familiar with the tcsh and I have my own set-up in bash, so right now I have the following lines at the end of my .cshrc file:
if ( $?prompt && -x /bin/bash) then
exec /bin/bash
endif
This works fine, loading my environment from .bashrc and giving me a bash prompt for interactive sessions but now I also need the same set-up for non-interactive sessions, e.g. to run a command remotely via SSH with all the correct PATHs etc.
I can't use 'exec' in that case but I can't figure out how to switch to bash and load the bash config files "non-interactively".
All our machines share the same home directory, so any changes to my local *rc files will affect the remote machines as well.
Any ideas welcome - thank you for your help!
After some more research I'm now quite sure that this won't work, but of course feel free to prove me wrong!
To load the environment in bash, I have to switch to a bash shell. Even if that were possible "in the background", i.e. without getting a prompt, it would still break any tcsh commands, which would then be executed under bash instead.
Hmmmm, back to the drawing board...
If $command is set, there are arguments to csh, so it is a remote shell command. This works for me in .cshrc:
if ($?command) then
echo Executing non-interactive command in bash: $command $*
exec /bin/bash -c "${command} $*"
endif
echo Interactive bash shell
exec bash -l
Test:
$ ssh remotehost set | grep BASH
BASH=/bin/bash
...
proves that it ran in Bash.