Show line numbers as script runs [duplicate] - bash

Is there a way to debug a Bash script?
E.g., something that prints a sort of an execution log, like "calling line 1", "calling line 2", etc.

sh -x script [arg1 ...]
bash -x script [arg1 ...]
These give you a trace of what is being executed. (See also 'Clarification' near the bottom of the answer.)
Sometimes, you need to control the debugging within the script. In that case, as Cheeto reminded me, you can use:
set -x
This turns debugging on. You can then turn it off again with:
set +x
(You can find out the current tracing state by examining $-, the string of current option flags, for x.)
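For example, a minimal sketch of such a check (the case statement is just one idiomatic way to scan $- for the x flag):
case $- in
  *x*) echo "tracing is ON" ;;
  *)   echo "tracing is OFF" ;;
esac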
Also, shells generally provide options '-n' for 'no execution' and '-v' for 'verbose' mode; you can use these in combination to see whether the shell thinks it could execute your script — occasionally useful if you have an unbalanced quote somewhere.
There is contention that the '-x' option in Bash is different from other shells (see the comments). The Bash Manual says:
-x
Print a trace of simple commands, for commands, case commands, select commands, and arithmetic for commands and their arguments or associated word lists after they are expanded and before they are executed. The value of the PS4 variable is expanded and the resultant value is printed before the command and its expanded arguments.
That much does not seem to indicate different behaviour at all. I don't see any other relevant references to '-x' in the manual. It does not describe differences in the startup sequence.
Clarification: On systems such as a typical Linux box, where '/bin/sh' is a symlink to '/bin/bash' (or wherever the Bash executable is found), the two command lines achieve the equivalent effect of running the script with execution trace on. On other systems (for example, Solaris, and some more modern variants of Linux), /bin/sh is not Bash, and the two command lines would give (slightly) different results. Most notably, '/bin/sh' would be confused by constructs in Bash that it does not recognize at all. (On Solaris, /bin/sh is a Bourne shell; on modern Linux, it is sometimes Dash — a smaller, more strictly POSIX-only shell.) When invoked by name like this, the 'shebang' line ('#!/bin/bash' vs '#!/bin/sh') at the start of the file has no effect on how the contents are interpreted.
The Bash manual has a section on Bash POSIX mode which, contrary to a long-standing but erroneous version of this answer (see also the comments below), does describe in extensive detail the difference between 'Bash invoked as sh' and 'Bash invoked as bash'.
When debugging a (Bash) shell script, it will be sensible and sane — necessary even — to use the shell named in the shebang line with the -x option. Otherwise, you may (will?) get different behaviour when debugging from when running the script.
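Since the question asks for line numbers, it is worth adding that the PS4 prompt quoted from the manual above can carry them. A minimal sketch (PS4, BASH_SOURCE, and LINENO are standard Bash variables; the exact format string is only an illustration):
#!/bin/bash
PS4='+ ${BASH_SOURCE}:${LINENO}: '   # single quotes: expanded anew for each traced command
set -x
echo "this line is traced with its file name and line number"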

I've used the following methods to debug my script.
set -e makes the script exit immediately when a command returns a non-zero exit status (with exceptions, such as commands tested in conditionals or in && / || lists). This is useful if your script attempts to handle all error cases and a failure to do so should be trapped. (See the sketch after this list.)
set -x was mentioned above and is certainly the most useful of all the debugging methods.
set -n might also be useful if you want to check your script for syntax errors.
strace is also useful to see what's going on. Especially useful if you haven't written the script yourself.
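As a small illustration of the set -e behaviour described above (false stands in for any failing command):
#!/bin/bash
set -e
echo "before"
false           # non-zero exit status: the script stops here
echo "after"    # never reached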

Jonathan Leffler's answer is valid and useful.
But, I find that the "standard" script debugging methods are inefficient, unintuitive, and hard to use. For those used to sophisticated GUI debuggers that put everything at your fingertips and make the job a breeze for easy problems (and possible for hard problems), these solutions aren't very satisfactory.
I do use a combination of DDD and bashdb. The former executes the latter, and the latter executes your script. This provides a multi-window UI with the ability to step through code in context and view variables, stack, etc., without the constant mental effort to maintain context in your head or keep re-listing the source.
There is guidance on setting that up in DDD and BASHDB.
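For reference, invocation looks roughly like this, assuming bashdb is installed and on your PATH; the --debugger option is how DDD is told to drive a non-default debugger (check the documentation of your versions):
bashdb myscript.sh arg1 arg2          # console debugger on its own
ddd --debugger bashdb myscript.sh     # DDD front end driving bashdb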

I found the shellcheck utility; maybe some folks will find it interesting.
A little example:
$ cat test.sh
ARRAY=("hello there" world)
for x in $ARRAY; do
echo $x
done
$ shellcheck test.sh
In test.sh line 3:
for x in $ARRAY; do
^-- SC2128: Expanding an array without an index only gives the first element.
Fix the bug. First try...
$ cat test.sh
ARRAY=("hello there" world)
for x in ${ARRAY[@]}; do
echo $x
done
$ shellcheck test.sh
In test.sh line 3:
for x in ${ARRAY[@]}; do
^-- SC2068: Double quote array expansions, otherwise they're like $* and break on spaces.
Let's try again...
$ cat test.sh
ARRAY=("hello there" world)
for x in "${ARRAY[#]}"; do
echo $x
done
$ shellcheck test.sh
No warnings now; the bug is fixed!
It's just a small example.

You can also write "set -x" within the script.

Install Visual Studio Code, then add the Bash Debug extension, and you are ready to debug in visual mode.

Use Eclipse with the plugins Shelled and BashEclipse.
DVKit - Eclipse-based IDE for design verification tasks
BashEclipse
For Shelled: download the ZIP file and import it into Eclipse via the menu Help → Install new software: local archive.
For BashEclipse: copy the JAR files into the dropins directory of Eclipse.
Follow the steps provided in the BashEclipse files.
I wrote a tutorial with many screenshots at Bash: enabling Eclipse for Bash Programming | Plugin Shelled (shell editor)

I built a Bash debugger: Bash Debugging Bash. Just give it a try.

set +x = @ECHO OFF, set -x = @ECHO ON.
You can add the -xv option to the standard shebang as follows:
#!/bin/bash -xv
-x : Display commands and their arguments as they are executed.
-v : Display shell input lines as they are read.
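One caveat worth knowing: options in the shebang only take effect when the script is executed directly; if you start the interpreter by hand, the shebang line is read as a comment and the flags must be repeated:
./myscript.sh          # the shebang's -xv options apply
bash myscript.sh       # shebang ignored; no tracing
bash -xv myscript.sh   # pass the flags explicitly instead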
ltrace is another Linux utility similar to strace. However, ltrace lists the library calls made by an executable or a running process. The name itself comes from library-call tracing.
For example:
ltrace ./executable <parameters>
ltrace -p <PID>

I think you can try this Bash debugger: http://bashdb.sourceforge.net/.

Some tricks for debugging Bash scripts:
Using set -[nvx]
In addition to
set -x
and
set +x
for stopping the dump, I would also like to mention set -v, which produces a smaller, less expanded dump: the input lines as read, rather than the commands after expansion.
bash <<<$'set -x\nfor i in {0..9};do\n\techo $i\n\tdone\nset +x' 2>&1 >/dev/null|wc -l
21
for arg in x v n nx nv nvx;do echo "- opts: $arg"
bash 2> >(wc -l|sed s/^/stderr:/) > >(wc -l|sed s/^/stdout:/) <<eof
set -$arg
for i in {0..9};do
echo $i
done
set +$arg
echo Done.
eof
sleep .02
done
- opts: x
stdout:11
stderr:21
- opts: v
stdout:11
stderr:4
- opts: n
stdout:0
stderr:0
- opts: nx
stdout:0
stderr:0
- opts: nv
stdout:0
stderr:5
- opts: nvx
stdout:0
stderr:5
Dump variables or tracing on the fly
For testing some variables, I sometimes use this:
bash <(sed '18ideclare >&2 -p var1 var2' myscript.sh) args
for adding:
declare >&2 -p var1 var2
at line 18 and running the resulting script (with args), without having to edit the script file.
Of course, this could be used for adding set [+-][nvx]:
bash <(sed '18s/$/\ndeclare -p v1 v2 >\&2/;22s/^/set -x\n/;26s/^/set +x\n/' myscript) args
It will add declare -p v1 v2 >&2 after line 18, set -x before line 22 and set +x before line 26.
A little sample:
bash <(sed '2,3s/$/\ndeclare -p LINENO i v2 >\&2/;5s/^/set -x\n/;7s/^/set +x\n/' <(
seq -f 'echo $@, $((i=%g))' 1 8)) arg1 arg2
arg1 arg2, 1
arg1 arg2, 2
declare -i LINENO="3"
declare -- i="2"
/dev/fd/63: line 3: declare: v2: not found
arg1 arg2, 3
declare -i LINENO="5"
declare -- i="3"
/dev/fd/63: line 5: declare: v2: not found
arg1 arg2, 4
+ echo arg1 arg2, 5
arg1 arg2, 5
+ echo arg1 arg2, 6
arg1 arg2, 6
+ set +x
arg1 arg2, 7
arg1 arg2, 8
Note: Beware of $LINENO: its values are shifted by the on-the-fly modifications!
(To see the resulting script without executing it, simply drop bash <( and ) arg1 arg2.)
Step by step, execution time
Have a look at my answer about how to profile Bash scripts.
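One common trick along those lines is to put a timestamp into PS4, so every traced command is stamped as it executes (a sketch; $EPOCHREALTIME requires Bash 5+, so substitute $(date +%s.%N) on older shells):
PS4='+ $EPOCHREALTIME ${BASH_SOURCE}:${LINENO}: ' bash -x myscript.sh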

There's a good amount of detail on logging for shell scripts via the shell's global variables. We can emulate a similar kind of logging in a shell script: Log tracing mechanism for shell scripts.
The post has details on introducing log levels like INFO, DEBUG, and ERROR, and on tracing details like script entry, script exit, function entry, and function exit.
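A minimal sketch of that kind of leveled logging (the function names and format are illustrative, not taken from the linked post):
#!/bin/bash
LOG_LEVEL=${LOG_LEVEL:-INFO}
log()   { printf '%s [%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$1" "${*:2}" >&2; }
info()  { log INFO "$@"; }
error() { log ERROR "$@"; }
debug() { [ "$LOG_LEVEL" = DEBUG ] && log DEBUG "$@"; }

info "script started"
debug "only printed when LOG_LEVEL=DEBUG"
error "something went wrong"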

Related

Export shell script commands on bash [duplicate]

I'm trying to write a shell script that, when run, will set some environment variables that will stay set in the caller's shell.
setenv FOO foo
in csh/tcsh, or
export FOO=foo
in sh/bash only set it during the script's execution.
I already know that
source myscript
will run the commands of the script rather than launching a new shell, and that can result in setting the "caller's" environment.
But here's the rub:
I want this script to be callable from either bash or csh. In other words, I want users of either shell to be able to run my script and have their shell's environment changed. So 'source' won't work for me, since a user running csh can't source a bash script, and a user running bash can't source a csh script.
Is there any reasonable solution that doesn't involve having to write and maintain TWO versions of the script?
Use the "dot space script" calling syntax. For example, here's how to do it using the full path to a script:
. /path/to/set_env_vars.sh
And here's how to do it if you're in the same directory as the script:
. set_env_vars.sh
These execute the script under the current shell instead of loading another one (which is what would happen if you did ./set_env_vars.sh). Because it runs in the same shell, the environment variables you set will be available when it exits.
This is the same thing as calling source set_env_vars.sh, but it's shorter to type and might work in some places where source doesn't.
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates, any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment; you may just want to bite the bullet and maintain one for each of the two flavors of shell.
You're not going to be able to modify the caller's shell because it's in a different process context. When child processes inherit your shell's variables, they're inheriting copies themselves.
One thing you can do is to write a script that emits the correct commands for tcsh or sh based on how it's invoked. If your script is "setit", then do:
ln -s setit setit-sh
and
ln -s setit setit-csh
Now either directly or in an alias, you do this from sh
eval `setit-sh`
or this from csh
eval `setit-csh`
setit uses $0 to determine its output style.
This is reminiscent of how people used to get the TERM environment variable set.
The advantage here is that setit is just written in whichever shell you like as in:
#!/bin/bash
arg0=$0
arg0=${arg0##*/}
for nv in \
    NAME1=VALUE1 \
    NAME2=VALUE2
do
    if [ x$arg0 = xsetit-sh ]; then
        echo 'export '$nv' ;'
    elif [ x$arg0 = xsetit-csh ]; then
        echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
    fi
done
With the symbolic links given above, and the eval of the backquoted expression, this has the desired result.
To simplify invocation for csh, tcsh, or similar shells:
alias dosetit 'eval `setit-csh`'
or for sh, bash, and the like:
alias dosetit='eval `setit-sh`'
One nice thing about this is that you only have to maintain the list in one place.
In theory you could even stick the list in a file and put `cat nvpairfilename` between "in" and "do".
This is pretty much how login shell terminal settings used to be done: a script would output statements to be executed in the login shell. An alias would generally be used to make invocation simple, as in "tset vt100". As mentioned in another answer, there is also similar functionality in the INN UseNet news server.
In my .bash_profile I have:
# No Proxy
function noproxy
{
    /usr/local/sbin/noproxy  # turn off proxy server
    unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}
# Proxy
function setproxy
{
    sh /usr/local/sbin/proxyon  # turn on proxy server
    http_proxy=http://127.0.0.1:8118/
    HTTP_PROXY=$http_proxy
    https_proxy=$http_proxy
    HTTPS_PROXY=$https_proxy
    export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
}
So when I want to disable the proxy, the function runs in the login shell and sets the variables as expected and wanted.
It's "kind of" possible through using gdb and setenv(3), although I have a hard time recommending actually doing this. (Additionally, i.e. the most recent ubuntu won't actually let you do this without telling the kernel to be more permissive about ptrace, and the same may go for other distros as well).
$ cat setfoo
#! /bin/bash
gdb /proc/${PPID}/exe ${PPID} <<END >/dev/null
call setenv("foo", "bar", 0)
END
$ echo $foo
$ ./setfoo
$ echo $foo
bar
This works — it isn't what I'd use, but it 'works'. Let's create a script teredo to set the environment variable TEREDO_WORMS:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL -i
It is interpreted by the Korn shell, exports the environment variable, and then replaces itself with a new interactive shell.
Before running this script, we have SHELL set in the environment to the C shell, and the environment variable TEREDO_WORMS is not set:
% env | grep SHELL
SHELL=/bin/csh
% env | grep TEREDO
%
When the script is run, you are in a new shell, another interactive C shell, but the environment variable is set:
% teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
When you exit from this shell, the original shell takes over:
% exit
% env | grep TEREDO
%
The environment variable is not set in the original shell's environment. If you use exec teredo to run the command, then the original interactive shell is replaced by the Korn shell that sets the environment, and then that in turn is replaced by a new interactive C shell:
% exec teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
If you type exit (or Control-D), then your shell exits, probably logging you out of that window, or taking you back to the previous level of shell from where the experiments started.
The same mechanism works for Bash or Korn shell. You may find that the prompt after the exit commands appears in funny places.
Note the discussion in the comments. This is not a solution I would recommend, but it does achieve the stated purpose of a single script to set the environment that works with all shells (that accept the -i option to make an interactive shell). You could also add "$@" after the option to relay any other arguments, which might then make the shell usable as a general 'set environment and execute command' tool. You might want to omit the -i if there are other arguments, leading to:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL "${@-'-i'}"
The "${@-'-i'}" bit means 'if the argument list contains at least one argument, use the original argument list; otherwise, substitute -i for the non-existent arguments'.
You should use modules, see http://modules.sourceforge.net/
EDIT: The modules package has not been updated since 2012, but it still works OK for the basics. All the new features, bells, and whistles happen in Lmod these days (which I like more): https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
Another workaround that I don't see mentioned is to write the variable value to a file.
I ran into a very similar issue where I wanted to be able to re-run the last test I had set (instead of all my tests). My first plan was to write one command for setting the environment variable TESTCASE, and then have another command that would use this to run the test. Needless to say, I had the exact same issue as you did.
But then I came up with this simple hack:
First command (testset):
#!/bin/bash
if [ $# -eq 1 ]
then
    echo "$1" > ~/.TESTCASE
    echo "TESTCASE has been set to: $1"
else
    echo "Come again?"
fi
Second command (testrun):
#!/bin/bash
TESTCASE=$(cat ~/.TESTCASE)
drush test-run "$TESTCASE"
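Usage then looks like this (the test name is just an example):
testset MyFrontendTest   # writes the name into ~/.TESTCASE
testrun                  # reads it back and runs only that test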
You can instruct the child process to print its environment variables (by calling "env"), then loop over the printed environment variables in the parent process and call "export" on those variables.
The following code is based on Capturing output of find . -print0 into a bash array
If the parent shell is bash, you can use
while IFS= read -r -d $'\0' line; do
export "$line"
done < <(bash -s <<< 'export VARNAME=something; env -0')
echo $VARNAME
If the parent shell is dash, then read does not provide the -d flag, and the code gets more complicated:
TMPDIR=$(mktemp -d)
mkfifo $TMPDIR/fifo
(bash -s << "EOF"
export VARNAME=something
while IFS= read -r -d $'\0' line; do
echo $(printf '%q' "$line")
done < <(env -0)
EOF
) > $TMPDIR/fifo &
while read -r line; do export "$(eval echo $line)"; done < $TMPDIR/fifo
rm -r $TMPDIR
echo $VARNAME
Under OS X bash you can do the following:
Create the bash script file to unset the variable
#!/bin/bash
unset http_proxy
Make the file executable
sudo chmod 744 unsetvar
Create alias
alias unsetvar='source /your/path/to/the/script/unsetvar'
It should be ready to use as long as you have the folder containing your script file appended to the path.
It's not what I would call outstanding, but this also works if you need to call the script from the shell anyway. It's not a good solution, but for a single static environment variable, it works well enough.
1.) Create a script with a condition that exits either 0 (successful) or 1 (not successful):
if [[ $foo == "True" ]]; then
    exit 0
else
    exit 1
fi
2.) Create an alias (named setvar here for illustration) that is dependent on the exit code:
alias setvar='myscript.sh && export MyVariable'
You call the alias, which calls the script, which evaluates the condition, which is required to exit zero via the '&&' in order to set the environment variable in the parent shell.
This is flotsam, but it can be useful in a pinch.
You can invoke another Bash with a different bash_profile.
Also, you can create a special bash_profile for use in a multi-bash_profile environment.
Remember that you can use functions inside of bash_profile, and that those functions will be available globally.
For example, "function user { export USER_NAME=$1; }" can set a variable at runtime, for example: user olegchir && env | grep olegchir
Another option is to use "Environment Modules" (http://modules.sourceforge.net/). This unfortunately introduces a third language into the mix: you define the environment in Tcl, but there are a few handy commands for typical modifications (prepend vs. append vs. set). You will also need to have Environment Modules installed. You can then use module load *XXX* to name the environment you want. The module command is basically a fancy alias for the eval mechanism described above by Thomas Kammeyer. The main advantage here is that you can maintain the environment in one language and rely on "Environment Modules" to translate it to sh, ksh, bash, csh, tcsh, zsh, Python (?!?!!), etc.
I created a solution using pipes, eval and signal.
parent() {
    if [ -z "$G_EVAL_FD" ]; then
        die 1 "Run parent_setup in the parent process first"
    fi
    if [ $(ppid) = "$$" ]; then
        "$@"
    else
        kill -SIGUSR1 $$
        echo "$@" >&$G_EVAL_FD
    fi
}
parent_setup() {
    G_EVAL_FD=99
    tempfile=$(mktemp -u)
    mkfifo "$tempfile"
    eval "exec $G_EVAL_FD<>'$tempfile'"
    rm -f "$tempfile"
    trap "read CMD <&$G_EVAL_FD; eval \"\$CMD\"" USR1
}
parent_setup # on parent shell context
( A=1 ); echo $A # prints nothing
( parent A=1 ); echo $A # prints 1
It might work with any command.
I don't see any answer documenting how to work around this problem with cooperating processes. A common pattern with things like ssh-agent is to have the child process print an expression which the parent can eval.
bash$ eval $(ssh-agent)
For example, ssh-agent has options to select Csh or Bourne-compatible output syntax.
bash$ ssh-agent
SSH2_AUTH_SOCK=/tmp/ssh-era/ssh2-10690-agent; export SSH2_AUTH_SOCK;
SSH2_AGENT_PID=10691; export SSH2_AGENT_PID;
echo Agent pid 10691;
(This causes the agent to start running, but doesn't allow you to actually use it, unless you now copy-paste this output to your shell prompt.) Compare:
bash$ ssh-agent -c
setenv SSH2_AUTH_SOCK /tmp/ssh-era/ssh2-10751-agent;
setenv SSH2_AGENT_PID 10752;
echo Agent pid 10752;
(As you can see, csh and tcsh use setenv to set variables.)
Your own program can do this, too.
bash$ foo=$(makefoo)
Your makefoo script would simply calculate and print the value, and let the caller do whatever they want with it -- assigning it to a variable is a common use case, but probably not something you want to hard-code into the tool which produces the value.
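For illustration, a hypothetical makefoo could be as small as this (the computed value is a placeholder):
#!/bin/sh
# makefoo: compute a value and print it on stdout;
# the caller decides whether to store it in a variable, export it, etc.
printf '%s\n' "foo-$(hostname)"
and the caller stays in charge: foo=$(makefoo).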
Technically, that is correct -- only 'eval' doesn't fork another shell. However, from the point of view of the application you're trying to run in the modified environment, the difference is nil: the child inherits the environment of its parent, so the (modified) environment is conveyed to all descending processes.
Ipso facto, the changed environment variable 'sticks' -- as long as you are running under the parent program/shell.
If it is absolutely necessary for the environment variable to remain after the parent (Perl or shell) has exited, it is necessary for the parent shell to do the heavy lifting. One method I've seen in the documentation is for the current script to spawn an executable file with the necessary 'export' language, and then trick the parent shell into executing it — always being cognizant of the fact that you need to preface the command with 'source' if you're trying to leave a non-volatile version of the modified environment behind. A kludge at best.
The second method is to modify the script that initiates the shell environment (.bashrc or whatever) to contain the modified parameter. This can be dangerous -- if you hose up the initialization script it may make your shell unavailable the next time it tries to launch. There are plenty of tools for modifying the current shell; by affixing the necessary tweaks to the 'launcher' you effectively push those changes forward as well.
Generally not a good idea; if you only need the environment changes for a particular application suite, you'll have to go back and return the shell launch script to its pristine state (using vi or whatever) afterwards.
In short, there are no good (and easy) methods. Presumably this was made difficult to ensure the security of the system was not irrevocably compromised.
The short answer is no, you cannot alter the environment of the parent process, but it seems like what you want is an environment with custom environment variables and the shell that the user has chosen.
So why not simply something like
#!/usr/bin/env bash
FOO=foo $SHELL
Then when you are done with the environment, just exit.
You could always use aliases
alias your_env='source ~/scripts/your_env.sh'
I did this many years ago. If I remember correctly, I included an alias in each of .bashrc and .cshrc, with parameters, aliasing the respective forms of setting the environment to a common form.
Then the script that you source in either of the two shells has a command in that last form, which is suitably aliased in each shell.
If I find the concrete aliases, I will post them.
Other than writing conditionals depending on what $SHELL/$TERM is set to, no. What's wrong with using Perl? It's pretty ubiquitous (I can't think of a single UNIX variant that doesn't have it), and it'll spare you the trouble.

Use a variable on a script command line when its value isn't set until after the script starts

How do I correctly pass a variable reference to the script and have it substitute the variable that is already defined there?
My script test.sh:
#!/bin/bash
TARGETARCH=amd64
echo $1
When I enter:
bash test.sh https://example/$TARGETARCH
I want to see
https://example/amd64
but I actually see
https://example/
What am I doing wrong?
The first problem with the original approach is that the $TARGETARCH is removed by your calling shell before your script is ever invoked. To prevent that, you need to use quotes:
./yourscript 'https://example.com/$TARGETARCH'
The second problem is that parameter expansions only happen in code, not in data. This is, from a security perspective, a Very Good Thing -- if data were silently treated as code it would be impossible to write secure scripts handling untrusted data -- but it does mean you need to do some more work. The easy thing, in this case, is to export your variable and use GNU envsubst, as long as your operating system provides it:
#!/bin/bash
export TARGETARCH=amd64
substitutedValue=$(envsubst <<<"$1")
echo "Original value was: $1"
echo "Substituted value is: $substitutedValue"
See the above running in an online sandbox at https://replit.com/#CharlesDuffy2/EcstaticAfraidComputeranimation#replit.nix
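If envsubst is not available, a pure-Bash alternative for this specific case is the ${var//pattern/replacement} expansion (a sketch assuming only the literal string $TARGETARCH ever needs substituting):
#!/bin/bash
TARGETARCH=amd64
# replace the literal text '$TARGETARCH' in the first argument with the value
substitutedValue=${1//'$TARGETARCH'/"$TARGETARCH"}
echo "Substituted value is: $substitutedValue"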
Note the use of yourscript instead of test.sh here -- using .sh file extensions, especially for bash scripts as opposed to sh scripts, is an antipattern; the essay at https://www.talisman.org/~erlkonig/documents/commandname-extensions-considered-harmful/ has been linked by the #bash IRC channel on this topic for over a decade.
For similar reasons, changing bash yourscript to ./yourscript lets the #!/usr/bin/env bash line select an interpreter, so you aren't repeating the "bash" name in multiple places, leading to the risk of those places getting out of sync with each other.

Why doesn't Bash accept a command with its options stored in a string?

I tried the following code.
command1="echo"
"${command1}" 'case1'
command2="echo -e"
"${command2}" 'case2'
echo -e 'case3'
The outputs are the following:
case1
echo -e: command not found
case3
Case 2 results in an error, but the similar cases 1 and 3 run fine. It seems a command with an option cannot be recognized as a valid command.
I would like to know why it does not work.
Case 1 (Unmodified)
command1="echo"
"${command1}" 'case1'
This is bad practice as an idiom, but there's nothing actively incorrect about it.
Case 2 (Unmodified)
command2="echo -e"
"${command2}" 'case2'
This is looking for a program named something like /usr/bin/echo -e, with the space as part of its name.
Case 2 (Reduced Quotes)
# works in this very specific case, but bad practice
command2="echo -e"
$command2 'case2' # WITHOUT THE QUOTES
...this one works, but only because your command isn't interesting enough (doesn't have quotes, doesn't have backslashes, doesn't have other shell syntax). See BashFAQ #50 for a description of why it isn't an acceptable practice in general.
Case X (eval -- Bad Practice, Oft Advised)
You'll often see this:
eval "$command1 'case1'"
...in this very specific case, where command1 and all arguments are hardcoded, this isn't exceptionally harmful. However, it's extremely harmful with only a small change:
# SECURITY BUGS HERE
eval "$command1 ${thing_to_echo}"
...if thing_to_echo='$(rm -rf $HOME)', you'll have a very bad day.
Best Practices
In general, commands shouldn't be stored in strings. Use a function:
e() { echo -e "$@"; }
e "this works"
...or, if you need to build up your argument list incrementally, an array:
e=( echo -e )
"${e[#]}" "this works"
Aside: On echo -e
Any implementation of echo where -e does anything other than emit the characters -e on output is failing to comply with the relevant POSIX standard, which recommends using printf instead (see the APPLICATION USAGE section).
Consider instead:
# a POSIX-compliant alternative to bash's default echo -e
e() { printf '%b\n' "$*"; }
...this not only gives you compatibility with non-bash shells, but also fixes support for bash in POSIX mode if compiled with --enable-xpg-echo-default or --enable-usg-echo-default, or if shopt -s xpg_echo was set, or if BASHOPTS=xpg_echo was present in the shell's environment at startup time.
If the variable command contains the value echo -e, and the command line given to the shell is:
"$command" 'case2'
then the shell will search for a command called echo -e, with the space included.
That command doesn't exist, and the shell reports the error.
The reason why this happens is depicted in the command-line processing figure from O'Reilly's Learning the bash Shell, 3rd Edition (Cameron Newham, O'Reilly, March 2005, ISBN 0-596-00965-8).
If the variable is quoted (follow the right arrows in that figure), it goes almost directly to execution in step 12, passing only through steps 6, 7, and 8.
Therefore, the command searched for has not been split on spaces.
If the command line given to the shell is unquoted:
$command 'case2'
the variable gets expanded in step 6 (parameter expansion), and then the value of $command, echo -e, gets divided in step 9, "word splitting".
The shell then searches for the command echo with the argument -e.
The echo command sees an argument of -e and processes it as an option.
Trying to store commands inside a string is a very bad idea.
Try this, think very carefully about what you would expect the output to be, and then be surprised on execution:
$ command='echo -e case2; echo "next line"'; $command
To take a look at what happens, execute the command like this:
$ set -vx; $command; set +vx
It works on my machine if I give the command this way:
cmd2="echo -e"
If you are still facing a problem, I would suggest storing the options in another variable; then, in shell scripts where multiple commands use similar option values, you can leverage that variable. So also try something like this:
cmd1="echo"
opt1="-e"
$cmd1 $opt1 Hello

getting last executed command from script

I'm trying to get the last executed command line from within a script, to be saved for later reference:
Example:
# echo "Hello World!!!"
> Hello World!!!
# my_script.sh
> echo "Hello World!!!"
and the content of the script would be :
#!/usr/bin/ksh
fc -nl -1 | sed -n 1p
As you notice, I'm using ksh here, and fc is a built-in command which, if I understand it correctly, should be implemented by any POSIX-compatible shell. (I understand that this feature is interactive and that calling the same fc again will give a different result, but that is not the concern to discuss here.)
The above works well, but only if my_script.sh is called from a shell which is also ksh; if calling from bash and changing the first line of the script to #!/bin/bash, it works too. It doesn't if the shells are different.
I would like to know if there is any good way to achieve the above without being constrained by the shell the script is called from. I understand that fc is a built-in command and works per shell, so most probably my approach is not good at all for what I want to achieve. Any better suggestions?
I actually attempted this, but it cannot be done between different shells consistently.
While
fc -l
is the POSIX standard command for showing $SHELL history, implementation details may differ.
At least bash and ksh93 both will report the last command with
fc -n -l -1 -1
However, POSIX does not guarantee that shell history will be carried over to a new instance of the shell, as this requires the presence of a $HISTFILE. If none is
present, the shell may default to $HOME/.sh_history.
However, this history file or Command History List is not portable between different shells.
The POSIX Shell description of the
Command History List says:
When the sh utility is being used interactively, it shall maintain a list of commands
previously entered from the terminal in the file named by the HISTFILE environment
variable. The type, size, and internal format of this file are unspecified.
Emphasis mine
What this means is that your script needs to know which shell wrote that history.
I tried to use $SHELL -c 'fc -nl -1 -1', but this did not appear to work when $SHELL refers to bash. Calling ksh -c ... from bash actually worked.
The only way I could get this to work is by creating a function.
last_command() { (fc -n -l -1 -1); }
In both ksh and bash, this will give the expected result. Variations of this function can be used to write the result elsewhere. However, it will break whenever it's called from a different process than the current one.
The best you can do is to create these functions and source them into your
interactive shell.
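For example, after sourcing the function into an interactive shell (mirroring the example in the question):
$ echo "Hello World!!!"
Hello World!!!
$ last_command
echo "Hello World!!!"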
fc is designed to be used interactively. I tested your example on Cygwin/bash and the result was different. Even with bash everywhere, the fc command didn't work in my case.
I think fc displays the last command of the current shell (here I don't mean the shell interpreter, but the shell as the "process box"). So the question is more why it works for you.
I don't think there is a clean way to achieve what you want, because (unless I'm missing something) you want two different processes (bash and your magic command [my_script.sh]), and by default the OS ensures isolation between them.
You can rely on what you observe (not portable; depends on the shell interpreter, etc.).
You cannot rely on Bash history, because it's in-memory (the file is updated only on exit).
You can use an alias or a function (edited: @Charles Duffy is right). In this case you won't be able to use your "magic command" from another terminal, but for interactive use it does the job.
Edited:
Or you can provide two commands: one to save and another to look up. In this case you manage your own history, but you have to save each command explicitly, which is painful...
So I looked for a hook, and I found this other thread: https://superuser.com/questions/175799/does-bash-have-a-hook-that-is-run-before-executing-a-command
# At the beginning of the Shell (.bashrc for example)
save(){ history 1 >>"$HOME"/myHistory ; }
trap 'save' DEBUG
# An example of use
rm -f "$HOME"/myHistory
echo "1 2 3"
cat "$HOME"/myHistory
14 echo "1 2 3"
15 cat "$HOME"/myHistory
But I observe that it slows down the interpreter...
A little convoluted, but I was able to use this command to get the most recent command in zsh, bash, ksh, and tcsh on Linux:
history | tail -2 | head -1 | sed -r 's/^[ \t]*[0-9]*[ \t]+([^ \t].*$)/\1/'
Caveats: this uses GNU sed, so you'll need to install that if you're using BSD, OS X, etc. Also, tcsh will display the time of the command before the command itself. Regular csh doesn't seem to have a functioning history command when I tried it.

