How to echo script invocation without variable expansion of its args - bash

From within a bash script, I'd like to echo script invocation without expanding variables passed as arguments.
Echoing script invocation with expanded variables can be achieved with
echo "${BASH_SOURCE[0]} ${*}"
Echoing (the script's, or any other command's) history using
echo "$(tail -n 1 ~/.bash_history)"
shows the script invocation without variable expansion, as desired; however, it only works for commands that have already completed, not for the currently running script.
How to echo script invocation without variable expansion of its arguments from within the running script?

If you can execute your script with bash -c script args, what you want is doable with the BASH_EXECUTION_STRING variable:
$ bash -c 'echo "$BASH_EXECUTION_STRING"'
echo "$BASH_EXECUTION_STRING"
This output is not easy to understand but you can see that the echo command, when executed, prints the unexpanded command. This is because the value of the BASH_EXECUTION_STRING variable is the literal: echo "$BASH_EXECUTION_STRING".
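To make the effect easier to see, here is a slightly fuller illustration (the variable a and the value 42 are just examples): the first echo expands its argument, while the stored command string stays unexpanded:
$ a=42 bash -c 'echo "arg is $a"; echo "invoked as: $BASH_EXECUTION_STRING"'
arg is 42
invoked as: echo "arg is $a"; echo "invoked as: $BASH_EXECUTION_STRING"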
So, if your script is, for instance:
#!/usr/bin/env bash
script="$0"
cmd="$1"
shift
echo "script name: $script"
echo "command line: $cmd"
echo "parameter: $1"
you can execute it as:
$ a=42 bash -c './foo.sh "$BASH_EXECUTION_STRING" "$a"'
script name: ./foo.sh
command line: ./foo.sh "$BASH_EXECUTION_STRING" "$a"
parameter: 42

Related

Can I put a breakpoint in shell script?

Is there a way to suspend the execution of a shell script to inspect the state of the environment or execute arbitrary commands?
alias combined with eval gives you basic functionality of breakpoints in calling context:
#!/bin/bash
shopt -s expand_aliases
alias breakpoint='
  while read -p "Debugging(Ctrl-d to exit)> " debugging_line
  do
    eval "$debugging_line"
  done'

f() {
  local var=1
  breakpoint
  echo $'\n'"After breakpoint, var=$var"
}
f
At the breakpoint, you can input
echo $var
followed by
var=2
then Ctrl-d to exit from breakpoint.
Due to eval in the while loop, use with caution.
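For example, saving the script above as breakpoint-demo.sh (a made-up name) and running it might look like this; Ctrl-d at the empty prompt ends the breakpoint, and the assignment made at the prompt sticks because eval runs in f's scope:
$ ./breakpoint-demo.sh
Debugging(Ctrl-d to exit)> echo $var
1
Debugging(Ctrl-d to exit)> var=2
Debugging(Ctrl-d to exit)>

After breakpoint, var=2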
Bash and other shell scripts do not have built-in debugging capabilities comparable to languages like Java or Python.
We can put an echo "VAR_NAME=$VAR_NAME" statement in the code wherever we want to log a variable's value.
A slightly more flexible solution is to put this code near the beginning of the shell script we want to debug:
function BREAKPOINT() {
  BREAKPOINT_NAME=$1
  echo "Enter breakpoint $BREAKPOINT_NAME"
  set +e                # tolerate a non-zero exit from the sub-shell
  /bin/bash             # interactive sub-shell acting as the breakpoint
  BREAKPOINT_EXIT_CODE=$?
  set -e
  if [[ $BREAKPOINT_EXIT_CODE -eq 0 ]]; then
    echo "Continue after breakpoint $BREAKPOINT_NAME"
  else
    echo "Terminate after breakpoint $BREAKPOINT_NAME"
    exit $BREAKPOINT_EXIT_CODE
  fi
}
export -f BREAKPOINT
and then later, at the line where we want to break, we invoke this function like this:
# some shell script here
BREAKPOINT MyBreakPoint
# and some other shell script here
The BREAKPOINT function logs a message and then launches /bin/bash, where we can run echo or any other shell command we want. When we want to continue running the rest of the script (release the breakpoint), we just execute the exit command. If we need to terminate script execution instead, we run exit 1.
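A session might look roughly like this (deploy.sh and MY_SETTING are made-up names, the sub-shell prompt varies with your bash version, and the spawned /bin/bash only sees variables the script has exported):
$ ./deploy.sh
Enter breakpoint MyBreakPoint
bash-5.1$ echo "$MY_SETTING"
some value
bash-5.1$ exit
Continue after breakpoint MyBreakPoint
...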
There are dedicated tools such as bash-debug.
A poor-man's solution which works for me is the interactive shell.
By adding three lines of code, you can introspect and alter variables as follows:
Let's assume that you have the script test.bash:
A=FOO
export B=BAR
echo $A
echo $B
$ test.bash
FOO
BAR
If you add an interactive shell at line 3, you can look around and inspect variables which have been exported before:
A=FOO
export B=BAR
bash -c "$SHELL"
echo $A
echo $B
$ test.bash
$ echo $A
$ echo $B
BAR
$ exit
FOO
BAR
If you want to see all variables in your interactive shell, you have to add set -a to the preamble of your script, such that all variables and functions are exported:
set -a
A=FOO
export B=BAR
bash -c "$SHELL"
echo $A
echo $B
$ test.bash
$ echo $A
FOO
$ echo $B
BAR
$ exit
FOO
BAR
Note that changes you make to variables inside the interactive shell do not propagate back to the script. The only solution for me is to source an additional file of variable assignments right after the interactive shell:
set -a
A=FOO
export B=BAR
bash -c "$SHELL"
source /tmp/var
echo $A
echo $B
$ test.bash
$ echo "export A=alice" > /tmp/var
$ echo "export B=bob" >> /tmp/var
$ exit
alice
bob

How to do named command line arguments in Bash scripting in a better way?

This is my sample Bash Script example.sh:
#!/bin/bash
# Reading arguments and mapping to respective variables
while [ $# -gt 0 ]; do
  if [[ $1 == *"--"* ]]; then
    v="${1/--/}"
    declare $v
  fi
  shift
done
# Printing command line arguments through the mapped variables
echo ${arg1}
echo ${arg2}
Now if in terminal I run the bash script as follows:
$ bash ./example.sh "--arg1=value1" "--arg2=value2"
I get the correct output like:
value1
value2
Perfect! That means I was able to use the values passed to the arguments --arg1 and --arg2 through the variables ${arg1} and ${arg2} inside the bash script.
I am happy with this solution for now as it serves my purpose, but can anyone suggest a better way to use named command line arguments in bash scripts?
You can just use environment variables:
#!/bin/bash
echo "$arg1"
echo "$arg2"
No parsing needed. From the command line:
$ arg1=foo arg2=bar ./example.sh
foo
bar
There's even a shell option to let you put the assignments anywhere, not just before the command:
$ set -k
$ ./example.sh arg1=hello arg2=world
hello
world
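If you want to keep the --arg1=value1 call style from the question, a common alternative is an explicit loop with a case statement. This is only a sketch (it accepts just the names arg1 and arg2, mirroring the question), but it avoids declare-ing arbitrary names taken from the command line:
#!/bin/bash
# Accept only the named options we list; reject anything else.
arg1= arg2=
for opt in "$@"; do
  case $opt in
    --arg1=*) arg1=${opt#*=} ;;
    --arg2=*) arg2=${opt#*=} ;;
    *) echo "unknown option: $opt" >&2; exit 1 ;;
  esac
done
echo "$arg1"
echo "$arg2"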

Executing a command in a KornShell script

If an argument passed to a KornShell script is itself a command, like:
ksh argument.ksh "wc -l"
how would you execute this command inside the script? Do you store it in a variable, and then execute it? Also, is there a way to retrieve the standard output/standard error from executing the command inside the script?
Place this inside your argument.ksh script:
echo "Running command $1." ## optional message
eval "$1" ## evaluate "$1" as a whole new command
A better or safer way is actually to use "$@":
echo "Running command $*." ## optional message
"$@"
And pass your arguments like this:
ksh argument.ksh wc -l
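If you also need the command's output and exit status inside the script, a hedged sketch of argument.ksh could capture them like this (2>&1 folds standard error into the captured output; drop it to keep the streams separate):
echo "Running command: $*"
output=$("$@" 2>&1)   # run the arguments as one command, capture stdout+stderr
status=$?
echo "exit status: $status"
echo "output was: $output"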

Bash script how to execute a command from a variable

I am trying to alter the Bash function below to execute each command argument. But when I run this script, the first echo works as intended, while the second command, which attempts to append to the scratch.txt file, does not actually execute; it just gets echoed to the terminal.
#!/bin/sh
clear
function each(){
  while read line; do
    for cmd in "$@"; do
      cmd=${cmd//%/$line}
      printf "%s\n" "$cmd"
      $cmd
    done
  done
}
# pipe in the text file and run both commands
# on each line of the file
cat scratch.txt | each 'echo %' 'echo -e "%" >> "scratch.txt"'
exit 0
How do I get the $cmd variable to execute as a command?
I found the original code from answer 2 here:
Running multiple commands with xargs
You want eval. It's evil. Or at least, dangerous. Read all about it at BashFAQ #48.
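Concretely, a minimal sketch of the each function with eval substituted for the bare $cmd (treat the command templates as trusted input, because eval runs them verbatim) might look like:
each() {
  local line cmd
  while IFS= read -r line; do
    for cmd in "$@"; do
      cmd=${cmd//%/$line}   # substitute the current line for every %
      printf '%s\n' "$cmd"
      eval "$cmd"           # redirections and quoting in the template now take effect
    done
  done
}
With eval in place, the >> "scratch.txt" redirection in the second template is actually performed instead of being passed to echo as literal text.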

How can a ksh script determine the full path to itself, when sourced from another?

How can a script determine its path when it is sourced by ksh? i.e.
$ ksh ". foo.sh"
I've seen very nice ways of doing this in BASH posted on stackoverflow and elsewhere but haven't yet found a ksh method.
Using "$0" doesn't work. This simply refers to "ksh".
Update: I've tried using the "history" command but that isn't aware of the history outside the current script.
$ cat k.ksh
#!/bin/ksh
. j.ksh
$ cat j.ksh
#!/bin/ksh
a=$(history | tail -1)
echo $a
$ ./k.ksh
270 ./k.ksh
I would want it to echo "* ./j.ksh".
If it's the AT&T ksh93, this information is stored in the .sh namespace, in the variable .sh.file.
Example
sourced.sh:
(
echo "Sourced: ${.sh.file}"
)
Invocation:
$ ksh -c '. ./sourced.sh'
Result:
Sourced: /var/tmp/sourced.sh
The .sh.file variable is distinct from $0. While $0 can be ksh or /usr/bin/ksh, or the name of the currently running script, .sh.file will always refer to the file for the current scope.
In an interactive shell, this variable won't even exist:
$ echo ${.sh.file:?}
-ksh: .sh.file: parameter not set
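Since the question asks for the full path, a ksh93-only sketch that normalises ${.sh.file} to an absolute path (assuming the file's directory is still reachable) could be:
sourced_file=${.sh.file}
sourced_dir=$(cd -- "$(dirname -- "$sourced_file")" && pwd -P)
echo "Sourced from: $sourced_dir/$(basename -- "$sourced_file")"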
I believe the only portable solution is to override the source command:
source() {
  sourced=$1
  . "$1"
}
And then use source instead of . (the script name will be in $sourced).
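A minimal usage sketch with hypothetical file names:
# caller.ksh
source() { sourced=$1; . "$1"; }   # the wrapper from above
source ./lib.ksh

# lib.ksh
echo "I was sourced as: $sourced"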
The difference, of course, between sourcing and forking is that sourcing executes the invoked script within the calling process. Henk showed an elegant solution for ksh93, but if, like me, you're stuck with ksh88, you need an alternative. I'd rather not change the default ksh method of sourcing by using C-shell syntax, and at work it would be against our coding standards, so creating and using a source() function would be unworkable for me. Since ps, $0 and $_ are all unreliable, here's an alternative:
b.sh:
#!/bin/ksh
export SCRIPT=c.sh
. $SCRIPT
echo "PPID: $$"
echo "FORKING c.sh"
./c.sh
If we set the invoked script in a variable and source it using that variable, that variable will be available to the invoked script, since they are in the same process space.
c.sh:
#!/bin/ksh
arguments=$_
pid=$$
echo "PID:$pid"
command=`ps -o args -p $pid | tail -1`
echo "COMMAND (from ps -o args of the PID): $command"
echo "COMMAND (from c.sh's \$_ ) : $arguments"
echo "\$SCRIPT variable: $SCRIPT"
echo dirname: `dirname $0`
echo ; echo
Output of ./b.sh is as follows:
PID:21665
COMMAND (from ps -o args of the PID): /bin/ksh ./b.sh
COMMAND (from c.sh's $_ ) : SCRIPT=c.sh
$SCRIPT variable: c.sh
dirname: .
PPID: 21665
FORKING c.sh
PID:21669
COMMAND (from ps -o args of the PID): /bin/ksh ./c.sh
COMMAND (from c.sh's $_ ) : ./c.sh
$SCRIPT variable: c.sh
dirname: .
So when we set the SCRIPT variable in the caller script, it is visible in the sourced script because both run in the same process, and in the forked case it is copied into the child's environment along with the parent's other exported variables. In either case, the SCRIPT variable can contain your command and arguments, and will be accessible whether the script is sourced or forked.
You should find it as the last command in the history.
