Print all variables used within a bash script [duplicate]

In my bash script there are a lot of variables, and I have to save them to a file.
My question is how to list all variables declared in my script and get a list like this:
VARIABLE1=abc
VARIABLE2=def
VARIABLE3=ghi

set will output the variables; unfortunately it will also output the function definitions as well.
Luckily POSIX mode only outputs the variables:
( set -o posix ; set ) | less
Pipe to less, or redirect the output to wherever you want it.
So to get the variables declared in just the script:
( set -o posix ; set ) >/tmp/variables.before
source script
( set -o posix ; set ) >/tmp/variables.after
diff /tmp/variables.before /tmp/variables.after
rm /tmp/variables.before /tmp/variables.after
(Or at least something based on that :-) )

compgen -v
It lists all variable names, including local ones.
I learned it from Get list of variables whose name matches a certain pattern, and used it in my script.
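For example, a minimal sketch combining compgen -v with the before/after diff idea (the script name is hypothetical):
compgen -v | sort > /tmp/vars.before
source ./myscript.sh   # hypothetical script whose variables we want
compgen -v | sort > /tmp/vars.after
# print NAME=VALUE for every newly declared variable
comm -13 /tmp/vars.before /tmp/vars.after | while read -r name; do
printf '%s=%q\n' "$name" "${!name}"
done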

for i in _ {a..z} {A..Z}; do eval "echo \${!$i@}" ; done | xargs printf "%s\n"
This should print all shell variable names. You can get a list before and after sourcing your file, just like with set, and diff to see which variables are new (as explained in the other answers). But keep in mind that such filtering with diff can remove variables you need that happened to be set before sourcing your file.
In your case, if you know your variables' names start with "VARIABLE", then you can source your script and do:
for var in ${!VARIABLE@}; do
printf "%s%q\n" "$var=" "${!var}"
done
UPDATE: For a pure Bash solution (no external commands used):
for i in _ {a..z} {A..Z}; do
for var in `eval echo "\\${!$i@}"`; do
echo "$var"
# you can test if $var matches some criteria and put it in the file or ignore it
done
done

Based on some of the above answers, this worked for me:
before=$(set -o posix; set | sort);
source file
comm -13 <(printf %s "$before") <(set -o posix; set | sort | uniq)

If you can post-process (as already mentioned), you might just place a set call at the beginning and end of your script (each redirected to a different file) and diff the two files. Realize that this will still contain some noise.
You can also do this programmatically. To limit the output to just your current scope, you would have to implement a wrapper for variable creation. For example:
store() {
export ${1}="${*:2}"
[[ ${STORED} =~ (^|[[:space:]])${1}($|[[:space:]]) ]] || STORED="${STORED} ${1}"
}
store VAR1 abc
store VAR2 bcd
store VAR3 cde
for i in ${STORED}; do
echo "${i}=${!i}"
done
Which yields
VAR1=abc
VAR2=bcd
VAR3=cde

A little late to the party, but here's another suggestion:
#!/bin/bash
set_before=$( set -o posix; set | sed -e '/^_=/d' )
# create/set some variables
VARIABLE1=a
VARIABLE2=b
VARIABLE3=c
set_after=$( set -o posix; unset set_before; set | sed -e '/^_=/d' )
diff <(echo "$set_before") <(echo "$set_after") | sed -e 's/^> //' -e '/^[[:digit:]].*/d'
The diff+sed pipeline outputs all script-defined variables in the desired format (as specified in the OP's post):
VARIABLE1=a
VARIABLE2=b
VARIABLE3=c

Here's something similar to the @GinkgoFr answer, but without the problems identified by @Tino or @DejayClayton,
and it is more robust than @DouglasLeeder's clever set -o posix bit:
+ function SOLUTION() { (set +o posix; set) | sed -ne '/^\w\+=/!q; p;'; }
The difference is that this solution STOPS after the first non-variable report, e.g. the first function reported by set.
BTW: The "Tino" problem is solved. Even though POSIX mode is turned off and functions are reported by set,
the sed ... portion of the solution only allows variable reports through (e.g. VAR=VALUE lines).
In particular, the A2 does not spuriously make it into the output.
+ function a() { echo $'\nA2=B'; }; A0=000; A9=999;
+ SOLUTION | grep '^A[0-9]='
A0=000
A9=999
AND: The "DejayClayton" problem is solved (embedded newlines in variable values do not disrupt the output - each VAR=VALUE gets a single output line):
+ A1=$'111\nA2=222'; A0=000; A9=999;
+ SOLUTION | grep '^A[0-9]='
A0=000
A1=$'111\nA2=222'
A9=999
NOTE: The solution provided by @DouglasLeeder suffers from the "DejayClayton" problem (values with embedded newlines).
Below, the A1 is wrong and A2 should not show at all.
$ A1=$'111\nA2=222'; A0=000; A9=999; (set -o posix; set) | grep '^A[0-9]='
A0=000
A1='111
A2=222'
A9=999
FINALLY: I don't think the version of bash matters, but it might. I did my testing / developing on this one:
$ bash --version
GNU bash, version 4.4.12(1)-release (x86_64-pc-msys)
POST-SCRIPT: Given some of the other responses to the OP, I'm left < 100% sure that set always converts newlines within the value to \n, which this solution relies upon to avoid the "DejayClayton" problem. Perhaps that's a modern behavior? Or a compile-time variation? Or a set -o or shopt option setting? If you know of such variations, please add a comment...

If you're only concerned with printing a list of variables with static values (i.e. expansion doesn't work in this case) then another option would be to add start and end markers to your file that tell you where your block of static variable definitions is, e.g.
#!/bin/bash
# some code
# region variables
VAR1=FOO
VAR2=BAR
# endregion
# more code
Then you can just print that part of the file.
Here's something I whipped up for that:
function show_configuration() {
local START_LINE=$(( $(< "$0" grep -m 1 -n "region variables" | cut -d: -f1) + 1 ))
local END_LINE=$(( $(< "$0" grep -m 1 -n "endregion" | cut -d: -f1) - 1 ))
< "$0" awk "${START_LINE} <= NR && NR <= ${END_LINE}"
}
First, note that the block of variables resides in the same file this function is in, so I can use $0 to access the contents of the file.
I use "region" markers to separate different regions of code. So I simply grep for the "region variables" marker (first match only: grep -m 1) and have grep prefix the line number (grep -n). Then I cut the line number out of the match output (splitting on :). Lastly, I add or subtract 1 because I don't want the marker lines themselves to be part of the output.
Now, to print that range of the file I use awk with line number conditions.
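If you don't need the line numbers for anything else, a single sed range can sketch the same result (assuming the marker comments shown above):
# print the lines between the markers, then drop the markers themselves
sed -n '/^# region variables/,/^# endregion/p' "$0" | sed '1d;$d'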

Try using a script (let's call it "ls_vars"):
#!/bin/bash
set -a
env > /tmp/a
source "$1"
env > /tmp/b
diff /tmp/{a,b} | sed -ne 's/^> //p'
chmod +x it, and:
ls_vars your-script.sh > vars.files.save

From a security perspective, either @akostadinov's answer or @JuvenXu's answer is preferable to relying upon the unstructured output of the set command, due to the following potential security flaw:
#!/bin/bash
function doLogic()
{
local COMMAND="${1}"
if ( set -o posix; set | grep -q '^PS1=' )
then
echo 'Script is interactive'
else
echo 'Script is NOT interactive'
fi
}
doLogic 'hello' # Script is NOT interactive
doLogic $'\nPS1=' # Script is interactive
The above function doLogic uses set to check for the presence of variable PS1 to determine if the script is interactive or not (never mind if this is the best way to accomplish that goal; this is just an example.)
However, the output of set is unstructured, which means that any variable that contains a newline can totally contaminate the results.
This, of course, is a potential security risk. Instead, use either Bash's support for indirect variable name expansion, or compgen -v.
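For example, a sketch of both structured alternatives (note that [[ -v ]] needs Bash 4.2+):
# indirect test: is a variable named PS1 set at all?
if [[ -v PS1 ]]; then
echo 'Script is interactive'
fi
# compgen -v emits one variable name per line, so a newline
# smuggled into a value cannot forge a match
if compgen -v | grep -qx 'PS1'; then
echo 'Script is interactive'
fi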

Try this: set | egrep "^\w+=" (with or without the | less piping).
The first proposed solution, ( set -o posix ; set ) | less, works but has a drawback: it can transmit control codes to the terminal, so they are not displayed properly. So, for example, if there is (likely) an IFS=$' \t\n' variable, we see:
IFS='
'
…instead.
My egrep solution displays this (and possibly other similar ones) properly.

I probably stole this answer a while ago ... anyway, here it is as a slightly different function:
##
# usage source bin/nps-bash-util-funcs
# doEchoVars
doEchoVars(){
# if the tmp dir does not exist
test -z "${tmp_dir}" && \
export tmp_dir="$(cd "$(dirname "$0")/../../.."; pwd)""/dat/log/.tmp.$$" && \
mkdir -p "$tmp_dir" && \
( set -o posix ; set ) | sort >"$tmp_dir/.vars.before"
( set -o posix ; set ) | sort >"$tmp_dir/.vars.after"
cmd="$(comm -3 $tmp_dir/.vars.before $tmp_dir/.vars.after | perl -ne 's#\s+##g;print "\n $_ "' )"
echo -e "$cmd"
}

The printenv command:
printenv prints all environment variables along with their values. Note that it only shows exported variables, not unexported shell variables.
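A quick sketch of that limitation:
LOCAL_VAR=hello                   # shell variable, not exported
printenv | grep -c '^LOCAL_VAR='  # prints 0 - printenv cannot see it
set | grep -c '^LOCAL_VAR='       # prints 1 - set can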
Good Luck...

A simple way to do this is to snapshot the shell's variables with set before and after your script's definitions run, then use diff to keep only the ones your script set:
# Add this line at the top of your script:
set > /tmp/old_vars.log
# Add this line at the end of your script:
set > /tmp/new_vars.log
# Alternatively, you can remove unwanted variables with grep (e.g., passwords):
set | grep -v "PASSWORD1=\|PASSWORD2=\|PASSWORD3=" > /tmp/new_vars.log
# Now compare the two snapshots to isolate your script's variables:
diff /tmp/old_vars.log /tmp/new_vars.log | grep "^>" > /tmp/script_vars.log
You can now retrieve your script's variables from /tmp/script_vars.log.
Or at least something based on that!

TL;DR
With zsh: typeset¹ -m <GLOBPATH>
$ VARIABLE1=abc
$ VARIABLE2=def
$ VARIABLE3=ghi
$ noglob typeset -m VARIABLE*
VARIABLE1=abc
VARIABLE2=def
VARIABLE3=ghi
¹ documentation for typeset can be found in man zshbuiltins, or man zshall.
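In plain Bash (no zsh), a rough equivalent sketch using prefix expansion:
# expand to every variable name starting with VARIABLE, then print each
for v in "${!VARIABLE@}"; do
printf '%s=%s\n' "$v" "${!v}"
done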

Related

How to process files in a directory in sorted name order by `en_US.utf8`

I'd like to write a bash script that executes the files in a directory in sorted name order. Specifically, name order according to en_US.utf8.
For example, if the dir /mydir has the following files:
a1.txt
a2.txt
b1.txt
c1.txt
I have a function like this:
show_content() {
for f in "$@"; do
echo "$f"
cat "$f"
done
}
and run it by
show_content /mydir/*
How can I improve the code to make sure that the show_content function processes the files in /mydir in name order? How do I specify that I want it to follow en_US.utf8?
Is there a way to ensure that en_US.utf8 is effective just for this bash script? Per @CharlesDuffy's and @oguzismail's answers we can change the setting for LC_COLLATE to en_US.utf8, but could it be done more "locally", without touching the global settings?
The environment variable LC_COLLATE determines which locale's collation rules should be used for sorting.
To make glob expressions within your script use en_US.utf8, you need only add:
export LC_COLLATE=en_US.utf8
before the glob expression is reached in your script. This change is local: it only affects the script itself and the subprocesses it runs.
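If you want to confine the change even further, a sketch using a subshell (so not even the rest of the script sees it):
# only this subshell's glob expansion sees the en_US.utf8 collation
( export LC_COLLATE=en_US.utf8; show_content /mydir/* )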
By contrast, if you needed to re-sort arguments that were passed in from the outside, doing that reliably (using a sort with GNU extensions and a sufficiently modern bash release) might look like:
readarray -d '' -t args < <(
printf '%s\0' "$@" | LC_COLLATE=en_US.utf8 sort -z
)
set -- "${args[@]}"

Bash for loop doesn't execute one sentence more than once [duplicate]

I have a directory with about 2000 files. How can I select a random sample of N files using either a bash script or a list of piped commands?
Here's a script that uses GNU sort's random option:
ls | sort -R | tail -n "$N" | while read -r file; do
# Something involving $file, or you can leave
# off the while to just get the filenames
done
You can use shuf (from the GNU coreutils package) for that. Just feed it a list of file names and ask it to return the first line from a random permutation:
ls dirname | shuf -n 1
# probably faster and more flexible:
find dirname -type f | shuf -n 1
# etc..
Adjust the -n, --head-count=COUNT value to return the number of wanted lines. For example to return 5 random filenames you would use:
find dirname -type f | shuf -n 5
Here are a few possibilities that don't parse the output of ls and that are 100% safe regarding files with spaces and funny symbols in their name. All of them will populate an array randf with a list of random files. This array is easily printed with printf '%s\n' "${randf[@]}" if needed.
This one will possibly output the same file several times, and N needs to be known in advance. Here I chose N=42.
a=( * )
randf=( "${a[RANDOM%${#a[@]}]"{1..42}"}" )
This feature is not very well documented.
If N is not known in advance, but you really liked the previous possibility, you can use eval. But it's evil, and you must really make sure that N doesn't come directly from user input without being thoroughly checked!
N=42
a=( * )
eval randf=( \"\${a[RANDOM%\${#a[@]}]\"\{1..$N\}\"}\" )
I personally dislike eval and hence this answer!
The same using a more straightforward method (a loop):
N=42
a=( * )
randf=()
for((i=0;i<N;++i)); do
randf+=( "${a[RANDOM%${#a[@]}]}" )
done
If you don't want to possibly have several times the same file:
N=42
a=( * )
randf=()
for((i=0;i<N && ${#a[@]};++i)); do
((j=RANDOM%${#a[@]}))
randf+=( "${a[j]}" )
a=( "${a[@]:0:j}" "${a[@]:j+1}" )
done
Note. This is a late answer to an old post, but the accepted answer links to an external page that shows terrible bash practice, and the other answer is not much better as it also parses the output of ls. A comment to the accepted answer points to an excellent answer by Lhunath which obviously shows good practice, but doesn't exactly answer the OP.
ls | shuf -n 10 # ten random files
A simple solution for selecting 5 random files while avoiding parsing ls. It also works with files containing spaces, newlines and other special characters:
shuf -ezn 5 * | xargs -0 -n1 echo
Replace echo with the command you want to execute for your files.
This is an even later response to @gniourf_gniourf's late answer, which I just upvoted because it's by far the best answer, twice over. (Once for avoiding eval and once for safe filename handling.)
But it took me a few minutes to untangle the "not very well documented" feature(s) this answer uses. If your Bash skills are solid enough that you saw immediately how it works, then skip this comment. But I didn't, and having untangled it I think it's worth explaining.
Feature #1 is the shell's own file globbing. a=(*) creates an array, $a, whose members are the files in the current directory. Bash understands all the weirdnesses of filenames, so that list is guaranteed correct, guaranteed escaped, etc. No need to worry about properly parsing textual file names returned by ls.
Feature #2 is Bash parameter expansions for arrays, one nested within another. This starts with ${#ARRAY[@]}, which expands to the length of $ARRAY.
That expansion is then used to subscript the array. The standard way to get a random number between 0 and N-1 is to take a random value modulo N. We want a random index between 0 and the last index of our array. Here's the approach, broken into two lines for clarity's sake:
LENGTH=${#a[@]}
CHOICE=${a[RANDOM%LENGTH]}
But this solution does it in a single line, removing the unnecessary variable assignment.
Feature #3 is Bash brace expansion, although I have to confess I don't entirely understand it. Brace expansion is used, for instance, to generate a list of 25 files named filename1.txt, filename2.txt, etc: echo "filename"{1..25}".txt".
The expression inside the subshell above, "${a[RANDOM%${#a[@]}]"{1..42}"}", uses that trick to produce 42 separate expansions. The brace expansion places a single number in between the ] and the }, which at first I thought was subscripting the array, but if so it would be preceded by a colon. (It would also have returned 42 consecutive items from a random spot in the array, which is not at all the same thing as returning 42 random items from the array.) I think it's just making the shell run the expansion 42 times, thereby returning 42 random items from the array. (But if someone can explain it more fully, I'd love to hear it.)
The reason N has to be hardcoded (to 42) is that brace expansion happens before variable expansion.
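A quick sketch of that ordering constraint:
N=3
echo {1..3}   # brace expansion works: 1 2 3
echo {1..$N}  # prints {1..3} - the braces are NOT expanded, because brace expansion runs before $N is substituted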
Finally, here's Feature #4, if you want to do this recursively for a directory hierarchy:
shopt -s globstar
a=( ** )
This turns on a shell option that causes ** to match recursively. Now your $a array contains every file in the entire hierarchy.
If you have Python installed (works with either Python 2 or Python 3):
To select one file (or line from an arbitrary command), use
ls -1 | python -c "import sys; import random; print(random.choice(sys.stdin.readlines()).rstrip())"
To select N files/lines, use (note N is at the end of the command, replace this by a number)
ls -1 | python -c "import sys; import random; print(''.join(random.sample(sys.stdin.readlines(), int(sys.argv[1]))).rstrip())" N
If you want to copy a sample of those files to another folder:
ls | shuf -n 100 | xargs -I % cp % ../samples/
Make the samples directory first, obviously.
MacOS does not have the sort -R and shuf commands, so I needed a bash-only solution that randomizes all files without duplicates and did not find one here. This solution is similar to gniourf_gniourf's solution #4, but hopefully adds better comments.
The script should be easy to modify to stop after N samples, using a counter with if, or gniourf_gniourf's for loop with N (a sketch of that modification follows the script). $RANDOM can only index about 32768 files, but that should do for most cases.
#!/bin/bash
array=(*) # this is the array of files to shuffle
# echo ${array[@]}
for dummy in "${array[@]}"; do # do loop length(array) times; once for each file
length=${#array[@]}
randomi=$(( $RANDOM % $length )) # select a random index
filename=${array[$randomi]}
echo "Processing: '$filename'" # do something with the file
unset -v "array[$randomi]" # set the element at index $randomi to NULL
array=("${array[@]}") # remove NULL elements introduced by unset; copy array
done
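As mentioned above, a counter can stop the loop after N samples; a sketch of that modification (N is a value you choose):
N=5
count=0
array=(*)
while (( count < N && ${#array[@]} > 0 )); do
length=${#array[@]}
randomi=$(( $RANDOM % $length ))
echo "Processing: '${array[$randomi]}'" # do something with the file
unset -v "array[$randomi]"
array=("${array[@]}") # re-pack the array
count=$(( count + 1 ))
done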
If you have more files in your folder, you can use the piped command below, which I found on Unix Stack Exchange.
find /some/dir/ -type f -print0 | xargs -0 shuf -e -n 8 -z | xargs -0 cp -vt /target/dir/
Here I wanted to copy the files, but if you want to move files or do something else, just change the last command where I have used cp.
This is the only script I can get to play nice with bash on MacOS. I combined and edited snippets from the following two links:
ls command: how can I get a recursive full-path listing, one line per file?
http://www.linuxquestions.org/questions/linux-general-1/is-there-a-bash-command-for-picking-a-random-file-678687/
#!/bin/bash
# Reads a given directory and picks a random file.
# The directory you want to use. You could use "$1" instead if you
# wanted to parametrize it.
DIR="/path/to/"
# DIR="$1"
# Internal Field Separator set to newline, so file names with
# spaces do not break our script.
IFS='
'
if [[ -d "${DIR}" ]]
then
# Runs ls on the given dir, and dumps the output into a matrix,
# it uses the new lines character as a field delimiter, as explained above.
# file_matrix=($(ls -LR "${DIR}"))
file_matrix=($(ls -R $DIR | awk '/:$/&&f{s=$0;f=0} /:$/&&!f{sub(/:$/,"");s=$0;f=1;next} NF&&f{ print s"/"$0 }'))
num_files=${#file_matrix[*]}
# This is the command you want to run on a random file.
# Change "ls -l" by anything you want, it's just an example.
ls -l "${file_matrix[$((RANDOM%num_files))]}"
fi
exit 0
I use this: it uses a temporary file, and it descends into a directory tree until it finds a regular file, which it returns.
# find for a quasi-random file in a directory tree:
# directory to start search from:
ROOT="/";
tmp=/tmp/mytempfile
TARGET="$ROOT"
FILE="";
n=
r=
while [ -e "$TARGET" ]; do
TARGET="$(readlink -f "${TARGET}/$FILE")" ;
if [ -d "$TARGET" ]; then
ls -1 "$TARGET" 2> /dev/null > $tmp || break;
n=$(cat $tmp | wc -l);
if [ "$n" != 0 ]; then
FILE=$(shuf -n 1 $tmp)
# or if you dont have/want to use shuf:
# r=$(($RANDOM % $n)) ;
# FILE=$(tail -n +$(( $r + 1 )) $tmp | head -n 1);
fi ;
else
if [ -f "$TARGET" ] ; then
rm -f $tmp
echo "$TARGET"
break;
else
# is not a regular file, restart:
TARGET="$ROOT"
FILE=""
fi
fi
done;
How about a Perl solution slightly doctored from Mr. Kang over here:
How can I shuffle the lines of a text file on the Unix command line or in a shell script?
$ ls | perl -MList::Util=shuffle -e '@lines = shuffle(<>); print
@lines[0..4]'

Would a "shell function" or "alias" be appropriate for this use

I'm currently trying to create an alias or shell function which I can run to check my battery life, in an attempt to familiarize myself with aliases and bash. I have run into a problem where I'm not receiving any feedback from my command and cannot verify whether it's working, or whether there are steps I have left out that would give me my desired result.
Current .bashrc alias:
alias battery='upower -i $(upower -e | grep -e 'BAT'| grep -E "state|to\ full|percentage")'
Desired use:
b#localhost:~$ battery
Desired result:
state: discharging Time to empty: x.x Hours percentage: xx%
I have read the bash references for something that might help me here. I wasn't able to find anything that I think applies here. Thanks for your consideration!
As @bannji already announced in a comment, he has fixed his command.
Old incorrect alias
'upower -i $(upower -e | grep -e 'BAT'| grep -E "state|to\ full|percentage")'
New correct alias
'upower -i $(upower -e | grep -e "BAT") | grep -E "state|to\ full|percentage"'
Most comments were talking about the interpretation of the quotes. That was not the problem here. The main difference is where the command substitution is closed. In the first case the command substitution is closed after the last grep, so upower -i gets nothing.
In the second command the second grep will filter the output of upower -i.
The difference in quotes is interesting in another example.
addone() {
((sum=$1+1))
echo "${sum}"
}
i=1
alias battery='addone $(addone $i)'
i=4
battery
# other alias
i=1
alias battery2='addone $(addone '$i')'
i=4
battery2
Both battery commands will try to add 2 to the value of $i, but will give different results.
The command battery will add 2 to the current value 4 of $i, resulting in 6.
The command battery2 will add 2 to the value of $i at the moment that the alias was defined, resulting in 3.
Why?
In battery2 the string $i is surrounded by single quotes, but those single quotes are inside other ones. The result is that $i is evaluated when the alias is defined (while i=1), so the alias is defined as
alias battery2='addone $(addone 1)'
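You can confirm what was baked in at definition time by printing the alias; a sketch of the expected output:
$ alias battery2
alias battery2='addone $(addone 1)'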

Prevent a command being executed from a source'd file in Bash

For security purposes, how can I prevent a command being executed in a file that is source'd?
For example:
#!/bin/sh
source file.cfg
Wanted:
get="value"
Unintended:
command
You could use a mechanism like in Python. Define variables and/or functions and put executable commands into a conditional block:
#!/bin/bash
# Variables and functions come here
a=1
b=2
function foo() {
echo "bar"
}
# Put executable commands here
if [ "$0" = "$BASH_SOURCE" ] ; then
foo
fi
If you chmod +x the file and run it, or run it with bash file.sh, the executable commands in the conditional block will be executed. If you source the file, only the variables and functions will be imported.
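A quick sketch of both modes, assuming the snippet is saved as file.sh:
$ bash file.sh     # run directly: $0 equals $BASH_SOURCE, so foo runs
bar
$ source file.sh   # sourced: the guard is false, nothing executes
$ foo              # ...but the function is now available
bar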
Long story short, you can't. We could debate how to try to prevent some commands from being executed, but if security is the major concern here, source is a no-go. You are looking for a proper configuration facility — while source is intended to execute code.
For example, the following code provides a trivial key-value configuration file parsing:
while read -r x; do
declare +x -- "${x}"
done < file.cfg
But this is both far from the flexibility source gives you and far from a perfectly secure solution. It doesn't handle any specific escaping, multi-line variables, comments… and it also doesn't filter the assigned variables, so the config can override your precious variables. The extra +x argument to declare ensures that the config file at least won't modify the environment exported to programs.
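For instance, a slightly hardened sketch that at least filters the names (still not a complete parser):
# only accept NAME=value lines where NAME is a valid identifier;
# comments, blank lines and arbitrary commands are skipped, never executed
while IFS='=' read -r key value; do
[[ $key =~ ^[A-Za-z_][A-Za-z0-9_]*$ ]] || continue
declare +x -- "$key=$value"
done < file.cfg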
If you really want to go this route, you can try to improve this. But if you are really worried about security, you should think twice before using shell script at all. Writing proper shell script is not trivial at all, and it is full of pitfalls.
Something basic that might work:
name="$(sed -n 1p < source_file | grep -o 'name="[^"]*' | grep -o '[^"]*$')"
lastname="$(sed -n 2p < source_file | grep -o 'lastname="[^"]*' | grep -o '[^"]*$')"
age="$(sed -n 3p < source_file | grep -o 'age="[^"]*' | grep -o '[^"]*$')"
Next, check the parameters to see whether they meet certain standards, for example whether the value matches a name in a database ($LIST_NAMES) or whether the string is within a certain length, etc.
if ! grep -Fox "$name" <<<"$LIST_NAMES"; then exit 1; fi
if [ $(wc -c <<<"$age") -gt 3 ]; then exit 1; fi
Then take only the useful lines, excluding the rest:
head -n3 < source_file > source_file.tmp
source 'source_file.tmp'

