For security purposes, how can I prevent a command from being executed in a file that is sourced?
For example:
#!/bin/sh
source file.cfg
Wanted:
get="value"
Unintended:
command
You could use a mechanism similar to Python's if __name__ == "__main__" idiom. Define variables and/or functions, and put executable commands into a conditional block:
#!/bin/bash
# Variables and functions go here
a=1
b=2
function foo() {
echo "bar"
}
# Put executable commands here
if [ "$0" = "$BASH_SOURCE" ] ; then
foo
fi
If you chmod +x the file and run it, or run it through bash file.sh, the executable commands in the conditional block will be executed. If you source the file, only the variables and functions will be imported.
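For example, if the file above is saved as file.sh (the name is just for illustration), a quick check of both behaviors looks like this:
$ bash file.sh
bar
$ source file.sh    # imports a, b and foo, but prints nothing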
Long story short, you can't. We could debate how to prevent some commands from being executed, but if security is the major concern here, source is a no-go. You are looking for a proper configuration facility, whereas source is intended to execute code.
For example, the following code provides a trivial key-value configuration file parsing:
while read -r x; do
declare +x -- "${x}"
done < file.cfg
But this is far from the flexibility source gives you, and it is far from a perfectly secure solution either. It doesn't handle any specific escaping, multi-line variables, comments… and it also doesn't filter the assigned variables, so the config can override your precious variables. The extra +x argument to declare at least ensures that the config file won't modify the environment exported to programs.
If you really want to go this route, you can try to improve this. But if you are really worried about security, you should think twice before using shell script at all. Writing a proper shell script is not trivial, and it is full of pitfalls.
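If you do go this route, a slightly stricter variant could whitelist the keys it accepts, so the config file cannot clobber arbitrary variables. A minimal sketch (the allowed key names here are just an assumption, and it still doesn't handle quoting or multi-line values):
# Split each line on the first '=' and only accept whitelisted keys.
while IFS='=' read -r key value; do
    case "$key" in
        get|name|age) declare +x -- "$key=$value" ;;  # allowed keys (assumed)
        ''|'#'*) ;;                                   # skip blanks and comments
        *) printf 'ignored: %s\n' "$key" >&2 ;;
    esac
done < file.cfg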
Something basic that might work:
name="$(sed -n 1p < source_file | grep -o 'name="[^"]*' | grep -o '[^"]*$')"
lastname="$(sed -n 2p < source_file | grep -o 'lastname="[^"]*' | grep -o '[^"]*$')"
age="$(sed -n 3p < source_file | grep -o 'age="[^"]*' | grep -o '[^"]*$')"
Next, check whether the parameters meet certain standards, for example whether the name matches an entry in a database ($LIST_NAMES) or whether a value stays within a certain number of characters, etc.
if ! grep -Fox "$name" <<<"$LIST_NAMES"; then exit 1; fi
if [ $(wc -c <<<"$age") -gt 3 ]; then exit 1; fi
Then take only the useful lines, to prevent the rest from being executed:
head -n3 < source_file > source_file.tmp
source 'source_file.tmp'
I'm trying to make a script to organize a pair of lists I have and process them with other programs, but I'm a little bit stuck now.
I want to process every line of a text-file list: first create a folder for each line in the list, and then run that line through my different scripts.
My problem is that if the list I give to the script has 3-4 elements, it works great and each element gets its own directory, but if I put in a list with 1000+ lines, the script processes only a few of the elements.
EDIT: the processing involves around 30-35 scripts in different languages: Python, Bash and Golang.
Any suggestions?
cat $STORES+NEW.txt | while read NEWSTORES
do
cd $STORES && mkdir $NEWSTORES && cd $NEWSTORES && mkdir .Files
python3 checkstatus.py -n $NEWSTORES
checkemployes $NEWSTORES -status
storemanagers -s $NEWSTORES -o $NEWSTORES+managers.txt
curl -s https://redacted.com/store?=$NEWSTORES | grep -vE "<|^[\*]*[\.]*$NEWSTORES" | sort -u | awk 'NF' > $NEWSTORES+site.txt
..
..
..
..
..
..
cd ../..
done
I'm not supposed to give an answer yet, but I mistakenly posted what should have been a comment reply as an answer. Anyway, here are a few things I can suggest:
Avoid unnecessary use of cat.
Open your input file through another FD to prevent commands that read stdin inside the loop from eating your input: while IFS= read -ru 3 NEWSTORES; do ...; done 3< "$STORES+NEW.txt" or { while IFS= read -ru "$FD" NEWSTORES; do ...; done; } {FD}< "$STORES+NEW.txt". Also see https://stackoverflow.com/a/28837793/445221.
Not completely related, but don't use a while loop in a pipeline, since it will execute in a subshell: if you later try to alter a variable inside the loop and expect the change to survive outside it, it won't. You can use lastpipe to avoid this, but it's unnecessary most of the time.
Put double quotes around your variable expansions to prevent unwanted word splitting and filename expansion.
Use read's -r option unless you want backslashes to escape characters.
Specify IFS= before read to prevent stripping of leading and trailing spaces.
Using readarray or mapfile makes it more convenient: readarray -t ALL_STORES_DATA < "$STORES+NEW.txt"; for NEWSTORES in "${ALL_STORES_DATA[@]}"; do ...; done
Use lowercase names for variables that you don't use in a global manner, to avoid conflicts with bash's own variables.
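Putting these suggestions together, a corrected version of the loop might look like the sketch below (the helper commands checkstatus.py, checkemployes and storemanagers are taken from the question; adjust to taste):
#!/bin/bash
while IFS= read -ru 3 newstores; do
    [ -z "$newstores" ] && continue
    mkdir -p "$STORES/$newstores/.Files" || continue
    (
        # Run in a subshell so there is no need to cd ../.. afterwards.
        cd "$STORES/$newstores" || exit 1
        python3 checkstatus.py -n "$newstores"
        checkemployes "$newstores" -status
        storemanagers -s "$newstores" -o "$newstores+managers.txt"
    )
done 3< "$STORES+NEW.txt"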
I'm facing some problems to pass some environment parameters to docker run in a relatively generic way.
Our first iteration was to load a .env file into the environment via these lines:
set -o allexport;
. "${PROJECT_DIR}/.env";
set +o allexport;
And then manually typing the --env VARNAME=$VARNAME as options for the docker run command. But this can be quite annoying when you have dozens of variables.
Then we tried to just pass the file with --env-file .env. At first it seems to work, but it doesn't, because it does not play well with quotes around the variable values.
Here is where I started doing crazy/ugly things. The basic idea was to do something like:
set_docker_parameters()
{
grep -v '^$' "${PROJECT_DIR}/.env" | while IFS= read -r LINE; do
printf " -e %s" "${LINE}"
done
}
docker run $(set_docker_parameters) --rm image:label command
Where the parsed lines are like VARIABLE="value", VARIABLE='value', or VARIABLE=value. Blank lines are discarded by the piped grep.
But docker run complains all the time about not being called properly. When I expand the result of set_docker_parameters I get what I expected, and when I copy its result and replace $(set_docker_parameters), then docker run works as expected too, flawless.
Any idea on what I'm doing wrong here?
Thank you very much!
P.S.: I'm trying to make my script 100% POSIX-compatible, so I'll prefer any solution that does not rely on Bash-specific features.
Based on the comments of @jordanm I devised the following solution (the underlying problem was that the unquoted $(set_docker_parameters) expansion gets re-split on whitespace, so values containing spaces break apart into separate arguments):
docker_run_wrapper()
{
    # That's not ideal, but in any case it's not directly related to the question.
    cmd=$1
    set --; # Unset all positional arguments ($# will be emptied)
    # We don't have arrays (we want to be POSIX compatible), so we'll
    # use "$@" as a sort of substitute, appending new values to it.
    # Read the file directly instead of piping grep into the loop: a
    # pipeline would run the loop in a subshell, and the positional
    # parameters set there would be lost.
    while IFS= read -r LINE; do
        case ${LINE} in '') continue ;; esac # skip blank lines (was: grep -v '^$')
        set -- "$@" "--env" "${LINE}";
    done < "${PROJECT_DIR}/.env"
    # We use "$@" in a clearly non-standard way, just to expand the values
    # coming from the .env file.
    docker run "$@" "image:label" /bin/sh -c "${cmd}";
}
Then again, this is not the code I wrote for my particular use case, but a simplification that shows the basic idea. If you can rely on having Bash, it could be much cleaner, using arrays instead of overloading "$@".
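For reference, here is a sketch of that cleaner Bash-only variant (same .env layout assumed):
docker_run_wrapper()
{
    local cmd=$1
    local env_args=()
    # Collect one --env option per non-empty line of the .env file.
    while IFS= read -r line; do
        [[ -n $line ]] && env_args+=(--env "$line")
    done < "${PROJECT_DIR}/.env"
    docker run "${env_args[@]}" "image:label" /bin/sh -c "${cmd}"
}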
In my bash script there are a lot of variables, and I have to make something to save them to a file.
My question is how to list all the variables declared in my script and get a list like this:
VARIABLE1=abc
VARIABLE2=def
VARIABLE3=ghi
set will output the variables; unfortunately, it will also output the function definitions.
Luckily POSIX mode only outputs the variables:
( set -o posix ; set ) | less
Pipe to less, or redirect the output to wherever you want it.
So to get the variables declared in just the script:
( set -o posix ; set ) >/tmp/variables.before
source script
( set -o posix ; set ) >/tmp/variables.after
diff /tmp/variables.before /tmp/variables.after
rm /tmp/variables.before /tmp/variables.after
(Or at least something based on that :-) )
compgen -v
It lists all variables including local ones.
I learned it from Get list of variables whose name matches a certain pattern, and used it in my script.
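For example, the same before/after diff trick from above works with compgen -v (a sketch; script.sh stands in for your script, and expect a little noise such as the before/after variables themselves):
before=$(compgen -v | sort)
source script.sh
after=$(compgen -v | sort)
# Names created by the script; values can then be printed via ${!name}.
comm -13 <(printf '%s\n' "$before") <(printf '%s\n' "$after")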
for i in _ {a..z} {A..Z}; do eval "echo \${!$i@}" ; done | xargs printf "%s\n"
This should print all shell variable names. You can get a list before and after sourcing your file, just as with set, to diff which variables are new (as explained in the other answers). But keep in mind that such diff filtering can drop variables that you need but that were already present before sourcing your file.
In your case, if you know your variables' names start with "VARIABLE", then you can source your script and do:
for var in ${!VARIABLE@}; do
printf "%s%q\n" "$var=" "${!var}"
done
UPDATE: For a pure BASH solution (no external commands used):
for i in _ {a..z} {A..Z}; do
for var in `eval echo "\\${!$i@}"`; do
echo $var
# you can test if $var matches some criteria and put it in the file or ignore
done
done
Based on some of the above answers, this worked for me:
before=$(set -o posix; set | sort);
source file
comm -13 <(printf %s "$before") <(set -o posix; set | sort | uniq)
If you can post-process (as already mentioned), you might just place a set call at the beginning and at the end of your script (each redirected to a different file) and do a diff on the two files. Realize that this will still contain some noise.
You can also do this programmatically. To limit the output to just your current scope, you would have to implement a wrapper for variable creation. For example:
store() {
    local pattern="(^| )${1}($| )"
    export "${1}=${*:2}"
    # The regex must be unquoted on the right of =~ (since bash 3.2 a quoted
    # pattern matches literally), hence the helper variable.
    [[ ${STORED} =~ $pattern ]] || STORED="${STORED} ${1}"
}
store VAR1 abc
store VAR2 bcd
store VAR3 cde
for i in ${STORED}; do
echo "${i}=${!i}"
done
Which yields
VAR1=abc
VAR2=bcd
VAR3=cde
A little late to the party, but here's another suggestion:
#!/bin/bash
set_before=$( set -o posix; set | sed -e '/^_=/d' )
# create/set some variables
VARIABLE1=a
VARIABLE2=b
VARIABLE3=c
set_after=$( set -o posix; unset set_before; set | sed -e '/^_=/d' )
diff <(echo "$set_before") <(echo "$set_after") | sed -e 's/^> //' -e '/^[[:digit:]].*/d'
The diff+sed pipeline outputs all the script-defined variables in the desired format (as specified in the OP's post):
VARIABLE1=a
VARIABLE2=b
VARIABLE3=c
Here's something similar to the @GinkgoFr answer, but without the problems identified by @Tino or @DejayClayton,
and more robust than @DouglasLeeder's clever set -o posix bit:
+ function SOLUTION() { (set +o posix; set) | sed -ne '/^\w\+=/!q; p;'; }
The difference is that this solution STOPS after the first non-variable report, e.g. the first function reported by set.
BTW: The "Tino" problem is solved. Even though POSIX is turned off and functions are reported by set,
the sed ... portion of the solution only allows variable reports through (e.g. VAR=VALUE lines).
In particular, the A2 does not spuriously make it into the output.
+ function a() { echo $'\nA2=B'; }; A0=000; A9=999;
+ SOLUTION | grep '^A[0-9]='
A0=000
A9=999
AND: The "DejayClayton" problem is solved (embedded newlines in variable values do not disrupt the output - each VAR=VALUE get a single output line):
+ A1=$'111\nA2=222'; A0=000; A9=999;
+ SOLUTION | grep '^A[0-9]='
A0=000
A1=$'111\nA2=222'
A9=999
NOTE: The solution provided by @DouglasLeeder suffers from the "DejayClayton" problem (values with embedded newlines).
Below, the A1 is wrong and A2 should not show at all.
$ A1=$'111\nA2=222'; A0=000; A9=999; (set -o posix; set) | grep '^A[0-9]='
A0=000
A1='111
A2=222'
A9=999
FINALLY: I don't think the version of bash matters, but it might. I did my testing / developing on this one:
$ bash --version
GNU bash, version 4.4.12(1)-release (x86_64-pc-msys)
POST-SCRIPT: Given some of the other responses to the OP, I'm left < 100% sure that set always converts newlines within the value to \n, which this solution relies upon to avoid the "DejayClayton" problem. Perhaps that's a modern behavior? Or a compile-time variation? Or a set -o or shopt option setting? If you know of such variations, please add a comment...
If you're only concerned with printing a list of variables with static values (i.e. expansion doesn't work in this case) then another option would be to add start and end markers to your file that tell you where your block of static variable definitions is, e.g.
#!/bin/bash
# some code
# region variables
VAR1=FOO
VAR2=BAR
# endregion
# more code
Then you can just print that part of the file.
Here's something I whipped up for that:
function show_configuration() {
local START_LINE=$(( $(< "$0" grep -m 1 -n "region variables" | cut -d: -f1) + 1 ))
local END_LINE=$(( $(< "$0" grep -m 1 -n "endregion" | cut -d: -f1) - 1 ))
< "$0" awk "${START_LINE} <= NR && NR <= ${END_LINE}"
}
First, note that the block of variables resides in the same file this function is in, so I can use $0 to access the contents of the file.
I use "region" markers to separate different regions of code. So I simply grep for the "variable" region marker (first match: grep -m 1) and let grep prefix the line number (grep -n). Then I have to cut the line number from the match output (splitting on :). Lastly, add or subtract 1 because I don't want the markers to be part of the output.
Now, to print that range of the file I use awk with line number conditions.
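For the sample script above, the output looks like this, assuming the script calls show_configuration and the marker block appears before the function definition (otherwise grep -m 1 would find the function's own source first):
$ ./myscript.sh
VAR1=FOO
VAR2=BAR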
Try using a script (let's call it "ls_vars"):
#!/bin/bash
set -a
env > /tmp/a
source "$1"
env > /tmp/b
diff /tmp/{a,b} | sed -ne 's/^> //p'
chmod +x it, and:
ls_vars your-script.sh > vars.files.save
From a security perspective, either @akostadinov's answer or @JuvenXu's answer is preferable to relying upon the unstructured output of the set command, due to the following potential security flaw:
#!/bin/bash
function doLogic()
{
local COMMAND="${1}"
if ( set -o posix; set | grep -q '^PS1=' )
then
echo 'Script is interactive'
else
echo 'Script is NOT interactive'
fi
}
doLogic 'hello' # Script is NOT interactive
doLogic $'\nPS1=' # Script is interactive
The above function doLogic uses set to check for the presence of variable PS1 to determine if the script is interactive or not (never mind if this is the best way to accomplish that goal; this is just an example.)
However, the output of set is unstructured, which means that any variable that contains a newline can totally contaminate the results.
This, of course, is a potential security risk. Instead, use either Bash's support for indirect variable name expansion, or compgen -v.
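As a quick illustration, either structured approach avoids the contamination (a sketch, reusing the PS1 check from the example above):
# Option 1: parameter expansion; succeeds only if PS1 is really set.
if [ -n "${PS1+x}" ]; then echo 'Script is interactive'; fi
# Option 2: compgen -v emits one variable NAME per line, so a newline
# smuggled into a VALUE cannot forge a match.
if compgen -v | grep -qx 'PS1'; then echo 'Script is interactive'; fi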
Try this: set | egrep "^\w+=" (with or without the | less piping)
The first proposed solution, ( set -o posix ; set ) | less, works but has a drawback: it transmits control codes to the terminal, so they are not displayed properly. For example, if there is (as is likely) an IFS=$' \t\n' variable, we see:
IFS='
'
…instead.
My egrep solution displays this one (and possibly other similar ones) properly.
I probably stole the answer a while ago ... anyway, here it is, slightly different, as a function:
##
# usage source bin/nps-bash-util-funcs
# doEchoVars
doEchoVars(){
# if the tmp dir does not exist
test -z "${tmp_dir}" && \
export tmp_dir="$(cd "$(dirname $0)/../../.."; pwd)""/dat/log/.tmp.$$" && \
mkdir -p "$tmp_dir" && \
( set -o posix ; set )| sort >"$tmp_dir/.vars.before"
( set -o posix ; set ) | sort >"$tmp_dir/.vars.after"
cmd="$(comm -3 $tmp_dir/.vars.before $tmp_dir/.vars.after | perl -ne 's#\s+##g;print "\n $_ "' )"
echo -e "$cmd"
}
The printenv command:
printenv prints all environment variables along with their values. Note that it only lists exported variables; unexported shell variables do not appear.
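For example (the variable names here are just for illustration), only the exported one shows up:
$ exported_var=1; export exported_var
$ shell_only_var=2
$ printenv | grep -E '^(exported_var|shell_only_var)='
exported_var=1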
Good Luck...
A simple way to do this is to dump the shell's variables at the top and at the bottom of your script, then use diff to extract only the variables your script set:
# Add this line at the top of your script:
set > /tmp/old_vars.log
# Add this line at the end of your script:
set > /tmp/new_vars.log
# Alternatively, you can remove unwanted variables with grep (e.g., passwords):
set | grep -v "PASSWORD1=\|PASSWORD2=\|PASSWORD3=" > /tmp/new_vars.log
# Now you can compare the two files to extract the variables of your script:
diff /tmp/old_vars.log /tmp/new_vars.log | grep "^>" > /tmp/script_vars.log
You can now retrieve variables of your script in /tmp/script_vars.log.
Or at least something based on that!
TL;DR
With zsh's typeset¹ builtin: typeset -m <GLOBPATH>
$ VARIABLE1=abc
$ VARIABLE2=def
$ VARIABLE3=ghi
$ noglob typeset -m VARIABLE*
VARIABLE1=abc
VARIABLE2=def
VARIABLE3=ghi
¹ documentation for typeset can be found in man zshbuiltins, or man zshall.
I've been handed a project that consists of several dozen (probably over 100, I haven't counted) bash scripts. Most of the scripts make at least one call to another one of the scripts. I'd like to get the equivalent of a call graph where the nodes are the scripts instead of functions.
Is there any existing software to do this?
If not, does anybody have clever ideas for how to do this?
The best plan I could come up with was to enumerate the scripts and check whether the basenames are unique (they span multiple directories). If there are duplicate basenames, then cry, because the script paths are usually held in variables, so you may not be able to disambiguate. If they are unique, then grep for the names in the scripts and use those results to build up a graph. Use some tool (suggestions?) to visualize the graph.
Suggestions?
Wrap the shell itself with your own implementation: log who called your wrapper, then exec the original shell.
Yes, you have to run the scripts in order to identify which scripts are really used. Otherwise you would need a tool with the same knowledge as the shell engine itself, supporting all the variable expansion, PATH lookups, etc.; I have never heard of such a tool.
In order to visualize the calling graph use GraphViz's dot format.
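A minimal sketch of such a wrapper (everything here is an assumption: the real shell would first have to be moved to /bin/bash.real, the log path is arbitrary, and replacing your system shell is risky, so experiment in a container or VM):
#!/bin/sh
# Log the caller (parent process) and the script being started,
# then hand control over to the real shell.
ps -o args= -p "$PPID" >> /tmp/call_graph.log 2>/dev/null
printf ' -> %s\n' "$*" >> /tmp/call_graph.log
exec /bin/bash.real "$@"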
Here's how I wound up doing it (disclaimer: a lot of this is hack-ish, so you may want to clean up if you're going to use it long-term)...
Assumptions:
- Current directory contains all scripts/binaries in question.
- Files for building the graph go in subdir call_graph.
Created the script call_graph/make_tgf.sh:
#!/bin/bash
# Run from dir with scripts and subdir call_graph
# Parameters:
# $1 = sources (default is call_graph/sources.txt)
# $2 = targets (default is call_graph/targets.txt)
SOURCES=$1
if [ "$SOURCES" == "" ]; then SOURCES=call_graph/sources.txt; fi
TARGETS=$2
if [ "$TARGETS" == "" ]; then TARGETS=call_graph/targets.txt; fi
if [ ! -d call_graph ]; then echo "Run from parent dir of call_graph" >&2; exit 1; fi
(
# cat call_graph/targets.txt
for file in `cat $SOURCES `
do
for target in `grep -v -E '^ *#' $file | grep -o -F -w -f $TARGETS | grep -v -w $file | sort | uniq`
do echo $file $target
done
done
)
Then, I ran the following (I wound up doing the scripts-only version):
cat /dev/null | tee call_graph/sources.txt > call_graph/targets.txt
for file in *
do
if [ -d "$file" ]; then continue; fi
echo $file >> call_graph/targets.txt
if file $file | grep text >/dev/null; then echo $file >> call_graph/sources.txt; fi
done
# For scripts only:
bash call_graph/make_tgf.sh call_graph/sources.txt call_graph/sources.txt > call_graph/scripts.tgf
# For scripts + binaries (binaries will be leaf nodes):
bash call_graph/make_tgf.sh > call_graph/scripts_and_bin.tgf
I then opened the resulting tgf file in yEd, and had yEd do the layout (Layout -> Hierarchical). I saved as graphml to separate the manually-editable file from the automatically-generated one.
I found that there were certain nodes that were not helpful to have in the graph, such as utility scripts/binaries that were called all over the place. So, I removed these from the sources/targets files and regenerated as necessary until I liked the node set.
Hope this helps somebody...
Insert a line at the beginning of each shell script, after the #! line, which logs a timestamp, the full pathname of the script, and the argument list.
Over time, you can mine this log to identify likely candidates, i.e. two lines logged very close together have a high probability of the first script calling the second.
This also allows you to focus on the scripts which are still actually in use.
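A sketch of such a logging line (the log path and timestamp format are assumptions):
echo "$(date '+%F %T') $0 $*" >> /tmp/script_calls.log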
You could use an ed script
1a
log blah blah blah
.
wq
and run it like so:
find / -perm +x -exec ed {} \; < edscript
Make sure you test the find command with -print instead of the exec clause. And / is probably not the path that you want to use. If you have to include bin directories then you will probably need to switch to grep in order to identify the pathnames to include, then when you have a file full of the right names, use xargs instead of find to run the script.