Trigger list-all-completions behavior in a bash function

I have the following handy little script that searches for aliases and bash functions. I'd like to extend it to cover bash autocomplete, i.e. find all the binaries on my PATH.
Come to think of it, the standard autocomplete behavior would find the aliases and functions too. But if there is just a PATH binary list, that's good enough for me.
i.e. how do I trigger the list-all-completions behavior from a bash function?
(venv) me#backups$ 👈 I entered a tab here
Display all 3093 possibilities? (y or n)
! libocijdbc12.dylib
./ libons.dylib
2to3 liboramysql12.dylib
2to3- libpng-config
.....
This is the bash script I currently use:
_getfilter(){
    if [ -z "$1" ]; then
        r_getfilter='.+'
    else
        r_getfilter="$1"
    fi
}
findcommands(){
    # hardcoding for now
    #_getfilter $1
    r_getfilter='^p'
    printf "\nfunctions:\n"
    declare -F | cut -c12- | egrep "$r_getfilter"
    printf "\naliases:\n"
    alias | cut -d '=' -f 1 | cut -d ' ' -f 2 | egrep "$r_getfilter"
    printf "\nautocomplete:\n"
    <what do I use here?> | egrep "$r_getfilter"
}
I would prefer not having to trawl the individual directories in PATH for executables I'm permitted to run. If that's the only solution, I would probably not bother.
The current findcommands implementation works on Linux and Mac. My priority is getting this to work on my Mac; Linux too would be a nice-to-have.
Typing complete or compgen, which seem related, produces no output, and man has little to say about either.

You're looking for compgen.
$ compgen -c
ffmpeg
la
ll
man
if
then
else
elif
fi
case
...
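(complete and compgen are shell builtins, which is why running them bare prints nothing useful and why man has no dedicated page for them; help compgen shows the options, and the full description lives in the bash man page.)
compgen -c lists every name bash would complete in command position: PATH binaries, builtins, keywords, aliases, and functions. So the placeholder line in findcommands could be filled in along these lines (a sketch; sort -u just collapses names that appear in more than one PATH directory):
    printf "\nautocomplete:\n"
    compgen -c | sort -u | egrep "$r_getfilter"
If you want the categories separately, compgen can also split them out: compgen -a (aliases), compgen -b (builtins), compgen -k (keywords), compgen -A function (functions).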

Related

Delete duplicate commands of zsh_history keeping last occurrence

I'm trying to write a shell script that deletes duplicate commands from my zsh_history file. Having no real shell script experience, and given my C background, I wrote this monstrosity that seems to work (only on Mac though), but takes a couple of lifetimes to end:
#!/bin/sh
history=./.zsh_history
currentLines=$(grep -c '^' $history)
wordToBeSearched=""
currentWord=""
contrastor=0
searchdex=""
echo "Currently handling a grand total of: $currentLines lines. Please stand by..."
while (( $currentLines - $contrastor > 0 ))
do
    searchdex=1
    wordToBeSearched=$(awk "NR==$currentLines - $contrastor" $history | cut -d ";" -f 2)
    echo "$wordToBeSearched A BUSCAR"
    while (( $currentLines - $contrastor - $searchdex > 0 ))
    do
        currentWord=$(awk "NR==$currentLines - $contrastor - $searchdex" $history | cut -d ";" -f 2)
        echo $currentWord
        if test "$currentWord" == "$wordToBeSearched"
        then
            sed -i .bak "$((currentLines - $contrastor - $searchdex)) d" $history
            currentLines=$(grep -c '^' $history)
            echo "Line deleted. New number of lines: $currentLines"
            let "searchdex--"
        fi
        let "searchdex++"
    done
    let "contrastor++"
done
^THIS IS HORRIBLE CODE NO ONE SHOULD USE^
I'm now looking for a less life-consuming approach using more shell-like conventions, mainly sed at this point. Thing is, zsh_history stores commands in a very specific way:
: 1652789298:0;man sed
Where the command itself is always preceded by ":0;".
I'd like to find a way to delete duplicate commands while keeping the last occurrence of each command intact and in order.
Currently I'm at a point where I have a functional line that will delete strange lines that find their way into the file (newlines and such):
#sed -i '/^:/!d' $history
But that's about it. I'm not really sure how to get the expression to look for into a sed command without falling back into everlasting whiles, or how to delete the duplicates while keeping the last-occurring command.
The zsh option hist_ignore_all_dups should do what you want. Just add setopt hist_ignore_all_dups to your zshrc.
I wanted something similar, but I don't care about preserving the last occurrence as you mentioned. This is just finding duplicates and removing them.
I used this command, then replaced my .zsh_history with the .zhistory that this command outputs.
So from your home folder:
cat -n .zsh_history | sort -t ';' -uk2 | sort -nk1 | cut -f2- > .zhistory
This effectively gives you the file .zhistory containing the deduplicated list; in my case it went from 9000 lines to 3000. You can check it with wc -l .zhistory to count the number of lines it has.
Please double check and make a backup of your zsh history before doing anything with it.
The sort invocation could probably be modified to achieve what you want (keeping the last occurrence), but you will have to investigate that further.
I found the script here, along with some commands to avoid saving duplicates in the future
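For the OP's actual requirement of keeping the last occurrence, a hedged variant of the same pipeline is to reverse the file first, dedupe (sort -u keeps the first hit, which is now the last in the original), then reverse back. A sketch, assuming one command per line and no ';' inside the commands themselves:
tac .zsh_history | cat -n | sort -t ';' -uk2 | sort -nk1 | cut -f2- | tac > .zhistory
(On macOS, which lacks tac, tail -r does the same job.)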
I didn't want to rename the history file.
# dedupe_lines.zsh
if [ $# -eq 0 ]; then
    echo "Error: No file specified" >&2
    exit 1
fi
if [ ! -f "$1" ]; then
    echo "Error: File not found" >&2
    exit 1
fi
sort "$1" | uniq > temp.txt
mv temp.txt "$1"
Add dedupe_lines.zsh to your home directory, then make it executable.
chmod +x dedupe_lines.zsh
Run it.
./dedupe_lines.zsh .zsh_history
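Note that sort | uniq reorders the history. If you'd rather keep lines in their original order, the classic awk idiom awk '!seen[$0]++' drops repeated lines while keeping the first occurrence of each in place; the last two lines of the script could be swapped for this sketch:
awk '!seen[$0]++' "$1" > temp.txt
mv temp.txt "$1"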

Print all variables used within a bash script [duplicate]

In my script in bash there are a lot of variables, and I have to make something to save them to a file.
My question is how to list all variables declared in my script and get a list like this:
VARIABLE1=abc
VARIABLE2=def
VARIABLE3=ghi
set will output the variables; unfortunately it will also output the functions defined as well.
Luckily POSIX mode only outputs the variables:
( set -o posix ; set ) | less
Pipe to less, or redirect to wherever you want the output.
So to get the variables declared in just the script:
( set -o posix ; set ) >/tmp/variables.before
source script
( set -o posix ; set ) >/tmp/variables.after
diff /tmp/variables.before /tmp/variables.after
rm /tmp/variables.before /tmp/variables.after
(Or at least something based on that :-) )
compgen -v
It lists all variables including local ones.
I learned it from Get list of variables whose name matches a certain pattern, and used it in my script.
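compgen -v prints only the names. If you also want values in NAME=VALUE form, a small sketch on top of it, using bash's ${!var} indirect expansion (works for scalar variables; %q quotes values safely for reuse):
for v in $(compgen -v); do
    printf '%s=%q\n' "$v" "${!v}"
done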
for i in _ {a..z} {A..Z}; do eval "echo \${!$i@}" ; done | xargs printf "%s\n"
This should print all shell variable names. You can get a list before and after sourcing your file, just like with set, to diff which variables are new (as explained in the other answers). But keep in mind that such diff filtering can drop variables you need that happened to be set before sourcing your file.
In your case, if you know your variables' names start with "VARIABLE", then you can source your script and do:
for var in ${!VARIABLE@}; do
    printf "%s%q\n" "$var=" "${!var}"
done
UPDATE: For a pure bash solution (no external commands used):
for i in _ {a..z} {A..Z}; do
    for var in $(eval echo "\${!$i@}"); do
        echo $var
        # you can test whether $var matches some criteria and put it in the file or ignore it
    done
done
Based on some of the above answers, this worked for me:
before=$(set -o posix; set | sort);
source the file, then:
comm -13 <(printf %s "$before") <(set -o posix; set | sort | uniq)
If you can post-process (as already mentioned), you might just place a set call at the beginning and end of your script (each redirected to a different file) and diff the two files. Realize that this will still contain some noise.
You can also do this programmatically. To limit the output to just your current scope, you would have to implement a wrapper around variable creation. For example:
store() {
    export ${1}="${*:2}"
    [[ ${STORED} =~ (^|[[:space:]])${1}($|[[:space:]]) ]] || STORED="${STORED} ${1}"
}
store VAR1 abc
store VAR2 bcd
store VAR3 cde
for i in ${STORED}; do
    echo "${i}=${!i}"
done
Which yields
VAR1=abc
VAR2=bcd
VAR3=cde
A little late to the party, but here's another suggestion:
#!/bin/bash
set_before=$( set -o posix; set | sed -e '/^_=/d' )
# create/set some variables
VARIABLE1=a
VARIABLE2=b
VARIABLE3=c
set_after=$( set -o posix; unset set_before; set | sed -e '/^_=/d' )
diff <(echo "$set_before") <(echo "$set_after") | sed -e 's/^> //' -e '/^[[:digit:]].*/d'
The diff+sed pipeline outputs all script-defined variables in the desired format (as specified in the OP's post):
VARIABLE1=a
VARIABLE2=b
VARIABLE3=c
Here's something similar to @GinkgoFr's answer, but without the problems identified by @Tino or @DejayClayton,
and more robust than @DouglasLeeder's clever set -o posix bit:
+ function SOLUTION() { (set +o posix; set) | sed -ne '/^\w\+=/!q; p;'; }
The difference is that this solution STOPS after the first non-variable report, e.g. the first function reported by set.
BTW: the "Tino" problem is solved. Even though POSIX mode is turned off and functions are reported by set,
the sed ... portion of the solution only lets variable reports through (e.g. VAR=VALUE lines).
In particular, A2 does not spuriously make it into the output.
+ function a() { echo $'\nA2=B'; }; A0=000; A9=999;
+ SOLUTION | grep '^A[0-9]='
A0=000
A9=999
AND: The "DejayClayton" problem is solved (embedded newlines in variable values do not disrupt the output - each VAR=VALUE gets a single output line):
+ A1=$'111\nA2=222'; A0=000; A9=999;
+ SOLUTION | grep '^A[0-9]='
A0=000
A1=$'111\nA2=222'
A9=999
NOTE: The solution provided by @DouglasLeeder suffers from the "DejayClayton" problem (values with embedded newlines).
Below, A1 is wrong and A2 should not show at all.
$ A1=$'111\nA2=222'; A0=000; A9=999; (set -o posix; set) | grep '^A[0-9]='
A0=000
A1='111
A2=222'
A9=999
FINALLY: I don't think the version of bash matters, but it might. I did my testing/development on this one:
$ bash --version
GNU bash, version 4.4.12(1)-release (x86_64-pc-msys)
POST-SCRIPT: Given some of the other responses to the OP, I'm less than 100% sure that set always converts newlines within a value to \n, which this solution relies upon to avoid the "DejayClayton" problem. Perhaps that's a modern behavior? Or a compile-time variation? Or a set -o or shopt option setting? If you know of such variations, please add a comment...
If you're only concerned with printing a list of variables with static values (i.e. expansion doesn't work in this case), then another option would be to add start and end markers to your file that tell you where your block of static variable definitions is, e.g.
#!/bin/bash
# some code
# region variables
VAR1=FOO
VAR2=BAR
# endregion
# more code
Then you can just print that part of the file.
Here's something I whipped up for that:
function show_configuration() {
    local START_LINE=$(( $(< "$0" grep -m 1 -n "region variables" | cut -d: -f1) + 1 ))
    local END_LINE=$(( $(< "$0" grep -m 1 -n "endregion" | cut -d: -f1) - 1 ))
    < "$0" awk "${START_LINE} <= NR && NR <= ${END_LINE}"
}
First, note that the block of variables resides in the same file this function is in, so I can use $0 to access the contents of the file.
I use "region" markers to separate different regions of code. So I simply grep for the "variable" region marker (first match: grep -m 1) and let grep prefix the line number (grep -n). Then I have to cut the line number from the match output (splitting on :). Lastly, add or subtract 1 because I don't want the markers to be part of the output.
Now, to print that range of the file I use awk with line number conditions.
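If you don't need the line-number arithmetic, a shorter sketch of the same idea lets sed select the marker-delimited range and then trim the two marker lines itself (as with the grep version, this assumes each marker appears exactly once, before the function definition in the file, since the patterns also occur in the function body):
show_configuration() {
    sed -n '/# region variables/,/# endregion/p' "$0" | sed '1d;$d'
}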
Try using a script (let's call it "ls_vars"):
#!/bin/bash
set -a
env > /tmp/a
source "$1"
env > /tmp/b
diff /tmp/{a,b} | sed -ne 's/^> //p'
chmod +x it, and:
ls_vars your-script.sh > vars.files.save
From a security perspective, either @akostadinov's answer or @JuvenXu's answer is preferable to relying on the unstructured output of the set command, due to the following potential security flaw:
#!/bin/bash
function doLogic()
{
    local COMMAND="${1}"
    if ( set -o posix; set | grep -q '^PS1=' )
    then
        echo 'Script is interactive'
    else
        echo 'Script is NOT interactive'
    fi
}
doLogic 'hello' # Script is NOT interactive
doLogic $'\nPS1=' # Script is interactive
The above function doLogic uses set to check for the presence of variable PS1 to determine if the script is interactive or not (never mind if this is the best way to accomplish that goal; this is just an example.)
However, the output of set is unstructured, which means that any variable that contains a newline can totally contaminate the results.
This, of course, is a potential security risk. Instead, use either Bash's support for indirect variable name expansion, or compgen -v.
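For the example above, a couple of sketches of the same check done with structured tools, both immune to the newline injection:
# bash 4.2+: test whether PS1 is set at all
[[ -v PS1 ]] && echo 'Script is interactive'
# or match exact names in compgen -v output (names cannot contain newlines)
compgen -v | grep -qx 'PS1' && echo 'Script is interactive'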
Try this: set | egrep "^\w+=" (with or without the | less piping)
The first proposed solution, ( set -o posix ; set ) | less, works but has a drawback: it transmits control codes to the terminal, so they are not displayed properly. So, for example, if there is (likely) an IFS=$' \t\n' variable, we see:
IFS='
'
…instead.
My egrep solution displays this (and possibly other similar ones) properly.
I probably stole this answer a while ago... anyway, slightly different as a func:
##
# usage: source bin/nps-bash-util-funcs
# doEchoVars
doEchoVars(){
    # if the tmp dir does not exist
    test -z ${tmp_dir} && \
        export tmp_dir="$(cd "$(dirname $0)/../../.."; pwd)""/dat/log/.tmp.$$" && \
        mkdir -p "$tmp_dir" && \
        ( set -o posix ; set ) | sort > "$tmp_dir/.vars.before"
    ( set -o posix ; set ) | sort > "$tmp_dir/.vars.after"
    cmd="$(comm -3 "$tmp_dir/.vars.before" "$tmp_dir/.vars.after" | perl -ne 's#\s+##g;print "\n $_ "' )"
    echo -e "$cmd"
}
The printenv command:
printenv prints all environment variables along with their values. (Note that it only shows exported variables, so plain unexported script variables won't appear.)
Good Luck...
A simple way to do this is to snapshot the shell's variables with set before and after your script's own code runs, and use diff to keep only the ones your script defined:
# Add this line at the top of your script:
set > /tmp/old_vars.log
# Add this line at the end of your script:
set > /tmp/new_vars.log
# Alternatively you can remove unwanted variables with grep (e.g., passwords):
set | grep -v "PASSWORD1=\|PASSWORD2=\|PASSWORD3=" > /tmp/new_vars.log
# Now you can compare to extract the variables of your script:
diff /tmp/old_vars.log /tmp/new_vars.log | grep "^>" > /tmp/script_vars.log
You can now retrieve variables of your script in /tmp/script_vars.log.
Or at least something based on that!
TL;DR
In zsh: typeset -m¹ <GLOBPATH>
$ VARIABLE1=abc
$ VARIABLE2=def
$ VARIABLE3=ghi
$ noglob typeset -m VARIABLE*
VARIABLE1=abc
VARIABLE2=def
VARIABLE3=ghi
¹ documentation for typeset can be found in man zshbuiltins, or man zshall.
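For bash, a rough equivalent is prefix name expansion (a sketch; note that if no name matches, "${!VARIABLE@}" expands to nothing and declare -p would then dump every variable):
declare -p "${!VARIABLE@}"
or, for plain NAME=VALUE output:
for v in "${!VARIABLE@}"; do printf '%s=%s\n' "$v" "${!v}"; done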

Would a "shell function" or "alias" be appropriate for this use

I'm currently trying to create an alias or shell function which I can run to check my battery life, in an attempt to familiarize myself with aliases and bash. I have run into a problem where I'm not receiving any feedback from my command and cannot verify whether it's working, or whether there are steps I have left out that would give me my desired result.
Current .bashrc alias:
alias battery='upower -i $(upower -e | grep -e 'BAT'| grep -E "state|to\ full|percentage")'
Desired use:
b#localhost:~$ battery
Desired result:
state: discharging Time to empty: x.x Hours percentage: xx%
I have read through the bash references for something that might help me here, but wasn't able to find anything that I think applies. Thanks for your consideration!
As @bannji already announced in a comment, he has fixed his command.
Old incorrect alias
'upower -i $(upower -e | grep -e 'BAT'| grep -E "state|to\ full|percentage")'
New correct alias
'upower -i $(upower -e | grep -e "BAT") | grep -E "state|to\ full|percentage"'
Most comments were talking about the interpretation of the quotes. That was not the problem here. The main difference is where the command substitution is closed. In the first case the command substitution is closed after the last grep, so upower -i gets nothing.
In the second command the second grep filters the output of upower -i.
The difference in quotes is interesting in another example.
addone() {
    ((sum=$1+1))
    echo "${sum}"
}
i=1
alias battery='addone $(addone $i)'
i=4
battery
# other alias
i=1
alias battery2='addone $(addone '$i')'
i=4
battery2
Both battery commands will try to add 2 to the value of $i, but will give different results.
The command battery will add 2 to the current value 4 of $i, resulting in 6.
The command battery2 will add 2 to the value of $i at the moment that the alias was defined, resulting in 3.
Why?
In battery2 the string $i is surrounded by single quotes, but those single quotes are inside other ones. The result is that $i is evaluated when the alias is defined, and the alias becomes
alias battery2='addone $(addone 1)'
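As for the title question: because this command embeds a pipeline plus a command substitution, a shell function sidesteps the nested-quoting problem entirely. A sketch using the same commands as the corrected alias:
battery() {
    upower -i "$(upower -e | grep BAT)" | grep -E 'state|to full|percentage'
}
Functions are generally the better fit for anything beyond simple word replacement: they take arguments and follow ordinary quoting rules, with none of the define-time expansion surprises shown above.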

Prevent a command being executed from a source'd file in Bash

For security purposes, how can I prevent a command being executed in a file that is source'd?
For example:
#!/bin/sh
source file.cfg
Wanted:
get="value"
Unintended:
command
You could use a mechanism like Python's if __name__ == "__main__" guard: define variables and/or functions, and put executable commands into a conditional block:
#!/bin/bash
# Variables and functions come here
a=1
b=2
function foo() {
    echo "bar"
}
# Put executable commands here
if [ "$0" = "$BASH_SOURCE" ] ; then
    foo
fi
If you chmod +x the file and run it, or run it through bash file.sh, the executable commands in the conditional block will be executed. If you source the file, only the variables and functions will be imported.
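A quick sketch of the observable difference, assuming the snippet above is saved as file.sh:
$ bash file.sh      # $0 and $BASH_SOURCE are both file.sh, so foo runs
bar
$ source file.sh    # $0 is the interactive shell, so the guarded block is skipped
$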
Long story short, you can't. We could debate how to try to prevent some commands from being executed, but if security is the major concern here, source is a no-go. You are looking for a proper configuration facility, while source is intended to execute code.
For example, the following code provides a trivial key-value configuration file parsing:
while read -r x; do
    declare +x -- "${x}"
done < file.cfg
But this is both far from the flexibility source gives you and far from a perfectly secure solution. It doesn't handle any specific escaping, multi-line variables, comments… and it also doesn't filter the assigned variables, so the config can override your precious variables. The extra +x argument to declare ensures that the config file at least won't modify the environment exported to programs.
If you really want to go this route, you can try to improve this. But if you are really worried about security, you should think twice before using shell script at all. Writing proper shell script is not trivial at all, and it is full of pitfalls.
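If you do go this route anyway, filtering against a whitelist of expected keys closes the variable-override hole; a sketch (the key names besides "get" are made up for illustration):
while IFS='=' read -r key value; do
    case "$key" in
        get|user|timeout)   # accept only known keys
            declare +x -- "$key=$value" ;;
        *)                  # skip unknown keys, comments, malformed lines
            ;;
    esac
done < file.cfg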
Something basic, might work:
name="$(sed -n 1p < source_file | grep -o 'name="[^"]*' | grep -o '[^"]*$')"
lastname="$(sed -n 2p < source_file | grep -o 'lastname="[^"]*' | grep -o '[^"]*$')"
age="$(sed -n 3p < source_file | grep -o 'age="[^"]*' | grep -o '[^"]*$')"
Next, check whether the parameters meet certain standards, for example whether a value matches a name in a database ($LIST_NAMES) or whether a string has a certain number of characters, etc.
if ! grep -Fox "$name" <<<"$LIST_NAMES"; then exit 1; fi
if [ $(wc -c <<<"$age") -gt 3 ]; then exit 1; fi
Then take only the useful lines, to exclude the rest:
head -n3 < source_file > source_file.tmp
source 'source_file.tmp'
