How to export variables from Bash into tcl? - bash

I have variables A, B, and C.
I have written
export A=$A
export B=$B
export C=$C
Not sure how to carry the variables over into a tcl script. What I currently have written in the tcl script is
puts "$A == $::env(A)"
puts "$B == $::env(B)"
puts "$C == $::env(C)"
But that doesn't work. I have tried with and without the first $.

Any of the following ways works:
$ export A="a"
$ echo "puts $::env(A)" | tclsh
a
or
$ export B="b"; echo "puts $::env(B)" | tclsh
b
or
$ echo "puts $::env(C)" | C="c" tclsh
c

There are two ways. Either you export the variable (there are a few ways to do that) or you assign it as part of the call to tclsh. Using export:
export B="b"
echo 'puts "the B environment variable is $::env(B)"' | tclsh
B="b"
export B
echo 'puts "the B environment variable is $::env(B)"' | tclsh
Assigning as part of the call (NB: no semicolons and the variable assignment is close to the actual call to tclsh):
echo 'puts "the B environment variable is $::env(B)"' | B="b" tclsh
For anything complex or large, try to avoid passing it via environment variables (or command line arguments). Using files works better in those cases. For anything secret, DO NOT use either command line arguments or environment variables as neither is a secure communication mechanism, but files (with appropriate permissions, including on the containing directory) are sufficiently secure.
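A minimal sketch of that file-based handoff (the path and token value are invented for the demo):

```shell
tmpdir=$(mktemp -d)              # mktemp -d creates the directory mode 700
printf '%s' 's3cret' > "$tmpdir/token"
chmod 600 "$tmpdir/token"        # readable by the owner only
# a child process reads the file; the value never appears in argv or the env
token=$(cat "$tmpdir/token")
echo "$token"
rm -rf "$tmpdir"
```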

How can I save environment variables in a file using BASH? [duplicate]

I have two shell scripts that I'd like to invoke from a C program. I would like shell variables set in the first script to be visible in the second. Here's what it would look like:
a.sh:
var=blah
<save vars>
b.sh:
<restore vars>
echo $var
The best I've come up with so far is a variant on "set > /tmp/vars" to save the variables and "eval $(cat /tmp/vars)" to restore them. The "eval" chokes when it tries to restore a read-only variable, so I need to grep those out. A list of these variables is available via "declare -r". But there are some vars which don't show up in this list, yet still can't be set in eval, e.g. BASH_ARGC. So I need to grep those out, too.
At this point, my solution feels very brittle and error-prone, and I'm not sure how portable it is. Is there a better way to do this?
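To see why the raw eval chokes, here is the failure in isolation (the variable name ro is made up); the assignment is attempted in a subshell so the demo itself keeps running:

```shell
# Why a naive `eval $(cat /tmp/vars)` chokes: the dump produced by `set`
# includes readonly variables, and assigning to one fails.
readonly ro=1
( eval 'ro=2' ) 2>/dev/null || echo "eval failed on readonly variable ro"
echo "ro is still $ro"
```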
One way to avoid setting problematic variables is by storing only those which have changed during the execution of each script. For example,
a.sh:
set > /tmp/pre
foo=bar
set > /tmp/post
grep -v -F -f/tmp/pre /tmp/post > /tmp/vars
b.sh:
eval $(cat /tmp/vars)
echo $foo
/tmp/vars contains this:
PIPESTATUS=([0]="0")
_=
foo=bar
Evidently evaling the first two lines has no adverse effect.
If you can use a common prefix on your variable names, here is one way to do it:
# save the variables
yourprefix_width=1200
yourprefix_height=2150
yourprefix_length=1975
yourprefix_material=gravel
yourprefix_customer_array=("Acme Plumbing" "123 Main" "Anytown")
declare -p $(echo ${!yourprefix@}) > varfile
# load the variables
while read -r line
do
if [[ $line == declare\ * ]]
then
eval "$line"
fi
done < varfile
Of course, your prefix will be shorter. You could do further validation upon loading the variables to make sure that the variable names conform to your naming scheme.
The advantage of using declare is that it is more secure than just using eval by itself.
If you need to, you can filter out variables that are marked as readonly or select variables that are marked for export.
Other commands of interest (some may vary by Bash version):
export - without arguments, lists all exported variables using a declare format
declare -px - same as the previous command
declare -pr - lists readonly variables
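A quick demonstration of those listing commands (the variable names are invented for the demo):

```shell
# readonly and exported variables show up under the matching declare flags
readonly limit=10
export region=us-east-1
declare -pr | grep -q 'limit="10"' && echo "limit is readonly"
declare -px | grep -q 'region="us-east-1"' && echo "region is exported"
```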
If it's possible for a.sh to call b.sh directly, exported variables will carry over. Alternatively, have a parent script set all the necessary values and then call both. That's the most secure and reliable method I can think of.
Not sure if it's accepted dogma, but:
bash -c 'export foo=bar; env > xxxx'
env `cat xxxx` otherscript.sh
otherscript will then run with the environment that was printed to xxxx ...
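A self-contained sketch of that handoff, with a temp file standing in for xxxx. Note that env $(cat file) word-splits, so it breaks on values containing spaces or newlines; dumping only the variables you need sidesteps that:

```shell
envfile=$(mktemp)
# child shell exports foo and dumps just that variable to the file
bash -c 'export foo=bar; env | grep "^foo=" > '"$envfile"
# hand the saved assignment to another command via env
env $(cat "$envfile") bash -c 'echo "child sees foo=$foo"'
rm -f "$envfile"
```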
Update:
Also note:
man execle
On how to set environment variables for another system call from within C, if you need to do that. And:
man getenv
and http://www.crasseux.com/books/ctutorial/Environment-variables.html
An alternative to saving and restoring shell state would be to make the C program and the shell program work in parallel: the C program starts the shell program, which runs a.sh, then notifies the C program (perhaps passing some information it's learned from executing a.sh), and when the C program is ready for more it tells the shell program to run b.sh. The shell program would look like this:
. a.sh
echo "information gleaned from a"
read -r arguments_for_b
. b.sh
And the general structure of the C program would be:
set up two pairs of pipes, one for C->shell and one for shell->C
fork, exec the shell wrapper
read information gleaned from a on the shell->C pipe
more processing
write arguments for b on the C->shell pipe
wait for child process to end
I went looking for something similar and couldn't find it either, so I made the two scripts below. To start, just say shellstate, then probably at least set -i and set -o emacs which this reset_shellstate doesn't do for you. I don't know a way to ask bash which variables it thinks are special.
~/bin/reset_shellstate:
#!/bin/bash
__="$PWD/shellstate_${1#_}"
trap '
declare -p >"'"$__"'"
trap >>"'"$__"'"
echo cd \""$PWD"\" >>"'"$__"'" # setting PWD did this already, but...
echo set +abefhikmnptuvxBCEHPT >>"'"$__"'"
echo set -$- >>"'"$__"'" # must be last before sed, see $s/s//2 below
sed -ri '\''
$s/s//2
s,^trap --,trap,
/^declare -[^ ]*r/d
/^declare -[^ ]* [A-Za-z0-9_]*[^A-Za-z0-9_=]/d
/^declare -[^ ]* [^= ]*_SESSION_/d
/^declare -[^ ]* BASH[=_]/d
/^declare -[^ ]* (DISPLAY|GROUPS|SHLVL|XAUTHORITY)=/d
/^declare -[^ ]* WINDOW(ID|PATH)=/d
'\'' "'"$__"'"
shopt -op >>"'"$__"'"
shopt -p >>"'"$__"'"
declare -f >>"'"$__"'"
echo "Shell state saved in '"$__"'"
' 0
unset __
~/bin/shellstate:
#!/bin/bash
shellstate=shellstate_${1#_}
test -s $shellstate || reset_shellstate $1
shift
bash --noprofile --init-file "$shellstate" -is "$@"
exit $?

bash script: -Xmx16M: command not found

I have a common run-class.sh file defined as follows:
#!/bin/bash
if [ -z "$MAIN_CLASS" ] ; then
echo "Do not run this script on its own. It's intended to be included in other commands."
exit 1
fi
JAVA_ARGS=-client -Xmx16M
export JAVA_ARGS
DIR=`dirname "$0"`
# set jars
JARS=
for JAR in $DIR/../lib/*.jar; do JARS=$JAR:$JARS; done
# set java classpath and export
CLASSPATH=$DIR/../conf/:$DIR/../conf/*:$JARS
export CLASSPATH
java $JAVA_ARGS $MAIN_CLASS "$@"
and another test-class.sh script as follows to invoke a java class:
#!/bin/bash
MAIN_CLASS="com.my.package.TestClass"
. run-class.sh
When I run the test-class.sh file as follows:
>./test-class.sh
I get a console message saying:
run-class.sh: line 8: -Xmx16M: command not found
I'm not sure why this is incorrect when I'm already exporting the JAVA_ARGS.
Use quotes in the JAVA_ARGS assignment:
JAVA_ARGS="-client -Xmx16M"
I find using bash arrays tends to make things more robust:
#!/bin/bash
if [ -z "$MAIN_CLASS" ] ; then
echo "Do not run this script on its own. It's intended to be included in other commands."
exit 1
fi
# use an array
java_args=(-client -Xmx16M)
dir=$(dirname "$0")
# set java classpath and export
cp=(
"$dir"/../.conf/
"$dir"/../.conf/"*" # I assume you want a literal star here
"$dir"/../lib/*.jar
)
export CLASSPATH=$( IFS=":"; echo "${cp[*]}" )
java "${java_args[#]}" "$MAIN_CLASS" "$#"
Other notes:
don't use ALL_CAPS variable names, except for environment variables.
read http://mywiki.wooledge.org/BashFAQ/050
You can set variables that are localized to a command.
Most everyone knows simple environment vars, set like this -
$: x=foo
$: echo $x
foo
But you can set a local override.
$: x=bar eval 'echo $x' # <<--- uses echo's local x
bar
$: echo $x
foo
(Just don't be fooled by a false test...
$: x=bar echo $x # $x parsed BEFORE passing to echo
foo
...which will confuse you if you don't realize echo received the value when the line was parsed, so didn't see the change.)
So, by saying
JAVA_ARGS=-client -Xmx16M
without quotes, the command interpreter is assuming this is what you are doing, and failing because -Xmx16M isn't found. By putting quotes around it you make the entire value part of the assignment.
JAVA_ARGS='-client -Xmx16M'
This will do what you wanted.
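The failing line can be reproduced with a harmless command in place of -Xmx16M:

```shell
# `JAVA_ARGS=-client -Xmx16M` parses as: run the command -Xmx16M with
# JAVA_ARGS=-client in its environment. The same shape with echo:
msg=hello echo demo            # runs `echo demo` with msg=hello in its env
JAVA_ARGS='-client -Xmx16M'    # quoted: the space is part of the value
echo "$JAVA_ARGS"
```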

how to import my unix variable inside my perl command..?

I am running below command on Bash prompt:
bash-3.2$ x=12
bash-3.2$ echo $x
12
bash-3.2$ perl -e '$age=$x; print "Age = $age\n"'
Age =
bash-3.2$
I am not getting the age printed! How can I import my bash variable into my Perl one-liner?
First you have to export x in the shell. Then you can access the variable from Perl as $ENV{x}.
$ x=12
$ export x
$ perl -e '$age=$ENV{x}; print "Age = $age\n"'
Age = 12
The other answer on this thread accesses the variable through the environment, which seems like a neater way.
Still, to demonstrate direct shell substitution (which has its uses), the right way to do it is like this:
perl -e '$age='"$x"'; print "Age = $age\n"'
perl sees this as its input: $age=12; print "Age = $age\n"
The single quotes for the -e parameter prevents the shell variable from being expanded. Use $ENV{'myvar'} to get the value of shell variable $myvar
And export the variable too as noted by the other answer.

How do I get command-line Perl to accept shell variables?

I can do math like
perl -e 'print 5253413/39151' -l
But I don't quite get how to take advantage of Perl's ability to do math with my own predefined bash variables. I've tried
var1=$(some wc command that yields a number); var2=$(some wc that yields another number)
perl -e 'print var1/var2' -l
But it doesn't work
There are two main ways to do this.
Within the Perl code you can use the %ENV built-in hash to access environment variables that are exported from the shell
$ export var1=5253413
$ export var2=39151
$ perl -E 'say $ENV{var1}/$ENV{var2}'
134.183366963807
You can use the shell interpolation facility to insert the value of a shell variable into a command
This is best done as parameters to the perl one-liner rather than introducing the values directly into the code
$ var1=5253413
$ var2=39151
$ perl -E '($v1, $v2) = @ARGV; say $v1/$v2' $var1 $var2
134.183366963807
Two less common ways to do this make use of long-standing perl features.
The first is the core module Env, which ties process environment variables to perl variables:
sh$ export VAR1=1000
sh$ export VAR2=3
sh$ perl -MEnv -E 'say $VAR1/$VAR2' # imports all environ vars
333.333333333333
sh$ perl -MEnv=VAR1,VAR2 -E 'say $VAR1/$VAR2' # imports only VAR1, VAR2
333.333333333333
Note that the variables need to be present in the environment inherited by the perl process, for example with export VAR as above, or explicitly for a single command (as by FOO=hello perl -MEnv -E 'say $FOO').
The second and rather more obscure way is to use perl's -s switch to set arbitrary variables from the command line:
sh$ VAR1=1000
sh$ VAR2=3
sh$ perl -s -E 'say $dividend/$divisor' -- -dividend=$VAR1 -divisor=$VAR2
333.333333333333
awk does something similar with its -v switch.
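For comparison, the awk equivalent of the -s example above (variable names invented for the demo):

```shell
var1=1000
var2=3
# -v assigns an awk variable before the program runs, much like perl -s
awk -v dividend="$var1" -v divisor="$var2" 'BEGIN { print dividend/divisor }'
```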
I believe the spirit of the question is to pass variables without exported ENV vars.
Besides using perl -s -e 'expression' -- -perlvar=val, below is code that uses two other mechanisms to pass variables to perl.
a=x; b=N; c=z;
b=y perl -e '$pa='$a';' -e "\$pc=$c;" -e 'print "$pa$ENV{b}$pc\n";'
echo $a$b$c
Passing a and c is the same; only the quoting differs. When chaining expressions like this, it is important to end each expression with a semicolon, because they are concatenated into one program.
Passing b is done by ENV, but instead of using the exported value, it is passed directly into perl's ENV by giving the assignment before the command on the same command-line.
Last the echo command is to emphasize how the shell's definition of $b is unchanged.
Using the mechanism of b's passing is also the most discreet of the three: the value never appears in the command-line argument list (which any user can see with ps), and the parent shell's own environment is left unchanged.

How to run "source" command (Linux) from a perl script?

I am trying to source a script from a Perl script (script.pl).
system ("source /some/generic/script");
Please note that this generic script could be a shell, python or any other script. Also, I cannot replicate the logic present inside this generic script into my Perl script. I tried replacing system with ``, exec, and qx//. Each time I got the following error:
Can't exec "source": No such file or directory at script.pl line 18.
I came across many forums on the internet, which discussed various reasons for this problem. But none of them provided a solution. Is there any way to run/execute source command from a Perl script?
In bash, etc, source is a builtin that means read this file, and interpret it locally (a little like a #include).
In this context that makes no sense - you either need to remove source from the command and have a shebang (#!) line at the start of the shell script that tells the system which shell to use to execute that script, or you need to explicitly tell system which shell to use, e.g.
system "/bin/sh", "/some/generic/script";
[with no comment about whether it's actually appropriate to use system in this case].
There are a few things going on here. First, a child process can't change the environment of its parent. That source would only last as long as its process is around.
Here's a short program that sets and exports an environment variable.
#!/bin/sh
echo "PID" $$
export HERE_I_AM="JH";
Running the file does not affect the calling shell. The file runs in its own process. The process IDs ($$) are different in set_stuff.sh and the shell:
$ chmod 755 set_stuff.sh
$ ./set_stuff.sh
PID 92799
$ echo $$
92077
$ echo $HERE_I_AM # empty
source is different. It reads the file and evaluates it in the shell. The process IDs are the same in set_stuff.sh and the shell, so the file is actually affecting its own process:
$ unset HERE_I_AM # start over
$ source set_stuff.sh
PID 92077
$ echo $$
92077
$ echo $HERE_I_AM
JH
Now on to Perl. Calling system creates a child process (there's an exec in there somewhere) so that's not going to affect the Perl process.
$ perl -lwe 'system( "source set_stuff.sh; echo \$HERE_I_AM" );
print "From Perl ($$): $ENV{HERE_I_AM}"'
PID 92989
JH
Use of uninitialized value in concatenation (.) or string at -e line 1.
From Perl (92988):
Curiously, this works even though your version doesn't. I think the difference is that there are no special shell metacharacters here, so Perl tries to exec the program directly, skipping the shell it just used for my more complicated string:
$ perl -lwe 'system( "source set_stuff.sh" ); print $ENV{HERE_I_AM}'
Can't exec "source": No such file or directory at -e line 1.
Use of uninitialized value in print at -e line 1.
But, you don't want a single string in that case. The list form is more secure, but source isn't a file that anything can execute:
$ which source # nothing
$ perl -lwe 'system( "source", "set_stuff.sh" ); print "From Perl ($$): $ENV{HERE_I_AM}"'
Can't exec "source": No such file or directory at -e line 1.
Use of uninitialized value in concatenation (.) or string at -e line 1.
From Perl (93766):
That is, you can only use source by way of something that invokes a shell.
Back to your problem. There are various ways to tackle this, but we need to get the output of the program. Instead of system, use backticks. That's a double-quoted context, so I need to protect some literal $s that I want to pass as part of the shell commands:
$ perl -lwe 'my $o = `echo \$\$ && source set_stuff.sh && echo \$HERE_I_AM`; print "$o\nFrom Perl ($$): $ENV{HERE_I_AM}"'
Use of uninitialized value in concatenation (.) or string at -e line 1.
93919
From Shell PID 93919
JH
From Perl (93918):
Inside the backticks, you get what you like. The shell program can see the variable. Once back in Perl, it can't. But, I have the output now. Let's get more fancy. Get rid of the PID stuff because I don't need to see that now:
#!/bin/sh
export HERE_I_AM="JH";
And the shell command creates some output that has the name and value:
$ perl -lwe 'my $o = `source set_stuff.sh && echo HERE_I_AM=\$HERE_I_AM`; print $o'
HERE_I_AM=JH
I can parse that output and set variables in Perl. Now Perl has imported part of the environment of the shell program:
$ perl -lwe 'my $o = `source set_stuff.sh && echo HERE_I_AM=\$HERE_I_AM`; for(split/\R/,$o){ my($k,$v)=split/=/; $ENV{$k}=$v }; print "From Perl: $ENV{HERE_I_AM}"'
From Perl: JH
Let's get the entire environment, though. env outputs every value in the way I just processed it:
$ perl -lwe 'my $o = `source set_stuff.sh && env | sort`; print $o'
...
DISPLAY=:0
EC2_PATH=/usr/local/ec2/ec2-api-tools
EDITOR=/usr/bin/vi
...
I have a few hundred variables set in the shell, and I don't want to expose most of them. Those are all in the Perl process's environment, so I can temporarily clear out %ENV:
$ perl -lwe 'local %ENV=(); my $o = `source set_stuff.sh && env | sort`; print $o'
HERE_I_AM=JH
PWD=/Users/brian/Desktop/test
SHLVL=1
_=/usr/bin/env
Put that together with the post processing code and you have a way to pass that information back up to the parent.
This is, by the way, similar to how you'd pass variables back up to a parent shell process. Since that output is already something the shell understands, you use the shell's eval instead of parsing it.
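A self-contained sketch of the round trip described above: run the script in a child shell, dump its resulting environment in shell syntax, filter the variable we care about, and eval it in the parent. (The script is recreated here so the example stands alone; in practice you'd source your real file.)

```shell
script=$(mktemp)
cat > "$script" <<'EOF'
export HERE_I_AM=JH
EOF
# run in a child shell, keep only the line for HERE_I_AM, eval it up here
eval "$(bash -c ". '$script' && env" | grep '^HERE_I_AM=')"
echo "parent now sees: $HERE_I_AM"
rm -f "$script"
```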
You can't. source is a shell function that 'imports' the contents of that script into your current environment. It's not an executable.
You can replicate some of its functionality by rolling your own - run or parse whatever you're 'sourcing' and capture the result:
print `. file_to_source; echo \$somevar`;
or similar.