How do I get command-line Perl to accept shell variables? - bash

I can do math like
perl -e 'print 5253413/39151' -l
But I don't quite get how to take advantage of Perl's ability to do math with my own predefined bash variables. I've tried
var1=$(some wc command that yields a number); var2=$(some wc command that yields another number)
perl -e 'print var1/var2' -l
But it doesn't work

There are two main ways to do this.
Within the Perl code you can use the %ENV built-in hash to access environment variables that are exported from the shell
$ export var1=5253413
$ export var2=39151
$ perl -E 'say $ENV{var1}/$ENV{var2}'
134.183366963807
You can use the shell interpolation facility to insert the value of a shell variable into a command
This is best done as parameters to the perl one-liner rather than introducing the values directly into the code
$ var1=5253413
$ var2=39151
$ perl -E '($v1, $v2) = @ARGV; say $v1/$v2' $var1 $var2
134.183366963807

Two less common ways to do this make use of long-standing perl features.
The first is the core module Env, which ties process environment variables to perl variables:
sh$ export VAR1=1000
sh$ export VAR2=3
sh$ perl -MEnv -E 'say $VAR1/$VAR2' # imports all environ vars
333.333333333333
sh$ perl -MEnv=VAR1,VAR2 -E 'say $VAR1/$VAR2' # imports only VAR1, VAR2
333.333333333333
Note that the variables need to be present in the environment inherited by the perl process, for example with export VAR as above, or explicitly for a single command (as by FOO=hello perl -MEnv -E 'say $FOO').
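For instance, a one-shot assignment on the command line (the variable name VAR3 is made up for this example) could look like:
sh$ VAR3=7 perl -MEnv=VAR3 -E 'say $VAR3 * 6'
42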
The second and rather more obscure way is to use perl's -s switch to set arbitrary variables from the command line:
sh$ VAR1=1000
sh$ VAR2=3
sh$ perl -s -E 'say $dividend/$divisor' -- -dividend=$VAR1 -divisor=$VAR2
333.333333333333
awk does something similar with its -v switch.
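For comparison, a rough awk equivalent of the example above (same shell variables) might be:
sh$ awk -v dividend="$VAR1" -v divisor="$VAR2" 'BEGIN { print dividend/divisor }'
333.333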

I believe the spirit of the question is to pass variables without exporting ENV vars.
Besides using perl -s -e expression -perlvar=val, below is code that uses two other mechanisms to pass variables to perl.
a=x; b=N; c=z;
b=y perl -e '$pa='$a';' -e "\$pc=$c;" -e 'print "$pa$ENV{b}$pc\n";'
echo $a$b$c
Passing a and c works the same way; only the quoting differs. When chaining -e expressions like this, it is important to end each expression with a semicolon, because they are joined into one program in the end.
Passing b is done through %ENV, but instead of using an exported value, it is placed directly into perl's environment by putting the assignment before the command on the same command line.
Lastly, the echo command emphasizes that the shell's own definition of $b is unchanged.
Passing b that way also gives a somewhat more contained solution: the shell's own environment never holds the value, and it does not appear in the command-line argument list.
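A quick way to see that behaviour (using a throwaway variable b purely for illustration) might be:
unset b
b=y perl -E 'say $ENV{b}'       # prints y; perl gets b through its environment
echo "${b:-unset}"              # prints unset; the shell's own b was never set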

Related

Perl replace content of a file with variable

I have an a.md file:
Start file
```
replace content here by a variable or result of a command such as pwd,
the whole content in this code block should be something like /Users/danny/MyProjects/CurrentDir
```
End file
And a script in update.sh to update the file above:
#!/bin/sh
t=$(pwd)
echo $t
perl -i -p0e 's/(?<=```shell\n)[\s\S]*(?=\n```)/$t/s' a.md
It works fine with a string but I cannot replace with a variable such as $(pwd).
A Perl command-line program inside that shell script cannot use the script's variables directly; we need to pass them to it somehow.
There are a few ways to do that,† and perhaps using the -s switch is simplest here
#!/bin/bash
t=$(pwd)
echo $t
perl -i -0777 -s -pe's/(?<=```shell\n)[\s\S]*(?=\n```)/$d/s' -- -d="$t" a.md
# or
# perl -i -0777 -s -pe'...' -- -d="$(pwd)" a.md
The -s switch for perl enables basic support for switches for the program itself.
So -d=... becomes available as $d in the program, either with the assigned value (here the value of the bash variable $t) or 1 if nothing is assigned (just -d). We can pass multiple variables this way, each in its own switch (see the sketch after this note).
The -- after the program marks the start of its arguments.
We do not need a shell variable but can directly use a command's output, -var="$(pwd)".
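For illustration, a combined sketch (the switch names label and dir are just examples) that passes one switch from a variable and one straight from a command substitution:
perl -s -E 'say "$label: $dir"' -- -label="$USER" -dir="$(pwd)"
Inside the program, $label and $dir then hold whatever was assigned on the command line.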
† Aside from using storage (files, databases, etc) or pipes, there are two more ways to directly pass arguments given to this command-line ("one-liner") program
Pass the variable as an argument and read it from @ARGV in the program
t=$(pwd)
perl -i -0777 -pe'BEGIN { $d = shift }; s/.../$d/s' "$t" a.md
We also need to remove it from @ARGV so that the files can then be processed, which is what shift does, and we need to do this in a BEGIN block since -p wraps the code in a loop.
Export the variable in bash, making it an environment variable, and the Perl program can then use it via the %ENV hash
export t=$(pwd)
perl -i -0777 -pe's/.../$ENV{t}/s' a.md
Note that it's t in the %ENV hash ($ENV{t}), i.e. the name of the variable, not $t (its value).

Serialize a subset of environment variables

I'm trying to export some environment variables for use by a Tomcat process.
There are a few ways to do this (I know how to solve the overall problem), but it bugged me that I didn't know how to do this particular shell task.
Tomcat recommends that all your environment customizations should be exported by "$CATALINA_HOME/bin/setenv.sh".
This whole thing is gonna be stuffed into a Docker container, so the only parameterizability will be via Docker env variables (let's assume for this task that I don't want to use volume mounts or create setenv.sh during the build process).
First, observe that docker run -e can be used to pass environment into the container:
🍔 docker run -eMY_VAR=SUP alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=a528b6fc264b
MY_VAR=SUP
no_proxy=*.local, 169.254/16
HOME=/root
If we wanted to copy all of that env into setenv.sh, it's as simple as:
SETENV="/usr/local/tomcat/bin/setenv.sh"
echo '#!/bin/sh' > "$SETENV"
echo 'export -p' >> "$SETENV"
env >> "$SETENV"
But copying everything somewhat defeats the point of setenv.sh -- which is, to give your tomcat process a clean environment, with only intentional customizations.
So, we can agree on a convention for "which env vars are ones that we want to pass through to setenv.sh". Everything prefixed with MY_.
And now we get to an interesting shell problem.
env | grep '^MY_' | sed 's/^MY_/EXPORT /'
This gets us pretty close. Output looks like:
🍔 docker run -e MY_VAR=hey alpine sh -c "env | grep '^MY_' | sed 's/^MY_/EXPORT /'"
EXPORT VAR=hey
So, we've selected from the env command: only env vars prefixed with MY_. And we can redirect that output to setenv.sh.
Why do I say "pretty close"? Looks like we're done, right?
Try this for size:
🍔 docker run -e MY_VAR='multi
quote> line
quote> string' alpine sh -c "env | grep '^MY_' | sed 's/^MY_/EXPORT /'"
EXPORT VAR=multi
The script only worked for a simple subset of possibilities; i.e., we only managed to export the first line of our multi-line string.
For your convenience: env output for multi-line strings looks like this:
🍔 docker run -e MY_VAR='multi
line
string' alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=0d0afaac6bec
MY_VAR=multi
line
string
no_proxy=*.local, 169.254/16
HOME=/root
I hesitate to try and tackle this using awk; there may be further string escaping complications that I have not considered.
I wonder whether there's a better way altogether to select & serialize a subset of exported environment?
EDIT: I negligently tagged this as a bash question, when really my intention was to pose an sh question. Specifically my intention is to get something that will work with no dependencies other than those that come with the alpine docker image. i.e. BusyBox sh, sed, grep, awk, env.
I've retained the bash tag so as not to punish the initial answer that was submitted when this was a bash-only question.
But I will give preference to an sh-compatible answer, and in particular to one that works with just the BusyBox UNIX utils.
So you need several things:
Enumerate the environment variables and select a subset.
For each selected environment variable, emit sh code that sets the variable to the desired value.
You can use export -p if you want to export all variables in a form that can be read back in, but parsing it to select only certain variables is harder. One way to make use of export -p is to unset the other variables. This only works if none of the environment variables is read-only, but you can work around that by running a separate shell instance (as opposed to a subshell).
To gather the list of variables to unset, you only need to get a superset of the list of all environment variables, and remove the ones you want to keep. You can easily do that by filtering the env output. I do that with a simple grep; you may want to use more complex code if your criteria for inclusion are more complex than “begins with a specific prefix”.
The occasional false positive due to a variable containing a newline followed by a valid variable name and an equal sign will only lead to calling unset on a non-existent variable, which does nothing. The desired variables are removed from the exclusion list, so the final output will never omit a desired variable.
excluded=$(env | LC_ALL=C sed -n 's/^\([A-Z_a-z][0-9A-Z_a-z]*\)=.*/\1/p' |
           grep -v '^MY_')
sh -c 'unset $1; export -p' sh "$excluded" >setenv.sh
Dash prints an extra export PATH (with no value) if PATH was in the environment when it was invoked. If that bothers you, change sh -c … to (unset PATH; sh -c …).
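Applied to the command above, that workaround would look roughly like this:
(unset PATH; sh -c 'unset $1; export -p' sh "$excluded") >setenv.sh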
Assuming GNU grep:
grep --null-data '^MY_' </proc/self/environ
...will emit your environment variables in NUL-delimited form (newlines intact).
Similarly, if you have bash:
while IFS= read -r -d '' vardef; do
    [[ $vardef = MY_* ]] && printf '%s\0' "$vardef"
done </proc/self/environ
Note that if these variables were set in the same shell session, you may need to create a subprocess for /proc/self/environ to be updated:
(while IFS= read -r -d '' vardef; do
    [[ $vardef = MY_* ]] && printf '%s\0' "$vardef"
done </proc/self/environ)
The alpine image doesn't ship with bash.
You can use this script to extract all MY_* variables, including values that contain newlines:
docker run -e MY_FOO=bar -e MY_VAR="multi' export MY_INJECTED='val" -e MY_VAR2=$'multi
0MY_line=val
string' alpine sh -c "awk -v RS='\06' -F= '/^MY_/{k=\$1; sub(/^[^=]+=/, \"\");
gsub(/\047/, \"\047\\\\\\047\047\"); printf \"export %s=\047%s\047\n\", k, \$0
}' /proc/self/environ"
This will output:
export MY_FOO='bar'
export MY_VAR='multi'\'' export MY_INJECTED='\''val'
export MY_VAR2='multi
0MY_line=val
string'
Here is how the awk command works:
-v RS='\06': sets the record separator to \06, which works for the NUL byte as well (assuming you don't have \06 in a value)
-F=: sets the field separator to =
/^MY_/: only processes records starting with MY_
k=$1: stores the variable name ($1) in the variable k
sub(/^[^=]+=/, ""): strips everything up to the first =, leaving just the value in $0
gsub(/\047/, ...): escapes any single quotes embedded in the value
printf: formats the output so that it can be used in the $CATALINA_HOME/bin/setenv.sh file
\047 is the octal escape for a single quote
What about
declare -p ${!MY_*}
and
declare -p ${!MY_*} | sed -r 's/^declare (-[^ ]*)* MY_/export /'
or
declare -p ${!MY_*} | sed 's/^declare \(-[^ ]*\)* MY_/export /'
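For instance, with a matching variable exported in the current bash session, this is roughly what you would get:
$ export MY_VAR=hey
$ declare -p ${!MY_*} | sed 's/^declare \(-[^ ]*\)* MY_/export /'
export VAR="hey"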
EDIT: POSIX-compliant version:
Some env or printenv implementations accept a -0 option to end each output line with \0 rather than a newline. Thus:
env -0 | perl -ne 'BEGIN{$/="\0";$\="\n";$q="\047"}next unless /^MY_/;chomp;s/$q/$q\\$q$q/g;s/=/=$q/;s/$/$q/;print'
How it works
$/ : input record separator
$\ : output record separator
$q : variable to store single quote (\047) because of surrounding single quotes in command
next : to filter "MY_" variables
chomp : removes the input separator
s/// : quote substitution
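For example, on a system where env -0 is available, a value containing a single quote comes out correctly quoted:
$ export MY_FOO="it's bar"
$ env -0 | perl -ne '...'        # the one-liner above
MY_FOO='it'\''s bar'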
EDIT: a variation of the perl version in POSIX shell:
env -0 | xargs -0 sh -c 'for entry; do case $entry in MY_*) ;; *) continue ;; esac; printf "%s=\047%s\047\n" "${entry%%=*}" "$(echo "${entry#*=}" | sed '\''s/\x27/\x27\\\x27\x27/g'\'' )"; done' -

How to import my unix variable inside my perl command?

I am running below command on Bash prompt:
bash-3.2$ x=12
bash-3.2$ echo $x
12
bash-3.2$ perl -e '$age=$x; print "Age = $age\n"'
Age =
bash-3.2$
I am not getting the age/number printed! How shall I import my unix bash variable inside my perl command?
First you have to export x in the shell. Then you can access the variable from Perl as $ENV{x}.
$ x=12
$ export x
$ perl -e '$age=$ENV{x}; print "Age = $age\n"'
Age = 12
The other answer on this thread accesses the variable directly through the environment, which seems like a neater way.
Still, to demonstrate a way to use direct shell substitution (which has its uses), the right way to do it would be like this:
perl -e '$age='"$x"'; print "Age = $age\n"'
After shell expansion, perl receives this as its -e program: $age=12; print "Age = $age\n"
The single quotes around the -e parameter prevent the shell variable from being expanded. Use $ENV{'myvar'} to get the value of the shell variable $myvar.
And export the variable too, as noted in the other answer.

Bash - File content in perl print statement

I'm writing a format string exploit script for a vulnerable program.
I'm able to exploit the vulnerability by executing the program with the following input:
./vulnerable `perl -e 'print "\x11\x11\x11\x40\x99\x04\x08"'`'AAAAx%11$n'
Here \x40\x99\x04\x08 is the address of a variable in vulnerable.
I want to write a script that generates this input without a hard-coded address.
In my script I retrieve the address of the variable and store it in address.txt.
Then I try to call vulnerable from my script as I did before, but with the content of address.txt:
./vulnerable $(perl -e 'print "\x11\x11\x11$(<address.txt)"')'AAAAx%11$n'
The content of address.txt is \x40\x99\x04\x08 so there is something wrong with the way I provide the content of address.txt to the perl print statement.
I've also tried leaving out the $() around perl:
./vulnerable `perl -e 'print "\x11\x11\x11$(<address.txt)"'`'AAAAx%11$n'
But this renders the same result.
What am I doing wrong?
Single quotes don't expand $(...).
./vulnerable $(perl -e 'print "\x11\x11\x11'$(<address.txt)'"')'AAAAx%11$n'
#                                           <------------->
#                      <------------------->               <->
#            <------------------------------------------------><---------->

How do I pass command line options to a Perl program with perl -e?

I want to pass command line options that start with a dash (- or --) to a Perl program I am running with the -e flag:
$ perl -E 'say @ARGV' -foo
Unrecognized switch: -foo (-h will show valid options).
Passing arguments that don't start with a - obviously works:
$ perl -E 'say @ARGV' foo
foo
How do I properly escape those so the program reads them correctly?
I tried a bunch of variations like \-foo, \\-foo, '-foo', '\-foo', '\\-foo'. None of those work though some produce different messages. \\-foo actually runs and outputs \-foo.
You can use the -s switch, like:
perl -se 'print "got $some\n"' -- -some=SOME
the above prints:
got SOME
From the perlrun documentation:
-s enables rudimentary switch parsing for switches on the command line after the program name but before any filename arguments (or before an argument of --). Any switch found there is removed from @ARGV and sets the corresponding variable in the Perl program. The following program prints "1" if the program is invoked with a -xyz switch, and "abc" if it is invoked with -xyz=abc.
#!/usr/bin/perl -s
if ($xyz) { print "$xyz\n" }
Do note that a switch like --help creates the variable "${-help}", which is not compliant with use strict "refs". Also, when using this option on a script with warnings enabled you may get a lot of spurious "used only once" warnings.
For simple arg passing, use --, like:
perl -E 'say "@ARGV"' -- -some -xxx -ddd
prints
-some -xxx -ddd
Just pass -- before the flags that are to go to the program, like so:
perl -e 'print join("/", @ARGV)' -- -foo bar
prints
-foo/bar

Resources