pass GLOBIGNORE to a bash invocation - bash

The bash manual page states
If the shell is started with the effective user (group) id not equal to
the real user (group) id, [...] the SHELLOPTS, BASHOPTS, CDPATH, and
GLOBIGNORE variables if they appear in the environment, are ignored
So, normally, this is what happens:
> export GLOBIGNORE='*t*'
> echo *
afile
> bash -i
>> # look, the variable is passed through
>> $ echo $GLOBIGNORE
*t*
>> # but to no effect
>> $ echo *
afile anotherfile athirdfile
I do not think it would make much sense to fake the real user id just to get GLOBIGNORE passed through, given the other unwanted side effects that would bring.
Is it possible to make the subshell respect an exported GLOBIGNORE?

Some other shell hacks may come to the rescue. All of these solutions require at least modifying the shell invocation, but they make the subshell start up already prepared.
Since startup differs between interactive and non-interactive shells, two strategies are needed.
Interactive
When starting an interactive session, bash normally sources the default ~/.bashrc file. There is a switch to change where bash looks for this file. This can be exploited without losing the usual setup, as long as the file passed in sources the original location.
> echo 'GLOBIGNORE=*t*' > rc
> echo 'source ~/.bashrc' >> rc
> bash --rcfile rc -i
>> echo *
afile
Non-Interactive, Modifiable Command String
As Cyrus already pointed out, one could simply augment the command with the assignment so that it happens inside the subshell to begin with.
> bash -c 'GLOBIGNORE="*t*" ; echo *'
Fully Automated
If modifying the passed command should be avoided, another special variable can be employed: BASH_ENV, which names a script to source when starting a non-interactive session. With it, a strategy similar to --rcfile arises.
> echo 'GLOBIGNORE=*t*' > rc
> BASH_ENV=rc bash -c "echo *"
afile
Or, to be even sleazier and avoid the temporary file rc altogether, we can force piping. This is clearly not an intended use, since the value - is not regarded as standard input and /dev/stdin has to be named explicitly.
> echo 'GLOBIGNORE=*t*' | BASH_ENV=/dev/stdin bash -c "echo *"
afile

Related

How can I save environment variables in a file using BASH? [duplicate]

I have two shell scripts that I'd like to invoke from a C program. I would like shell variables set in the first script to be visible in the second. Here's what it would look like:
a.sh:
var=blah
<save vars>
b.sh:
<restore vars>
echo $var
The best I've come up with so far is a variant on "set > /tmp/vars" to save the variables and "eval $(cat /tmp/vars)" to restore them. The "eval" chokes when it tries to restore a read-only variable, so I need to grep those out. A list of these variables is available via "declare -r". But there are some vars which don't show up in this list, yet still can't be set in eval, e.g. BASH_ARGC. So I need to grep those out, too.
At this point, my solution feels very brittle and error-prone, and I'm not sure how portable it is. Is there a better way to do this?
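For reference, here is a minimal sketch of the brittle approach just described. The list of excluded names is illustrative only; as noted above, the real list varies by bash version, which is exactly the problem:
a.sh:
set > /tmp/vars
b.sh:
# grep out readonly and otherwise unsettable variables before restoring
eval "$(grep -vE '^(BASHOPTS|BASH_ARGC|BASH_ARGV|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID)=' /tmp/vars)"
echo $var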
One way to avoid setting problematic variables is by storing only those which have changed during the execution of each script. For example,
a.sh:
set > /tmp/pre
foo=bar
set > /tmp/post
grep -v -F -f/tmp/pre /tmp/post > /tmp/vars
b.sh:
eval $(cat /tmp/vars)
echo $foo
/tmp/vars contains this:
PIPESTATUS=([0]="0")
_=
foo=bar
Evidently evaling the first two lines has no adverse effect.
If you can use a common prefix on your variable names, here is one way to do it:
# save the variables
yourprefix_width=1200
yourprefix_height=2150
yourprefix_length=1975
yourprefix_material=gravel
yourprefix_customer_array=("Acme Plumbing" "123 Main" "Anytown")
declare -p $(echo ${!yourprefix@}) > varfile
# load the variables
while read -r line
do
    if [[ $line == declare\ * ]]
    then
        eval "$line"
    fi
done < varfile
Of course, your prefix will be shorter. You could do further validation upon loading the variables to make sure that the variable names conform to your naming scheme.
The advantage of using declare is that it is more secure than just using eval by itself.
If you need to, you can filter out variables that are marked as readonly or select variables that are marked for export.
Other commands of interest (some may vary by Bash version):
export - without arguments, lists all exported variables using a declare format
declare -px - same as the previous command
declare -pr - lists readonly variables
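For example, the readonly filtering mentioned above could be done like this (a sketch; the grep pattern assumes the single-line declare -p output shown earlier):
# save only prefixed variables that are not marked readonly
declare -p $(echo ${!yourprefix@}) | grep -Ev '^declare -[^ ]*r[^ ]* ' > varfile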
If it's possible for a.sh to call b.sh directly, exported variables will carry over. Alternatively, have a parent script set all the necessary values and then call both. That's the most secure and sure method I can think of.
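A minimal illustration of the export route:
a.sh:
export var=blah
./b.sh
b.sh:
echo $var   # prints: blah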
Not sure if it's accepted dogma, but:
bash -c 'export foo=bar; env > xxxx'
env `cat xxxx` otherscript.sh
otherscript.sh then runs with the environment that was dumped to xxxx ...
Update:
Also note:
man execle
On how to set environment variables for a program you exec from within C, if you need to do that. And:
man getenv
and http://www.crasseux.com/books/ctutorial/Environment-variables.html
An alternative to saving and restoring shell state would be to make the C program and the shell program work in parallel: the C program starts the shell program, which runs a.sh, then notifies the C program (perhaps passing some information it's learned from executing a.sh), and when the C program is ready for more it tells the shell program to run b.sh. The shell program would look like this:
. a.sh
echo "information gleaned from a"
read -r arguments_for_b
. b.sh
And the general structure of the C program would be:
set up two pairs of pipes, one for C->shell and one for shell->C
fork, exec the shell wrapper
read information gleaned from a on the shell->C pipe
more processing
write arguments for b on the C->shell pipe
wait for child process to end
I went looking for something similar and couldn't find it either, so I made the two scripts below. To start, just run shellstate, then probably at least set -i and set -o emacs, which reset_shellstate doesn't do for you. I don't know a way to ask bash which variables it considers special.
~/bin/reset_shellstate:
#!/bin/bash
__="$PWD/shellstate_${1#_}"
trap '
declare -p >"'"$__"'"
trap >>"'"$__"'"
echo cd \""$PWD"\" >>"'"$__"'" # setting PWD did this already, but...
echo set +abefhikmnptuvxBCEHPT >>"'"$__"'"
echo set -$- >>"'"$__"'" # must be last before sed, see $s/s//2 below
sed -ri '\''
$s/s//2
s,^trap --,trap,
/^declare -[^ ]*r/d
/^declare -[^ ]* [A-Za-z0-9_]*[^A-Za-z0-9_=]/d
/^declare -[^ ]* [^= ]*_SESSION_/d
/^declare -[^ ]* BASH[=_]/d
/^declare -[^ ]* (DISPLAY|GROUPS|SHLVL|XAUTHORITY)=/d
/^declare -[^ ]* WINDOW(ID|PATH)=/d
'\'' "'"$__"'"
shopt -op >>"'"$__"'"
shopt -p >>"'"$__"'"
declare -f >>"'"$__"'"
echo "Shell state saved in '"$__"'"
' 0
unset __
~/bin/shellstate:
#!/bin/bash
shellstate=shellstate_${1#_}
test -s $shellstate || reset_shellstate $1
shift
bash --noprofile --init-file shellstate_${1#_} -is "$@"
exit $?
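Usage might look like this (a sketch; the state name and the changed variable are illustrative):
$ shellstate work    # first run: a baseline is saved, then a prepared shell starts
$ foo=bar            # change some state interactively
$ exit               # the exit trap writes everything to shellstate_work
$ shellstate work    # next run: the saved state, including foo, is restored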

How to run "source" command (Linux) from a perl script?

I am trying to source a script from a Perl script (script.pl).
system ("source /some/generic/script");
Please note that this generic script could be a shell, python or any other script. Also, I cannot replicate the logic present inside this generic script into my Perl script. I tried replacing system with ``, exec, and qx//. Each time I got the following error:
Can't exec "source": No such file or directory at script.pl line 18.
I came across many forums on the internet, which discussed various reasons for this problem. But none of them provided a solution. Is there any way to run/execute source command from a Perl script?
In bash, etc, source is a builtin that means read this file, and interpret it locally (a little like a #include).
In this context that makes no sense - you either need to remove source from the command and have a shebang (#!) line at the start of the shell script that tells the system which shell to use to execute that script, or you need to explicitly tell system which shell to use, e.g.
system "/bin/sh", "/some/generic/script";
[with no comment about whether it's actually appropriate to use system in this case].
There are a few things going on here. First, a child process can't change the environment of its parent. That source would only last as long as its process is around.
Here's a short program that sets and exports an environment variable.
#!/bin/sh
echo "PID" $$
export HERE_I_AM="JH";
Running the file does not set the variable in your current shell, because the file runs in its own process. The process IDs ($$) are different in set_stuff.sh and the shell:
$ chmod 755 set_stuff.sh
$ ./set_stuff.sh
PID 92799
$ echo $$
92077
$ echo $HERE_I_AM # empty
source is different. It reads the file and evaluates it in the shell. The process IDs are the same in set_stuff.sh and the shell, so the file is actually affecting its own process:
$ unset HERE_I_AM # start over
$ source set_stuff.sh
PID 92077
$ echo $$
92077
$ echo $HERE_I_AM
JH
Now on to Perl. Calling system creates a child process (there's an exec in there somewhere) so that's not going to affect the Perl process.
$ perl -lwe 'system( "source set_stuff.sh; echo \$HERE_I_AM" );
print "From Perl ($$): $ENV{HERE_I_AM}"'
PID 92989
JH
Use of uninitialized value in concatenation (.) or string at -e line 1.
From Perl (92988):
Curiously, my version works even though yours doesn't. I think the difference is that yours contains no special shell metacharacters, so perl tries to exec the program directly, skipping the shell it had just used for my more complicated string:
$ perl -lwe 'system( "source set_stuff.sh" ); print $ENV{HERE_I_AM}'
Can't exec "source": No such file or directory at -e line 1.
Use of uninitialized value in print at -e line 1.
But, you don't want a single string in that case. The list form is more secure, but source isn't a file that anything can execute:
$ which source # nothing
$ perl -lwe 'system( "source", "set_stuff.sh" ); print "From Perl ($$): $ENV{HERE_I_AM}"'
Can't exec "source": No such file or directory at -e line 1.
Use of uninitialized value in concatenation (.) or string at -e line 1.
From Perl (93766):
That is, you can call source, but only as part of a command that itself invokes a shell.
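For example, handing the whole thing to a shell explicitly works (the PID line comes from the set_stuff.sh above; your numbers will differ):
$ perl -lwe 'system( "/bin/bash", "-c", "source set_stuff.sh && echo \$HERE_I_AM" )'
PID 94012
JH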
Back to your problem. There are various ways to tackle this, but we need to get the output of the program. Instead of system, use backticks. That's a double-quoted context, so I need to protect some literal $s that I want to pass as part of the shell command:
$ perl -lwe 'my $o = `echo \$\$ && source set_stuff.sh && echo \$HERE_I_AM`; print "$o\nFrom Perl ($$): $ENV{HERE_I_AM}"'
Use of uninitialized value in concatenation (.) or string at -e line 1.
93919
From Shell PID 93919
JH
From Perl (93918):
Inside the backticks, you get what you want: the shell program can see the variable. Once back in Perl, it can't, but I have the output now. Let's get more fancy and drop the PID stuff, because I don't need to see that anymore:
#!/bin/sh
export HERE_I_AM="JH";
And the shell command creates some output that has the name and value:
$ perl -lwe 'my $o = `source set_stuff.sh && echo HERE_I_AM=\$HERE_I_AM`; print $o'
HERE_I_AM=JH
I can parse that output and set variables in Perl. Now Perl has imported part of the environment of the shell program:
$ perl -lwe 'my $o = `source set_stuff.sh && echo HERE_I_AM=\$HERE_I_AM`; for(split/\R/,$o){ my($k,$v)=split/=/; $ENV{$k}=$v }; print "From Perl: $ENV{HERE_I_AM}"'
From Perl: JH
Let's get the entire environment, though. env outputs every value in the way I just processed it:
$ perl -lwe 'my $o = `source set_stuff.sh && env | sort`; print $o'
...
DISPLAY=:0
EC2_PATH=/usr/local/ec2/ec2-api-tools
EDITOR=/usr/bin/vi
...
I have a few hundred variables set in the shell, and I don't want to expose most of them. The child shell gets them all through Perl's %ENV, so I can temporarily clear out %ENV:
$ perl -lwe 'local %ENV=(); my $o = `source set_stuff.sh && env | sort`; print $o'
HERE_I_AM=JH
PWD=/Users/brian/Desktop/test
SHLVL=1
_=/usr/bin/env
Put that together with the post processing code and you have a way to pass that information back up to the parent.
This is, by the way, similar to how you'd pass variables back up to a parent shell process. Since that output is already something the shell understands, you use the shell's eval instead of parsing it.
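A sketch of that shell-side equivalent, reusing set_stuff.sh from above:
# have a child shell print shell-syntax assignments, discard any incidental
# output from the script itself, then eval the assignments in the parent shell
eval "$(/bin/sh -c '. ./set_stuff.sh >/dev/null; echo HERE_I_AM=\"$HERE_I_AM\"')"
echo "$HERE_I_AM"   # JH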
You can't. source is a shell builtin that 'imports' the contents of that script into your current shell environment. It's not an executable.
You can replicate some of its functionality by rolling your own - run or parse whatever you're 'sourcing' and capture the result:
print `. file_to_source; echo \$somevar`;
or similar.

Using command substitution or similar, but still having script exit (using set -e)

Bash doesn't seem to pass the "exit on error" environment flag into command substitution shells.
I am using a large number of command substitutions (to get around bash's lack of return values), but I'd still like the whole script to go down if something in the subshell fails.
So, for example:
set -e
function do_internet {
    curl not.valid.address
}
answer=$(do_internet)
I'd like the script to stop there and then, and not continue.
(I had hoped that setting -e would save me from having to put '|| die' on everything.)
Am I doing something wrong, and/or is there any way around this?
Here's a little example:
#!/bin/bash
set -e
echo "You should only see this line, and not any other line."
function foo {
    false
    echo "The above line is false. Figure that one out, Plato."
}
bar=$(foo)
echo $bar
It prints both lines.
(Using GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu))
There is a difference in handling of -e between subshells created with (...), as in Why doesn't bash flag -e exit when a subshell fails?, and subshells created with command substitution $(...), as in the OP.
According to the section COMMAND EXECUTION ENVIRONMENT in the bash manual (and slightly confusingly):
Subshells spawned to execute command substitutions inherit the value of the -e option from the parent shell. When not in posix mode, bash clears the -e option in such subshells.
Regardless of the posix setting, the -e only applies to the subshell created for the purposes of command substitution. So:
$ set -e
# The subshell has -e cleared
$ echo $(false; echo foo)
foo
$ set -o posix
# Now the subshell has -e, so it terminates at `false`
$ echo $(false; echo foo)
$
Nonetheless, -e does apply to the execution of a command which only sets a variable. So
set -e
a=$(false)
will terminate the shell.
However, -e does not apply to individual commands in a function. In the case of
fail() {
false
echo "failed"
}
The return value of fail is 0 (i.e. success) because the echo (which was the last command executed) succeeded. Consequently
a=$(fail) && echo ok
will set a to failed and then print ok.
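If you want the failure to propagate, the function's last command (or an explicit return) must carry it, for example (a sketch):
set -e
fail() {
    false
    return 1   # make the function's exit status reflect the failure
}
a=$(fail)      # the assignment now fails, and set -e terminates the script
echo "never reached"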

"< <(command-here)" shell idiom resulting in "redirection unexpected"

This command works fine:
$ bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
However, I don't understand how exactly stable is passed as a parameter to the shell script that is downloaded by curl. That's the reason why I fail to achieve the same functionality from within my own shell script - it gives me ./foo.sh: 2: Syntax error: redirection unexpected:
$ cat foo.sh
#!/bin/sh
bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
So, the questions are: how exactly this stable param gets to the script, why are there two redirects in this command, and how do I change this command to make it work inside my script?
Regarding the "redirection unexpected" error:
That's not related to stable, it's related to your script using /bin/sh, not bash. The <() syntax is unavailable in POSIX shells, which includes bash when invoked as /bin/sh (in which case it turns off nonstandard functionality for compatibility reasons).
Make your shebang line #!/bin/bash.
Understanding the < <() idiom:
To be clear about what's going on -- <() is replaced with a filename which refers to the output of the command which it runs; on Linux, this is typically a /dev/fd/## type filename. Running < <(command), then, is taking that file and directing it to your stdin... which is pretty close to the behavior of a pipe.
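You can see the substituted filename directly (the exact fd number may vary):
$ echo <(true)
/dev/fd/63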
To understand why this idiom is useful, compare this:
read foo < <(echo "bar")
echo "$foo"
to this:
echo "bar" | read foo
echo "$foo"
The former works, because the read is executed by the same shell that later echoes the result. The latter does not, because the read is run in a subshell that was created just to set up the pipeline and then destroyed, so the variable is no longer present for the subsequent echo.
Understanding bash -s stable:
bash -s indicates that the script to run will come in on stdin. All arguments, then, are fed to the script in the $@ array ($1, $2, etc), so stable becomes $1 when the script fed in on stdin is run.
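A self-contained demonstration:
$ echo 'echo "first argument: $1"' | bash -s stable
first argument: stable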

Can you wrapper each command in GNU's make?

I want to inject a transparent wrapper command around each shell command in a makefile, something like the time shell command. (However, it is not the time command; it is a completely different command.)
Is there a way to specify some sort of wrapper or decorator for each shell command that gmake will issue?
Kind of. You can tell make to use a different shell.
SHELL = myshell
where myshell is a wrapper like
#!/bin/sh
time /bin/sh "$@"
However, the usual way to do that is to prefix a variable to all command calls. While I can't see any show-stopper for the SHELL approach, the prefix approach has the advantage of being more flexible (you can specify different prefixes for different commands, and override prefix values on the command line), and it could be visibly faster.
# Set Q=@ to not display command names
TIME = time
foo:
	$(Q)$(TIME) foo_compiler
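Overriding on the command line then looks like this:
$ make foo                 # echoes and times the command
$ make foo Q=@             # hides the command name
$ make foo TIME='strace'   # swaps in a different wrapper (illustrative value)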
And here's a complete, working example of a shell wrapper:
#!/bin/bash
RESULTZ=/home/rbroger1/repos/knl/results
if [ "$1" == "-c" ] ; then
shift
fi
strace -f -o `mktemp $RESULTZ/result_XXXXXXX` -e trace=open,stat64,execve,exit_group,chdir /bin/sh -c "$#" | awk '{if (match("Process PID=\d+ runs in (64|32) bit",$0) == 0) {print $0}}'
# EOF
I don't think there is a way to do what you want within GNUMake itself.
I have done things like modifying the PATH env variable in the Makefile, so that a directory of symlinks to my script, named after all the bins I wanted wrapped, was found before the actual bins. The script would then look at how it was called and exec the actual bin with the wrapping command,
i.e. exec time "$0" "$@"
These days I usually just update the targets in the Makefile itself. Keeping all your modifications to one file is usually better IMO than managing a directory of links.
Update
I defer to Gilles' answer. It's a better answer than mine.
The program that GNU make(1) uses to run commands is specified by the SHELL make variable. It will run each command as
$SHELL -c <command>
You cannot get make to leave out the -c, since that is required for most shells. -c is passed as the first argument ($1), and <command> is passed as a single string in the second argument ($2).
You can write your own shell wrapper that prepends the command that you want, taking into account the -c:
#!/bin/sh
eval time "$2"
That will cause time to be run in front of each command. You need eval since $2 will often not be a single command and can contain all sorts of shell metacharacters that need to be expanded or processed.
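Putting it together, a makefile using such a wrapper might look like this (a sketch; ./timewrap is a hypothetical path to the two-line script above):
SHELL = ./timewrap
foo:
	touch foo    # make runs: ./timewrap -c 'touch foo', so the recipe gets timed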
