Share a variable among bash scripts - bash

I have 2 bash scripts:
set.sh
export MYVAR="MYVALUE"
get.sh
echo "MYVAR: $MYVAR"
usage
> ./set.sh && ./get.sh
I'm trying to share the variable among the scripts. The real code is more complicated and involves more than two scripts, so I cannot simply call the second script from the first like this:
export MYVAR="MYVALUE"
./get.sh

I can think of several tries:
Via a third script: use a separate environment script and source it in both set.sh and get.sh. That may not be workable here because set.sh contains a lot of logic.
Via code: source set.sh in get.sh, and add logic to set.sh so it can detect whether it is being sourced or executed.
Via a pipe: use a pipe for interprocess communication between set.sh and get.sh: send MYVAR from set.sh through the pipe, and have get.sh start executing only once the value has arrived.
Via file: Use a file to transmit the value of MYVAR by writing it from set.sh to the file and reading from the file in get.sh.
Via output: Write MYVAR to the output in set.sh and consume (or parse) the output of set.sh in get.sh.

Via arguments: pass the value as an argument to each script:
./set.sh "$MYVAR" && ./get.sh "$MYVAR"
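The "via code" option from the list above can be sketched like this; the BASH_SOURCE comparison is a bash-specific idiom, and the file layout is an assumption:

```shell
#!/bin/sh
# Write a sketch of set.sh that detects whether it is sourced or executed.
cat > set.sh <<'EOS'
MYVAR="MYVALUE"
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    # Executed as a child process: an export here would die with it.
    echo "executed directly"
else
    export MYVAR    # sourced: the variable lands in the calling shell
fi
EOS

bash set.sh                                       # prints: executed directly
bash -c '. ./set.sh && echo "MYVAR: $MYVAR"'      # prints: MYVAR: MYVALUE
rm -f set.sh
```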


Execute bash within some text file [duplicate]

This question already has answers here:
Forcing bash to expand variables in a string loaded from a file
(13 answers)
Closed 1 year ago.
I would like to create a templating system that executes bash within a text file.
For example, let's consider we created a simple template.yaml file:
my_path: $(echo ${PATH})
my_ip: $(curl -s http://whatismyip.akamai.com/)
some_const: "foo bar"
some_val: $(echo -n $MY_VAR | base64)
The desire is to execute each one, such that the result may look like:
my_path: /Users/roman/foo
my_ip: 1.2.3.4
some_const: "foo bar"
some_val: ABC
How would I go about doing such a substitution?
Reasons for wanting this:
There are many values, and doing something like a sed or envsubst isn't practical
It would be common to apply a series of piped transformations
The configuration file would be populated from numerous sources, all of them essentially bash commands
I do need to create a yaml file of a specific format (ultimately used by another tool)
I could create aliases etc to increase readability
By having it execute in its own shell, none of the semi-sensitive values are stored in history or in files.
I'm not married to this approach, and would happily try a recommendation that fulfils these requirements.
This might work; you can try it.
Create a script, script.sh, which takes a .yaml file as its single argument and expands the variables inside it:
script.sh:
#!/bin/bash
echo 'cat <<EOF' > temp.sh    # wrap the template in a here-document
cat "$1" >> temp.sh
echo 'EOF' >> temp.sh
bash temp.sh                  # the shell expands $(...) and ${...} while reading the here-document
rm temp.sh
Then invoke it from the command line: ./script.sh template.yaml
Thank you to #joshmeranda for pointing me in the right direction; this solved my problem:
echo -e "$(eval "echo -e \"`<template.yaml`\"")"
While eval can be dangerous, in my case its usage is controlled.
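As a sanity check, the here-document trick from the first answer can be exercised on a throwaway template whose expansions are deterministic (file names here are placeholders):

```shell
#!/bin/sh
# Build a tiny template with only predictable expansions.
cat > template.yaml <<'TPL'
some_const: "foo bar"
some_val: $(printf hello | base64)
TPL

# Wrap the template in a here-document; the shell expands $(...) as it reads it.
{ echo 'cat <<EOF'; cat template.yaml; echo 'EOF'; } > temp.sh
sh temp.sh            # prints the expanded template
rm -f temp.sh template.yaml
```

Expected output:

some_const: "foo bar"
some_val: aGVsbG8=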

Assigning a variable in a shell script for use outside of the script

I have a shell script that sets a variable. I can access it inside the script, but I can't outside of it. Is it possible to make the variable global?
Accessing the variable before it's created returns nothing, as expected:
$ echo $mac
$
Creating the script to create the variable:
#!/bin/bash
mac=$(cat \/sys\/class\/net\/eth0\/address)
echo $mac
exit 0
Running the script gives the current mac address, as expected:
$ ./mac.sh
12:34:56:ab:cd:ef
$
Accessing the variable after it's created returns nothing, NOT expected:
$ echo $mac
$
Is there a way I can access this variable at the command line and in other scripts?
A child process can't affect the parent process like that.
You have to use the . (dot) command — or, if you like C shell notations, the source command — to read the script (hence . script or source script):
. ./mac.sh
source ./mac.sh
Alternatively, generate the assignment on standard output and use eval $(script) to set the variable:
$ cat mac.sh
#!/bin/bash
echo mac=$(cat /sys/class/net/eth0/address)
$ bash mac.sh
mac=12:34:56:ab:cd:ef
$ eval $(bash mac.sh)
$ echo $mac
12:34:56:ab:cd:ef
$
Note that if you use no slashes in specifying the script for the dot or source command, then the shell searches for the script in the directories listed in $PATH. The script does not have to be executable; readable is sufficient (and being read-only is beneficial in that you can't run the script accidentally).
It's not clear what all the backslashes in the pathname were supposed to do other than confuse; they're unnecessary.
See ssh-agent for precedent in generating a script like that.
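That ssh-agent pattern reduces to a small sketch (emit here is a hypothetical stand-in for a script that prints assignments on standard output):

```shell
#!/bin/sh
# Instead of printing raw values, print shell assignments, ssh-agent style.
emit() {
    printf 'mac=%s; export mac;\n' "12:34:56:ab:cd:ef"
}

eval "$(emit)"       # evaluated in the *current* shell, so the variable persists
echo "$mac"          # prints: 12:34:56:ab:cd:ef
```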

Julia from bash script

Can I create one instance of Julia and use it to run multiple Julia scripts from bash script?
#!/bin/bash
J=getjuliainstance()
J.run(temp.jl)
J.run(j1.jl)
J.run(j2.jl)
J.run(j3.jl)
J.exit()
I could run all of them from inside a master Julia script but that is not the intent.
This is to circumvent Julia's load time for the first script so that runtime of subsequent scripts can be timed consistently.
Any way to spawn a single process and reuse it to launch scripts? From shell script only please!
One of the solutions (allows for a tail -f):
julia <pipe 2>&1 | tee submission.log > /dev/null &
You can try named pipes:
$ mkfifo pipe # create named pipe
$ sleep 10000 > pipe & # keep pipe alive
[1] 11521
$ julia -i <pipe & # make Julia read from pipe
[2] 11546
$ echo "1+2" >pipe
$ 3
$ echo "rand(10)" >pipe
$ 10-element Array{Float64,1}:
0.938396
0.690747
0.615235
0.298277
0.780966
0.775423
0.197329
0.136582
0.302169
0.607562
$
You can send any commands to Julia using echo.
If you use stdout for Julia output, then when Julia writes something there you have to press Enter to return to the prompt.
Stop Julia by writing echo "exit()" >pipe. If you want to execute a file this way, use the include function.
EDIT: it seems that you do not even have to use -i if you run Julia this way.
EDIT2: I did not notice that you actually want to use only one bash script (not interactive mode). In that case it should be even simpler to use named pipes.
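The keep-alive pattern can be checked without Julia installed; a minimal sketch with sh standing in for julia -i <pipe (all file names are placeholders):

```shell
#!/bin/sh
# Named-pipe keep-alive: one long-lived interpreter serves many commands.
dir=$(mktemp -d)
mkfifo "$dir/pipe"

sleep 1000 > "$dir/pipe" &        # hold the write end open so the reader never sees EOF
keeper=$!

sh < "$dir/pipe" > "$dir/out" &   # long-lived interpreter fed by the pipe
interp=$!

echo 'echo $((1+2))' > "$dir/pipe"   # send a command, like echo "1+2" >pipe for Julia
echo 'exit' > "$dir/pipe"            # like echo "exit()" >pipe

wait "$interp"
kill "$keeper" 2>/dev/null
cat "$dir/out"                       # prints: 3
rm -rf "$dir"
```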

Is it possible to perform shell injection through a read and/or to break out of quotes?

Consider this example of (attempted) shell injection:
test1.sh:
#!/bin/sh
read FOO
echo ${FOO}
z.dat:
foo && sleep 1 && echo 'exploited'
Then run:
cat z.dat | ./test1.sh
On my machine (Ubuntu w/bash) the payload is always (correctly) treated as a single string and never executes the malicious sleep and echo commands.
Question 1: Is it possible to modify z.dat so that test1.sh is vulnerable to injection? In particular, are there specific shells that might be vulnerable?
Question 2: If so, is changing the test script to quote the variable (shown below) an absolute defense?
test2.sh:
#!/bin/sh
read FOO
echo "${FOO}"
Thanks!
Not according to: https://developer.apple.com/library/mac/documentation/OpenSource/Conceptual/ShellScripting/ShellScriptSecurity/ShellScriptSecurity.html
Search for 'Backwards Compatibility Example'
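To see the claim concretely, here is test1.sh inlined as a compound command, fed the z.dat payload (a minimal sketch):

```shell
#!/bin/sh
# Data read by `read` is expanded as a value, never re-parsed as shell syntax,
# so the && operators stay literal and nothing extra executes.
printf '%s\n' "foo && sleep 1 && echo 'exploited'" | {
    read FOO
    echo ${FOO}    # unquoted, as in test1.sh; still just prints the string
}
```

The output is the literal line foo && sleep 1 && echo 'exploited'; the sleep and the second echo never run.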

same variable in different scripts in bash

What is a good way to use the same variable in two different bash scripts?
Simple example:
./set.sh 333
./get.sh
> 333
./set.sh 111
./get.sh
> 111
And how do I initialize that variable the first time?
UPD:
$ cat get.sh
echo "$var"
$ cat set.sh
export var="$1"
$ chmod +x set.sh get.sh
$ source set.sh
$ ./set.sh u
$ ./get.sh
$ source ./set.sh 2
$ ./get.sh
2
You can have your scripts as:
cat set.sh
export var="$1"
cat get.sh
echo "$var"
chmod +x set.sh get.sh
Then call them:
. ./set.sh 333
./get.sh
333
Please note that . ./set.sh (or source ./set.sh) is called sourcing the script; it ensures that set.sh is executed without creating a sub-shell, so variables set in that script are accessible in the current shell and to the other scripts it runs.
What you need to understand is the lifetime of a shell variable (or an environment variable as you are using).
When you run a sub-shell, you are running a child process of the shell, and any shell variables that you set exist for the lifetime of the script. Any environment variables (shell variables are "promoted" to environment variable by the use of export) are copied into the environment of the child process - so changes to environment variables in a child process have NO effect on the value in the parent process.
So what you need to use is source which executes the contents of the script in the current shell (no sub-shell is spawned). Always source set.sh and you should be OK
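The parent/child behaviour described above can be seen directly (a minimal sketch):

```shell
#!/bin/sh
# A child process gets a *copy* of the environment; its changes never
# propagate back up to the parent.
export VAR=parent
sh -c 'VAR=child; echo "in child: $VAR"'   # prints: in child: child
echo "in parent: $VAR"                     # prints: in parent: parent
```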
You have to store that number in a file.
A called shell script is not able to change the variables of the calling shell.
Another way is to source the shell script instead of running it as a separate process.
But maybe you should explain why you think that you need this feature. Perhaps a totally different solution would be even better.
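A file-backed set/get pair could look like this (the state-file path is an assumption):

```shell
#!/bin/sh
# Shared state lives in a file instead of the per-process environment.
STATE="${TMPDIR:-/tmp}/shared_var.$$"   # assumed location

# What set.sh would do, e.g. ./set.sh 333
printf '%s\n' "333" > "$STATE"

# What get.sh would do, in a completely separate process
cat "$STATE"                            # prints: 333

rm -f "$STATE"
```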
