Setting Environment Variables Through a Bash Script in Jupyter - bash

I am attempting to execute a Bash script that sets needed environment variables from within a Jupyter notebook. I understand that the magic command %env can accomplish this, but the Bash script is needed in this instance. Neither !source nor %system makes the environment variables persist within the Jupyter notebook. Can this be done?
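(An aside on why !source and %system cannot persist anything: each ! or system call runs in its own short-lived subshell, so an export never outlives the line that made it. A quick sketch, using a throwaway variable name:)
Cell
! export only_here="set in one subshell"; echo "first line sees: $only_here"
! echo "next line sees: '$only_here'"
Result
first line sees: set in one subshell
next line sees: ''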

You could use Python to update os.environ:
Cell
! echo "export test1=\"This is the test 1\"" > test.sh
! echo "export test2=\"This is the test 2\"" >> test.sh
! cat test.sh
Result
export test1="This is the test 1"
export test2="This is the test 2"
Cell (taken from set environment variable in python script)
import os

# Read each export line from test.sh and copy the name=value pairs into this
# kernel's environment (note: the values keep their surrounding quotes).
with open('test.sh') as f:
    os.environ.update(
        line.replace('export ', '', 1).strip().split('=', 1) for line in f
        if 'export' in line
    )
! echo $test1
! echo $test2
Result
"This is the test 1"
"This is the test 2"

To set a variable permanently (e.g. a key), you can set an environment variable for your Jupyter notebooks by creating or editing a startup file in the IPython startup directory.
cd ~/.ipython/profile_default/startup/
vim my_startup_file.py
The file will be run on Jupyter startup (see the README in the same directory). Here is what the startup .py file should contain:
import os
os.environ['AWS_ACCESS_KEY_ID']='insert_your_key_here'
os.environ['AWS_SECRET_ACCESS_KEY']='another_key'
Now inside a Jupyter notebook you can read these environment variables, e.g.:
# Inside a Jupyter Notebook cell
import os
import boto3

session = boto3.session.Session(
    aws_access_key_id=os.getenv('AWS_ACCESS_KEY_ID'),
    aws_secret_access_key=os.getenv('AWS_SECRET_ACCESS_KEY'),
    region_name='us-east-1'
)
You will need to restart your kernel for the changes to take effect.

Related

Problem modifying PATH variable in fish config

I have the following configuration in ~/.config/fish/conf.d/python.fish:
# Initialise pyenv if found
echo "Running python config"
if status --is-interactive && test -d "$HOME/.pyenv"
    echo "Inside pyenv if"
    set -pxg PATH $HOME/.pyenv/bin $HOME/.pyenv/shims
    source ("$HOME/.pyenv/bin/pyenv" init - | psub)
    echo "End pyenv if"
end
# Poetry settings
if status --is-interactive && test -d "$HOME/.pyenv"
    set -xg POETRY_VIRTUALENVS_IN_PROJECT 1
    set -xg POETRY_VIRTUALENVS_CREATE 1
    set -pxg PATH $HOME/.poetry/bin
end
echo "End python config"
Every single echo command in the configuration is executed when I create a new shell, but the PATH variable is not modified. However, the POETRY_ variables show up as expected.
But things work as expected if I source the file in an existing shell with
source ~/.config/fish/conf.d/python.fish
What could possibly be wrong here?
Update: The problem only occurs inside tmux, not when I start terminals like Alacritty or Kitty directly. But all the echo commands are still run inside tmux.
This was a Debian bug - they added configuration that reset $PATH, and it happened after the user's configuration snippets but before config.fish.
This appears to be fixed in Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1000199. Now they merely add the default directories instead of replacing $PATH entirely.
Because config.fish is read after all the conf.d files, you can always use config.fish to override anything done in the snippets, e.g. if your vendor breaks things.

How can I pass command line arguments when activating a conda environment in a shell script

I am on Linux, Python 3.6.
I have a shell script that looks like this:
#!/bin/bash
instance=$1
export instance
echo "Instance is" $instance
. /home/xyz/setenvvars.sh
source activate myenv
echo arg1 $1
echo env $instance
python myprg.py $1
python myprg.py $instance
It seems that after I activate the conda environment, the command line argument this shell script received is no longer available. How can I pass the command line argument this script originally received to the newly activated environment? The two echos after the activate show blanks.
TIA!

Variable is not getting exported [duplicate]

I am running the following simple code in a shell script, but it seems like it can't export the variable:
#!/bin/bash
echo -n "Enter AWS_ACCESS_KEY_ID: "
read aws_access_key
export AWS_ACCESS_KEY_ID=$aws_access_key
After that I take the input from the user, but when I run echo $AWS_ACCESS_KEY_ID I get a blank value.
Run your script in the current shell by using:
source your-script # this runs your-script in the existing shell
...or, if using a POSIX shell...
. your-script # likewise; that space is intentional!
not
./your-script # this starts a new shell just for `your-script`; its variables
# are lost when it exits!
...if you want variables it sets to be available to the shell that calls it.
To be clear, export puts a variable in the current process's environment -- but environment variables are propagated down to child processes, not up to parent processes.
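A quick way to see that one-way flow from an interactive shell (just a sketch; the variable names FROM_PARENT and FROM_CHILD are arbitrary):
$ export FROM_PARENT=hi
$ bash -c 'echo "$FROM_PARENT"'       # child shells inherit the parent's environment
hi
$ bash -c 'export FROM_CHILD=hello'   # ...but an export made inside a child dies with it
$ echo "${FROM_CHILD:-unset}"
unset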
Now, if your goal is to define an interactive command that's easy to call, you might want to consider an entirely different approach: putting a function in your .bashrc:
awsSetup() {
  echo -n "Enter AWS_ACCESS_KEY_ID: "
  read && [[ $REPLY ]] && export AWS_ACCESS_KEY_ID=$REPLY
}
...after which the user with this in their .bashrc can run awsSetup, which will run in the current shell.

Simple Bash Script Error and Advice - Saving Environment Variables in Linux

I am working on a project that is hosted on Heroku. The app is hard-coded to use Amazon S3 and looks for the keys in environment variables. This is what I wrote after looking at some examples, and I am not sure why it's not working.
echo $1
if [ $1 != "unset" ]; then
    echo "set"
    export AMAZON_ACCESS_KEY_ID=XXXXXXXXXXXX
    export AMAZON_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    export S3_BUCKET_NAME=XXXXXXXXX
else
    echo "unset"
    export AMAZON_ACCESS_KEY_ID=''
    export AMAZON_SECRET_ACCESS_KEY=''
    export S3_BUCKET_NAME=''
fi
When running the script it goes to the set section. But when inspecting with echo $AMAZON_ACCESS_KEY_ID I get ''.
I am not sure what is causing the issue. I would be interested in...
A fix for this...
A way to extract and add Heroku config variables to the env in an easier way.
You need to source the script, not run it as a child. If you run the script directly, its environment disappears when it ends. Sourcing the script causes it to be executed in the current environment. See help source for more information.
Example:
$ VAR=old_value
$ cat script.sh
#!/bin/bash
export VAR=new_value
$ ./script.sh
$ echo $VAR
old_value
$ source script.sh
$ echo $VAR
new_value
Scripts executed with source don't need to be executable nor do they need the "shebang" line (#!/bin/bash) because they are not run as separate processes. In fact, it is probably a good idea to not make them executable in order to avoid them being run as commands, since that won't work as expected.
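A quick way to confirm that (vars.sh and SOURCED_VAR here are just throwaway examples, not from the answer):
$ printf 'export SOURCED_VAR=from_vars\n' > vars.sh   # no shebang line
$ chmod -x vars.sh                                    # not executable either
$ . ./vars.sh && echo "$SOURCED_VAR"
from_vars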

How to write a bash script to set global environment variable?

Recently I wrote a script which sets an environment variable, take a look:
#!/bin/bash
echo "Pass a path:"
read path
echo $path
defaultPath=/home/$(whoami)/Desktop
if [ -n "$path" ]; then
    export my_var=$path
else
    echo "Path is empty! Exporting default path ..."
    export my_var=$defaultPath
fi
echo "Exported path: $my_var"
It works just great, but the problem is that my_var is available only locally, i.e. in the console window where I ran the script.
How to write a script which allow me to export global environment variable which can be seen everywhere?
Just run your shell script preceded by "." (dot space).
This causes the script to run the instructions in the original shell. Thus the variables still exist after the script finishes.
Ex:
cat setmyvar.sh
export myvar=exists
. ./setmyvar.sh
echo $myvar
exists
Each and every shell has its own environment. There's no Universal environment that will magically appear in all console windows. An environment variable created in one shell cannot be accessed in another shell.
It's even more restrictive. If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell.
If all of your shells need access to the same set of variables, you can create a startup file that will set them for you. This is done in BASH via the $HOME/.bash_profile file (or through $HOME/.profile if $HOME/.bash_profile doesn't exist) or through $HOME/.bashrc. Other shells have their own set of startup files. One is used for logins, and one is used for shells spawned without logins (and, as with bash, a third for non-interactive shells). See the manpage to learn exactly which startup scripts are used and in what order they're executed.
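For example, to make a variable available in every new interactive bash session, you could append an export line to ~/.bashrc (a minimal sketch; MY_SHARED_VAR is a placeholder, and the right startup file depends on your shell and login setup as described above):
echo 'export MY_SHARED_VAR="some value"' >> ~/.bashrc
# new shells pick it up automatically; an already-open shell needs: source ~/.bashrc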
You can try using shared memory, but I believe that only works while processes are running, so even if you figured out a way to set a piece of shared memory, it would go away as soon as that command is finished. (I've rarely used shared memory except for named pipes). Otherwise, there's really no way to set an environment variable in one shell and have another shell automatically pick it up. You can try using named pipes or writing that environment variable to a file for other shells to pick it up.
Imagine the problems that could happen if someone could change the environment of one shell without my knowledge.
Actually I found a way to achieve this (which in my case was to use a bash script to set a number of security credentials).
I just call bash from inside the script, and the spawned shell now has the exported values:
export API_USERNAME=abc
export API_PASSWORD=bbbb
bash
Now calling the file using ~/.app-x-setup.sh will give me an interactive shell with those environment values set up.
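For instance, a session might look like this (a sketch of the effect, not part of the original answer):
$ ~/.app-x-setup.sh      # drops you into the spawned bash
$ echo "$API_USERNAME"
abc
$ exit                   # leaving that shell discards the exported values again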
The following is extracted from the 2nd paragraph of David W.'s answer: "If one shell spawns a subshell, that subshell has access to the parent's environment variables, but if that subshell creates an environment variable, it's not accessible in the parent shell."
If you need to let the parent shell access your new environment variables, just issue the following command in the parent shell:
source <your_subshell_script>
or, using the shortcut:
. <your_subshell_script>
You have to add the variable to your .profile, located at /home/$USER/.profile
You can do that with this command:
echo 'export TEST="hi"' >> $HOME/.profile
Or by editing the file with emacs, for example.
If you want to set this variable for all users, you have to edit /etc/profile (as root).
There is no global environment, really, in UNIX.
Each process has an environment, originally inherited from the parent, but it is local to the process after the initial creation.
You can only modify your own, unless you go digging around in the process using a debugger.
Write it to a temporary file, let's say ~/.myglobalvar, and read it from anywhere:
echo "$myglobal" > ~/.myglobalvar
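And read it back in any other shell (a sketch completing the idea):
myglobal=$(< ~/.myglobalvar)
echo "$myglobal"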
Environment variables are always "local" to process execution; the export command allows setting environment variables for subprocesses. You can look at .bashrc to set environment variables at the start of a bash shell. What you are trying to do does not seem possible, as a process cannot modify (or even access) the environment variables of another process.
You can update the ~/.bashrc or ~/.bash_profile file which is used to initialize the environment.
Take a look at the loading behavior of your shell (explained in the manpage, usually referring to .XXXshrc or .profile). Some configuration files are loaded at login time of an interactive shell, some are loaded each time you run a shell. Placing your variable in the latter might result in the behavior you want, e.g. always having the variable set using that distinct shell (for example bash).
If you need to dynamically set and reference environment variables in shell scripts, there is a workaround. Judge for yourself whether it is worth doing, but here it is.
The strategy involves having a 'set' script which dynamically writes a 'load' script, which has code to set and export an environment variable. The 'load' script is then sourced periodically by other scripts which need to reference the variable. BTW, the same strategy could be done by writing and reading a file instead of a variable.
Here's a quick example...
Set_Load_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
echo "#!/bin/bash" > $PROCESSING_SIGNAL_SCRIPT
echo "export PROCESSING_SIGNAL=$1" >> $PROCESSING_SIGNAL_SCRIPT
chmod ug+rwx $PROCESSING_SIGNAL_SCRIPT
Load_PROCESSING_SIGNAL.sh (this gets dynamically created when the above is run)
#!/bin/bash
export PROCESSING_SIGNAL=1
You can test this with
Test_PROCESSING_SIGNAL.sh
#!/bin/bash
PROCESSING_SIGNAL_SCRIPT=./Load_PROCESSING_SIGNAL.sh
N=1
LIM=100
while [ $N -le $LIM ]
do
    # DO WHATEVER LOOP PROCESSING IS NEEDED
    echo "N = $N"
    sleep 5
    N=$(( $N + 1 ))
    # CHECK PROCESSING_SIGNAL
    source $PROCESSING_SIGNAL_SCRIPT
    if [[ $PROCESSING_SIGNAL -eq 0 ]]; then
        # Write log info indicating that the signal to stop processing was detected
        # Write out all relevant info
        # Send an alert email of this too
        # Then exit
        echo "Detected PROCESSING_SIGNAL for all stop. Exiting..."
        exit 1
    fi
done
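To flip the signal from another shell, rerun the 'set' script with the value you want (using the answer's own file names); the next pass of the loop sources the freshly written file and exits:
./Set_Load_PROCESSING_SIGNAL.sh 0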
A lazy script, kept in ~/.bin/SOURCED/, to save and load data as flat files for the system:
[ ! -d ~/.megadata ] && mkdir ~/.megadata
function save_data {
    [ -z "$1" -o -z "$2" ] && echo 'save_data [:id:] [:data:]' && return
    local overwrite=${3-false}
    [ "$overwrite" = 'true' ] && echo "$2" > ~/.megadata/$1 && return
    [ ! -f ~/.megadata/$1 ] && echo "$2" > ~/.megadata/$1 || echo ID TAKEN set third param to true to overwrite
}
save_data computer engine
cat ~/.megadata/computer
save_data computer engine
save_data computer megaengine true
function get_data {
    [ -z "$1" -o -f "$1" ] && echo 'get_data [:id:]' && return
    [ -f ~/.megadata/$1 ] && cat ~/.megadata/$1 || echo ID NOT FOUND
    :
}
get_data computer
get_data computer
Maybe a little off topic, but if you only need to set variables temporarily to execute some script and you ended up here looking for answers:
If you need to run a script with certain environment variables that you don't need to keep after execution you could do something like this:
#!/usr/bin/env sh
export XDEBUG_SESSION=$(hostname); echo "running with xdebug: $XDEBUG_SESSION"; "$@"
In my example I just use XDEBUG_SESSION with a hostname, but you can use multiple variables. Keep them separated with a semi-colon. Execution as follows (assuming you called the script debug.sh and placed it in the same directory as your php script):
$ ./debug.sh php yourscript.php
