My Python program expects a few environment variables (API keys, etc.) to be populated first.
I have a bash script that fetches the temporary API keys and sets them using export, and I can run the script successfully.
If I use PyCharm's run option, the environment variables are understandably missing.
Other than setting the environment variables manually in the PyCharm run configuration, can I ask it to run the bash script and then execute my Python file in the same bash context?
I have a script populate_env available in my PATH.
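One common pattern is a small wrapper script that sources the env script and then execs Python in the same shell, so PyCharm only needs to launch the wrapper. A minimal sketch, assuming populate_env exports its variables (the wrapper name run_with_env.sh is illustrative):

#!/bin/bash
# run_with_env.sh -- load the keys, then hand off to Python in the same environment
source populate_env      # populate_env must use `export` so the variables reach this shell
exec python3 "$@"        # exec replaces this shell; Python inherits the exported variables

After chmod +x run_with_env.sh, point PyCharm at the wrapper (for example via a Shell Script run configuration, if your version has one) with your Python file as the argument.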
I have tried everything to execute a manually installed command from a bash script; the command runs fine in my user shell (yigit@instance-1). I'm starting to think that GCP instances can't access the proper environment variables.
The command I installed is Task Spooler, executed as ts in the shell. I set up the tar package with its Makefile (via make install), which placed the binary at the following paths:
ts is /usr/local/bin/ts
ts is /usr/bin/ts
ts is /bin/ts
So my shell script is as follows:
#!/bin/bash
echo $PATH
ts python3 somepyscript.py
Looking at the output of the PATH variable, there doesn't seem to be any mismatch in the environment that would keep the command from being found. However, the output I get is:
/home/yigit/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
./tm_pipeline.sh: line 10: ts: command not found
As I said, the command works fine in the user shell I connect to over SSH; I can't understand why this happens inside the VM instance. I know GCP offers start-up scripts for VMs and containerized deployment via Cloud Build, etc. Could this be interference from those, or is there something else I can do? Thanks for any help in advance.
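One way to narrow this down is to print the execution context from inside the script and, as a fallback, call the binary by absolute path so the PATH lookup no longer matters (a diagnostic sketch, not a fix for the root cause):

#!/bin/bash
# Diagnostic sketch: confirm who and what is actually running this script
echo "PATH=$PATH"
echo "user=$(id -un)"
command -v ts || echo "ts is not on PATH in this context"
/usr/local/bin/ts python3 somepyscript.py   # bypass PATH lookup entirely

If the absolute path works while the bare ts does not, the script is running in a context (start-up script, cron, a different user) whose PATH differs from the one your interactive shell reports.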
I have a bash script where I am trying to use values of some environment variables. Those variables are defined - I see the keys and values if I run printenv.
Also, these variables are defined and exported like
export FOO="bar"
in both ~/.bash_profile and ~/.bashrc.
I am trying to execute the script via ./script-name, but it does not see those environment variables. If I run sudo -E ./script-name, that somehow gives the script the variables it needs.
I'm confused as to why these variables aren't available to the script even though they are exported in the files above.
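For reference, a quick way to check whether a variable is truly exported (and therefore visible to child processes such as the script) rather than merely set in the current shell:

echo "$FOO"            # shows the shell variable, exported or not
printenv FOO           # prints it only if it is actually exported
bash -c 'echo "$FOO"'  # what a child process like ./script-name really sees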
The only thing I can think of is that, for some reason, the shell process you are calling to run the script does not have full read access to your current environment. Check both of the usual interpreters:
ls -al /usr/bin/bash
ls -al /bin/sh
Assuming neither of them is a symlink, make sure that your current user has read and execute privileges. A safer option (in security terms) would be to install bash under ~/opt and point your shebang at it; note that shebang lines are not tilde-expanded, so spell out the absolute path, e.g. #!/home/youruser/opt/bin/bash.
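A short check along those lines (illustrative; file and test are standard utilities):

file /bin/sh /usr/bin/bash    # reports "symbolic link to ..." if either is a symlink
[ -r /usr/bin/bash ] && [ -x /usr/bin/bash ] && echo "bash is readable and executable"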
I have a scheduled job on Windows that compiles the nightly build of my program. This has been done using Cygwin64, and the scheduled job looks like
C:\Cygwin64\bin\bash.exe -l -c "/cygdrive/d/path/to/buildscript.sh"
Recently, I wanted to replicate this using MSYS2, and my buildscript.sh works properly in the MSYS2 shell (msys2.ini has MSYS2_PATH_TYPE=inherit enabled). However, when I change the above command to
C:\msys64\usr\bin\bash.exe -l -c "/d/path/to/buildscript.sh"
my script fails. It turns out that the system environment variables are not copied to the bash session.
I would like to know if there is a command-line option that lets a bash session inherit all system environment variables. I tried set MSYS2_PATH_TYPE=inherit in a cmd session before calling the above command, but it does not work.
Create MSYS2_PATH_TYPE in your system environment variables and set its value to inherit.
Then your newly created bash.exe session will inherit your system PATH.
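A sketch of the equivalent one-time setup from an elevated cmd prompt (setx with /M writes a machine-wide variable, so the scheduled task sees it without any per-session set):

:: one-time, elevated: make the variable machine-wide
setx MSYS2_PATH_TYPE inherit /M
:: the existing task command then works unchanged:
C:\msys64\usr\bin\bash.exe -l -c "/d/path/to/buildscript.sh"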
I am attempting to run a command of the following form:
ssh -t -o StrictHostKeyChecking=no lxplus0035 "source ~/analysis/analysis/util/setup_analysis.sh nonInteractive; time run_analysis configuration1.txt configuration2.txt"
The sourced script setup_analysis.sh sets up a complex environment including environment variables. It itself sources a script to do some of this. The executable run_analysis should be known to the environment after sourcing setup_analysis.sh.
If I do this in the local shell, it works fine and the executable launches without a problem. If I do it via the SSH command shown above, the sourcing appears to work (I run which run_analysis inside the sourced script, and it prints the correct executable and its path), but after the sourcing the executable cannot be found and run.
Why is the remote shell behaving differently and how can I get it to retain the environment set up by the sourced script?
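One workaround worth trying, on the assumption (not confirmed by the question) that run_analysis is provided as an alias or shell function, which non-interactive remote shells do not expand, is to force an interactive login bash on the remote end:

ssh -t -o StrictHostKeyChecking=no lxplus0035 'bash -lic "source ~/analysis/analysis/util/setup_analysis.sh nonInteractive && time run_analysis configuration1.txt configuration2.txt"'

If the alias/function theory is right, -i makes the remote bash expand it; if run_analysis is a real binary, compare echo $PATH in both shells instead.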
I need to set some environment variables in Ubuntu. I do the following and it works:
export PATH="/home/vagrant/ns-allinone-2.35/bin:/home/vagrant/ns-allinone-2.35/tcl8.5.10/unix:/home/vagrant/ns-allinone-2.35/tk8.5.10/unix:$PATH"
export LD_LIBRARY_PATH="/home/vagrant/ns-allinone-2.35/otcl-1.14:/home/vagrant/ns-allinone-2.35/lib"
export TCL_LIBRARY="/home/vagrant/ns-allinone-2.35/tcl8.5.10/library"
But when I move the same commands into a script envexport.sh and execute it, the environment variables do not get set.
Where am I going wrong? How do I accomplish this?
Thanks.
If you just run the script, the variables are set only in the child shell that runs it, and they are lost when the script finishes.
Use . envexport.sh. That way the commands get executed in the current shell (environment).
When you run a command from the shell, the shell creates a subprocess (child process). Any environment variables defined or changed in that subprocess are lost to the parent process when it exits.
However, if you source a script, you force it to run in the current process, which means the environment variables set by the script are not lost.
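A quick demonstration of the difference (assuming envexport.sh contains the three exports above and is executable):

./envexport.sh          # runs in a child shell; the exports die with it
echo "$TCL_LIBRARY"     # prints an empty line
. envexport.sh          # runs in the current shell (same as: source envexport.sh)
echo "$TCL_LIBRARY"     # now prints /home/vagrant/ns-allinone-2.35/tcl8.5.10/library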
One thing that may help: if you want those variables set for all of your sessions, you can place the same commands in your .bashrc file by running the following command and pasting the lines into the file.
vim ~/.bashrc
and then run
source ~/.bashrc
in any terminals you currently have running. Any new terminals you start will automatically have your directories added to their PATH.
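Alternatively, a non-interactive way to append the same lines (a sketch; the quoted heredoc keeps $PATH literal so it expands at login, not now):

cat >> ~/.bashrc <<'EOF'
export PATH="/home/vagrant/ns-allinone-2.35/bin:/home/vagrant/ns-allinone-2.35/tcl8.5.10/unix:/home/vagrant/ns-allinone-2.35/tk8.5.10/unix:$PATH"
export LD_LIBRARY_PATH="/home/vagrant/ns-allinone-2.35/otcl-1.14:/home/vagrant/ns-allinone-2.35/lib"
export TCL_LIBRARY="/home/vagrant/ns-allinone-2.35/tcl8.5.10/library"
EOF
source ~/.bashrc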