I am hoping to run a simple shell script to ease the management of some conda environments. Activating conda environments via conda activate works fine in an interactive shell on a Linux OS, but is problematic within a shell script. Could someone point me in the right direction as to why this is happening?
Example to repeat the issue:
# default conda env
$ conda info | egrep "conda version|active environment"
active environment : base
conda version : 4.6.9
# activate new env to prove that it works
$ conda activate scratch
$ conda info | egrep "conda version|active environment"
active environment : scratch
conda version : 4.6.9
# revert back to my original conda env
$ conda activate base
$ cat shell_script.sh
#!/bin/bash
conda activate scratch
# run shell script - this will produce an error even though it succeeded above
$ ./shell_script.sh
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
If I use the source command to run the shell script, it works:
source shell_script.sh
The error message is rather helpful - it's telling you that conda is not properly set up from within the subshell that your script is running in. To be able to use conda within a script, you will need to (as the error message says) run conda init bash (or whatever your shell is) first. The behaviour of conda and how it's set up depends on your conda version, but the reason for the version 4.4+ behaviour is that conda is dependent on certain environment variables that are normally set up by the conda shell itself. Most importantly, this changelog entry explains why your conda activate and deactivate commands no longer behave as you expect in versions 4.4 and above.
For more discussion of this, see the official conda issue on GitHub.
Edit: Some more research tells me that the conda init function mentioned in the error message is actually a new v4.6.0 feature that allows a quick environment setup so that you can use conda activate instead of the old source activate. However, the reason why this works is that it adds/changes several environment variables of your current shell and also makes changes to your RC file (e.g.: .bashrc), and RC file changes are never picked up in the current shell - only in newly created shells. (Unless of course you source .bashrc again). In fact, conda init --help says as much:
IMPORTANT: After running conda init, most shells will need to be closed and restarted for changes to take effect
However, you've clearly already run conda init, because you are able to use conda activate interactively. In fact, if you open up your .bashrc, you should be able to see a few lines added by conda teaching your shell where to look for conda commands. The problem with your script, though, lies in the fact that the .bashrc is not sourced by the subshell that runs shell scripts (see this answer for more info). This means that even though your non-login interactive shell sees the conda commands, your non-interactive script subshells won't - no matter how many times you call conda init.
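You can see this for yourself by comparing what the two kinds of shell think conda is (a sketch; the exact output wording varies with your bash and conda versions):
$ type conda                   # in your interactive shell
conda is a function
$ bash -c 'type conda'         # in a fresh non-interactive subshell
bash: type: conda: not found   # or it may find only the plain conda binary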
This leads to a conjecture (I don't have conda on Linux myself, so I can't test it) that by running your script like so:
bash -i shell_script.sh
you should see conda activate work correctly. Why? -i is a bash flag that tells the shell you're starting to run in interactive mode, which means it will automatically source your .bashrc. This should be enough to enable you to use conda within your script as if you were using it normally.
Quick solution for bash: prepend the following init line to your Bash scripts.
eval "$(command conda 'shell.bash' 'hook' 2> /dev/null)"
Done.
For other shells, check the init configuration of your shell, copy the content between the following markers out of your shell config, and prepend it to your scripts.
# >>> conda initialize >>>
...
# <<< conda initialize <<<
You can also use
conda init --all --dry-run --verbose
to get the init script you need in your scripts.
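Putting it together, a minimal script could look like this (a sketch; scratch is the environment name from the question):
#!/bin/bash
# Initialize conda's shell functions in this non-interactive shell,
# then activate the environment as usual.
eval "$(command conda 'shell.bash' 'hook' 2> /dev/null)"
conda activate scratch
python --version   # now runs the python from the scratch environment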
Explanation
This is related to the introduction of conda init in conda 4.6.
Quote from the conda 4.6 release log:
Conda 4.4 allowed "conda activate envname". The problem was that setting up your shell to use this new feature was not always straightforward. Conda 4.6 adds extensive initialization support so that more shells than ever before can use the new "conda activate" command. For more information, read the output from "conda init --help".
After conda init was introduced in conda 4.6, conda only exposes the command conda on the PATH, not all the binaries from "base". Environment switching is unified as conda activate env-name and conda deactivate on all platforms.
But to make these new commands work, you have to do an additional initialization with conda init.
The problem is that your script file is run in a sub-shell, and conda is not initialized in this sub-shell.
References
Conda 4.6 Release
Unix shell initialization
Shell startup scripts
Using conda activate or source activate in shell scripts does not always work and can throw errors like this. An easy workaround is to place source ~/miniconda3/etc/profile.d/conda.sh above any conda activate command in the script:
source ~/miniconda3/etc/profile.d/conda.sh # Or path to where your conda is
conda activate some-conda-environment
This is the solution that has worked for me, and it will also work when sharing scripts. It also gets around having to use conda init, as on some clusters I have worked with, the system was initialised but conda activate still wouldn't work in a shell script.
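For example, a complete script using this workaround might look like the following (a sketch; adjust the conda path and environment name to your setup):
#!/bin/bash
# Load conda's shell functions directly, without relying on conda init.
source ~/miniconda3/etc/profile.d/conda.sh
conda activate some-conda-environment
# Everything below runs inside the activated environment.
python --version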
If you want to run another Python file in a different conda env from your script, just run it via conda run, e.g. from within Python:
os.system('conda run -n <env_name> python <path_to_other_script>')
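If you are calling this from Python anyway, subprocess.run avoids going through an intermediate shell; a minimal sketch (the environment name and script path are placeholders):
import subprocess

# Same idea as the os.system call above, without an extra shell in between.
subprocess.run(
    ["conda", "run", "-n", "env_name", "python", "path/to/other_script.py"],
    check=True,  # raise CalledProcessError on a non-zero exit status
)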
What is the problem with simply doing something like this in your shell:
source /opt/conda/etc/profile.d/conda.sh
(conda init is still marked as experimental, so I'm not sure whether it is a good idea to use it yet.)
I also had the exact same error when trying to activate a conda env from a C++ or Python file. I solved it by bypassing the conda activate statement and using the absolute path of the specific conda env.
For me, I set up an environment called "testenv" using conda.
I searched all Python environments using
whereis python | grep 'miniconda'
It returned a list of python environments. Then I ran my_python_file.py using the following command.
~/miniconda3/envs/testenv/bin/python3.8 my_python_file.py
You can do the same thing on Windows too, but looking up Python and conda environments is a bit different.
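As an aside, instead of grepping whereis output you can ask conda itself for each environment's prefix (a sketch; the paths shown are illustrative):
$ conda env list
# conda environments:
#
base                     /home/user/miniconda3
testenv                  /home/user/miniconda3/envs/testenv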
This answer from GitHub worked for me (I'm using Ubuntu, so it's not only for Windows):
eval "$(conda shell.bash hook)"
conda activate my_env
I followed a solution similar to the one from hong-xu.
So to run a shell command that calls the script with arguments, using a specific conda environment, from a Jupyter cell it goes like this:
p1 = <some-value>
env_name = "<env-name>"        # placeholder: name of your conda environment
script = "<script-name>.py"    # placeholder: path to your script
run = f"conda run -n {env_name} python {script} --parameter_1={p1}"
!{run}
I didn't find any of the above scripts useful for my case. They are fine if you want to run conda in non-interactive mode, but I'd like to use it in interactive mode. If I run:
conda activate my_environment
in a bash script, the environment is only active inside the script and is gone again once the script exits.
I found that creating an alias in .bashrc is all that is required to change directory to a particular project I'm working on, and set up the correct conda environment for me. So I included in .bashrc:
alias my_environment="cd ~/subdirectory/my_project && conda activate my_environment"
and then:
source ~/.bashrc
Then I can just type at the command line:
my_environment
to change to the correct project and correct environment every time I want to work on a different project.
This answer is similar to @Lamma's answer. This worked for me:
(1) I defined several variables: the conda activate script, the environments directory, and the environment
conda_activate=~/anaconda3/bin/activate
conda_envs_dir=~/anaconda3/envs
conda_env=<env name>
(2) I source the activate script with the environment
source ${conda_activate} ${conda_envs_dir}/${conda_env}
(3) then you can run your python script
python <path to script.py>
This bypasses the conda init requirement. My .bashrc was already initialized, and sourcing the .bashrc file didn't work for me. @Lamma's answer worked for me, as does the above code.
The problem is that when you run the bash script, a new (Linux) shell environment is created that was not initialized properly. If your intention is to activate a conda environment and then run Python through the script, you can properly initialize the created shell environment as discussed in the accepted solution.
If, however, you want to have the conda environment active after this script finishes, then this will not work, because the conda environment was changed in the new shell, and you exit that shell when the script finishes. Think of this as running bash, then conda activate ..., then running exit to exit that bash. More details in How to execute script in the current shell on Linux?:
TL;DR:
Add the line #!/bin/bash as the first line of the script
Type the command source shell_script.sh or . shell_script.sh
Note: in bash, . is equivalent to source.
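A quick check that the activation now persists in your current shell (a sketch, reusing the script and environment from the question):
$ source shell_script.sh                    # or: . shell_script.sh
$ conda info | egrep "active environment"
active environment : scratch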
$ conda activate scratch
or
$ source activate scratch
# Open a terminal or CMD as administrator
$ cd <path Anaconda3 install>\Scripts
$ activate
$ cd ..
$ conda activate scratch
I have a simple C++ program that I'm trying to run that is linked against a version of the Boost.Thread library that I built previously. I seem to be having trouble understanding the way that runtime library paths behave on OS X.
Since my Boost library doesn't have an RPATH-relative install name, I'm using the DYLD_LIBRARY_PATH environment variable to tell the dynamic linker where to find libboost_thread.dylib at runtime.
This works fine if I run the program directly in my (bash) shell:
[~/git/project]$ echo $DYLD_LIBRARY_PATH
/Users/jasonr/git/project/boost/lib
[~/git/project]$ .sconf_temp/conftest_7
[~/git/project]$ # Program runs successfully; this is what I expect.
However, this program is run as part of a series of tests by an autoconf-like framework that I'm using. It runs the program in a child shell using sh -c. Here's what happens if I try that:
[~/git/project]$ # Make sure the environment variable is exported to child shells.
[~/git/project]$ export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH
[~/git/project]$ # Try to run it in a child shell.
[~/git/project]$ sh -c .sconf_temp/conftest_7
dyld: Library not loaded: libboost_thread.dylib
Referenced from: /Users/jasonr/git/project/.sconf_temp/conftest_7
Reason: image not found
Trace/BPT trap: 5
It's as if the environment variable isn't propagated to dyld in this case. Why would this occur? I'm more familiar with the behavior of LD_LIBRARY_PATH on Linux, which (I think) should work with the above example. Is there something else I need to do in order to make this work?
Presumably, you are running El Capitan (OS X 10.11) or later. It's a side effect of System Integrity Protection. From the System Integrity Protection Guide: Runtime Protections article:
When a process is started, the kernel checks to see whether the main executable is protected on disk or is signed with a special system entitlement. If either is true, then a flag is set to denote that it is protected against modification. …
… Any dynamic linker (dyld) environment variables, such as DYLD_LIBRARY_PATH, are purged when launching protected processes.
All of the system-provided interpreters, including /bin/sh, are protected in this fashion. Therefore, when you invoke sh, all DYLD_* environment variables are purged.
You could write a shell script which sets DYLD_LIBRARY_PATH and then executes .sconf_temp/conftest_7. You can use the shell interpreter to execute that script (indeed, you must), and the environment variable will be fine, since the purging happens when a protected executable is started. Basically, this approach is analogous to the working example in your question, but encapsulated in a shell script.
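A minimal sketch of such a wrapper (the library path is taken from your question; the file name conftest_wrapper.sh is hypothetical):
#!/bin/sh
# conftest_wrapper.sh: export the variable inside the script, after the
# protected /bin/sh has already started, so it is not purged.
export DYLD_LIBRARY_PATH=/Users/jasonr/git/project/boost/lib
exec .sconf_temp/conftest_7 "$@"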
Often when using conda environments, the environment variables set in my rc file are overwritten (experienced with both bash and zsh), particularly my $PATH variable. It does not always happen, though; I just experienced this, then tried to reproduce it, but the second time it did not happen.
After activating the conda environment (source activate py35) I can resource my rc file (source ~/.zshrc) to recover my environment variables but can I avoid this extra step?
What does 'export' do when used on a command line?
For example, and this is only one example, I build a number of C++ libraries, and for a library such as zlib-1.2.8 I need to specify the install directories.
To do this I need to do the following in the MSYS command line interface:
export LIBRARY_PATH="c/libraries/libs;$LIBRARY_PATH"
Would anyone know what the command 'export' actually does in this instance?
Does it permanently install a record for MSYS to use later on when looking for dependencies such as zlib? When I run make install, the zlib library file is placed in this directory.
Or, when I close MSYS, is this LIBRARY_PATH lost from MSYS's memory?
Thanks
This is the bash syntax to set an environment variable. Using export makes the variable visible to processes started from the shell (or script) in which it's defined, rather than just to that shell itself.
Environment variables only affect the MSYS process and any child processes started from that shell. If you want the variable to persist after you close the command line and start a new one, you will need to put the export into a startup script such as .bashrc.
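A quick illustration of the difference (a sketch you can run in any bash-like shell, including MSYS):
LIBRARY_PATH="/c/libraries/libs"   # plain assignment: visible in this shell only
sh -c 'echo "[$LIBRARY_PATH]"'     # child prints [] - not inherited
export LIBRARY_PATH                # mark it for export to child processes
sh -c 'echo "[$LIBRARY_PATH]"'     # child now prints [/c/libraries/libs]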
I have a shell script that exports a couple of environment variables that are needed for building a software project (Android Keystore Location).
Usually when I call the script, the environment variables are exported and the IDE can access them, as does the export command in the Bash terminal.
Since I installed Mac OS X El Capitan, the environment variables set by the Bash command
export FOO="bar"
are not returned when I try to access them by
echo $FOO
on the shell. Instead I only get an empty line returned.
If I use printenv from within the shell script, $FOO is displayed.
When I call printenv from the Bash terminal, $FOO is missing.
I read that the OS X "El Capitan" update fixes some security issues concerning bash. Could that be the cause?
You cannot modify or export variables in the shell with a shebang script (a script whose first line begins with #! and the pathname of an executable), if that's what you're doing, because that creates a separate process to execute the script and any variables it exports are only visible to its child processes.
You can only arrange for a script to modify variables in the current shell by executing the script within the current process with . (or source) for example.
For example, the ~/.bashrc, ~/.bash_profile and ~/.bash_logout scripts are executed directly by the shell, so they can set or export variables to be inherited by commands and sub-shells run from the shell.
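To see the difference concretely (a sketch; set_foo.sh is a hypothetical script containing the export from your question):
$ cat set_foo.sh
#!/bin/bash
export FOO="bar"
$ ./set_foo.sh       # runs in a child process; its FOO dies with it
$ echo "$FOO"

$ . ./set_foo.sh     # runs in the current shell instead
$ echo "$FOO"
bar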
I am having a similar issue. It seems that environment variables are not being propagated to child processes from bash. I solved it by explicitly adding the variables I need to the child.
export MYVAR=foo
MYVAR=$MYVAR ./executable_to_launch
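For what it's worth, the standard env utility does the same thing as the prefix assignment, setting the variable for that single child process only (a sketch):
env MYVAR=foo ./executable_to_launch   # MYVAR is set for this one command only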
I would be interested to see if someone has found a better solution.