How can I activate a Conda environment using Python code?

How can I activate a Conda environment using Python code?
Here are the things I have tried so far, but they don't seem to change my environment:
import os
os.system('conda activate envname')
# Doesn't work but returns no errors.
import os
stream = os.popen('conda activate envname')
output = stream.read()
# Doesn't work but returns no errors.
import subprocess
process = subprocess.Popen(['conda', 'activate', 'envname'],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           shell=True)
stdout, stderr = process.communicate()
# Doesn't work but returns no errors.
So how can I change my Conda environment using Python, in a way that works on all platforms (Linux, macOS, Windows)?
EDIT 1:
So from this question it seems that all I'm doing is changing the environment temporarily during the existence of the subprocess. I want a way to change it in my current running shell...
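For reference, the closest cross-platform thing I've found is to run the activation and the target command inside the same child process (which is what the conda run answers below boil down to); a minimal sketch, assuming an environment named envname and conda >= 4.6 on the PATH:
import subprocess

# The activation only lives inside this one child process, but the command
# itself runs with the environment's interpreter.
result = subprocess.run(
    ["conda", "run", "-n", "envname", "python", "--version"],
    capture_output=True, text=True,
)
print(result.stdout)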

Related

Need to understand and fix conda on Linux for a shell script [duplicate]

I am hoping to run a simple shell script to ease the management around some conda environments. Activating conda environments via conda activate on Linux works fine in an interactive shell but is problematic within a shell script. Could someone point me in the right direction as to why this is happening?
Example to repeat the issue:
# default conda env
$ conda info | egrep "conda version|active environment"
active environment : base
conda version : 4.6.9
# activate new env to prove that it works
$ conda activate scratch
$ conda info | egrep "conda version|active environment"
active environment : scratch
conda version : 4.6.9
# revert back to my original conda env
$ conda activate base
$ cat shell_script.sh
#!/bin/bash
conda activate scratch
# run shell script - this will produce an error even though it succeeded above
$ ./shell_script.sh
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
If I use the source command to run the shell script, it works:
source shell_script.sh
The error message is rather helpful - it's telling you that conda is not properly set up from within the subshell that your script is running in. To be able to use conda within a script, you will need to (as the error message says) run conda init bash (or whatever your shell is) first. The behaviour of conda and how it's set up depends on your conda version, but the reason for the version 4.4+ behaviour is that conda is dependent on certain environment variables that are normally set up by the conda shell itself. Most importantly, this changelog entry explains why your conda activate and deactivate commands no longer behave as you expect in versions 4.4 and above.
For more discussion of this, see the official conda issue on GitHub.
Edit: Some more research tells me that the conda init function mentioned in the error message is actually a new v4.6.0 feature that allows a quick environment setup so that you can use conda activate instead of the old source activate. However, the reason why this works is that it adds/changes several environment variables of your current shell and also makes changes to your RC file (e.g.: .bashrc), and RC file changes are never picked up in the current shell - only in newly created shells. (Unless of course you source .bashrc again). In fact, conda init --help says as much:
IMPORTANT: After running conda init, most shells will need to be closed and restarted for changes to take effect
However, you've clearly already run conda init, because you are able to use conda activate interactively. In fact, if you open up your .bashrc, you should be able to see a few lines added by conda teaching your shell where to look for conda commands. The problem with your script, though, lies in the fact that the .bashrc is not sourced by the subshell that runs shell scripts (see this answer for more info). This means that even though your non-login interactive shell sees the conda commands, your non-interactive script subshells won't - no matter how many times you call conda init.
This leads to a conjecture (I don't have conda on Linux myself, so I can't test it) that by running your script like so:
bash -i shell_script.sh
you should see conda activate work correctly. Why? -i is a bash flag that tells the shell you're starting to run in interactive mode, which means it will automatically source your .bashrc. This should be enough to enable you to use conda within your script as if you were using it normally.
Quick solution for bash: prepend the following init script into your Bash scripts.
eval "$(command conda 'shell.bash' 'hook' 2> /dev/null)"
Done.
For other shells, check your shell's init config, copy the block between the following markers from that config, and prepend it to your scripts.
# >>> conda initialize >>>
...
# <<< conda initialize <<<
You can also use
conda init --all --dry-run --verbose
to get the init script you need in your scripts.
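If you need to drive this from Python (as in the original question), the same hook can be evaluated inside a single subshell before activating; a rough sketch, assuming bash is available and the environment is called envname:
import subprocess

# Evaluate the bash hook, activate, and run the payload in ONE subshell;
# the activation disappears when this child process exits.
cmd = 'eval "$(conda shell.bash hook)" && conda activate envname && python --version'
subprocess.run(cmd, shell=True, executable="/bin/bash", check=True)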
Explanation
This is related with the introduction of conda init in conda 4.6.
Quote from conda 4.6 release log
Conda 4.4 allowed “conda activate envname”. The problem was that setting up your shell to use this new feature was not always straightforward. Conda 4.6 adds extensive initialization support so that more shells than ever before can use the new “conda activate” command. For more information, read the output from “conda init --help”
After conda init was introduced in conda 4.6, conda only exposes the conda command on the PATH, not all the binaries from "base". Environment switching is unified as conda activate env-name and conda deactivate on all platforms.
But to make these new commands work, you have to do an additional initialization with conda init.
The problem is that your script file is run in a sub-shell, and conda is not initialized in this sub-shell.
References
Conda 4.6 Release
Unix shell initialization
Shell startup scripts
Using conda activate or source activate in shell scripts does not always work and can throw errors like this. An easy workaround is to place source ~/miniconda3/etc/profile.d/conda.sh above any conda activate command in the script:
source ~/miniconda3/etc/profile.d/conda.sh # Or path to where your conda is
conda activate some-conda-environment
This is the solution that has worked for me, and it will also work if sharing scripts. This also gets around having to use conda init; on some clusters I have worked with, the system is initialised but conda activate still won't work in a shell script.
If you want your shell script to run another Python file in a different conda env, just run that file via the following command.
os.system('conda run -n <env_name> python <path_to_other_script>')
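If you also want to capture the output or fail loudly on errors, subprocess.run does the same job as os.system; a sketch keeping the same placeholder names:
import subprocess

# <env_name> and <path_to_other_script> are placeholders for your own values
result = subprocess.run(
    ["conda", "run", "-n", "<env_name>", "python", "<path_to_other_script>"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)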
What is the problem with simply doing something like this in your shell:
source /opt/conda/etc/profile.d/conda.sh
(conda init is still marked as experimental, so it's not clear whether it is a good idea to use it yet.)
I also had the exact same error when trying to activate a conda env from a C++ or Python file. I solved it by bypassing the conda activate statement and using the absolute path of the Python interpreter in the specific conda env.
For me, I set up an environment called "testenv" using conda.
I searched all python environments using
whereis python | grep 'miniconda'
It returned a list of python environments. Then I ran my_python_file.py using the following command.
~/miniconda3/envs/testenv/bin/python3.8 my_python_file.py
You can do the same thing on Windows too, but looking up the Python and conda environments is a bit different.
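From Python, the same trick is just a subprocess call to that interpreter; a sketch assuming the default ~/miniconda3 location and the "testenv" environment from above:
import subprocess
from pathlib import Path

# Adjust the path if your conda lives elsewhere or the env has another name
env_python = Path.home() / "miniconda3" / "envs" / "testenv" / "bin" / "python3.8"
subprocess.run([str(env_python), "my_python_file.py"], check=True)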
This answer from GitHub worked for me (I'm using Ubuntu, so it's not only for Windows):
eval "$(conda shell.bash hook)"
conda activate my_env
I followed a solution similar to the one from hong-xu.
So to run a shell command that calls the script with arguments, using a specific conda environment, from a Jupyter cell it goes like this:
p1 = <some-value>
env_name = "<env-name>"        # conda environment to run in
script = "<script-name>.py"    # script to execute
run = f"conda run -n {env_name} python {script} --parameter_1={p1}"
!{run}
I didn't find any of the above scripts useful. These are fine if you want to run conda in non-interactive mode, but I'd like to run it in interactive mode. If I run:
conda activate my_environment
in a bash script, the activation only applies inside the script.
I found that creating an alias in .bashrc is all that is required to change directory to a particular project I'm working on, and set up the correct conda environment for me. So I included in .bashrc:
alias my_environment="cd ~/subdirectory/my_project && conda activate my_environment"
and then:
source ~/.bashrc
Then I can just type at the command line:
my_environment
to change to the correct project and correct environment every time I want to work on a different project.
This answer is similar to @Lamma's answer. This worked for me:
(1) I defined several variables: the conda activate script, the environments directory, and the environment name
conda_activate=~/anaconda3/bin/activate
conda_envs_dir=~/anaconda3/envs
conda_env=<env name>
(2) I source the activate script with the environment
source ${conda_activate} ${conda_envs_dir}/${conda_env}
(3) Then you can run your Python script
python <path to script.py>
This bypasses the conda init requirement. My .bashrc was already initialized, and sourcing the .bashrc file didn't work for me. @Lamma's answer worked for me, as did the above code.
The problem is that when you run the bash script, a new (Linux) shell environment is created that was not initialized properly. If your intention is to activate a conda environment and then run python through the script, you can properly initialize the created shell environment as discussed in the accepted solution.
If however you want to have the conda environment active after you finish this script, then this will not work because the conda environment has changed on the new shell and you exit that shell when you finish the script. Think of this as running bash, then conda activate... then running exit to exit that bash... More details in How to execute script in the current shell on Linux?:
TL;DR:
Add the line #!/bin/bash as the first line of the script
Type the command source shell_script.sh or . shell_script.sh
Note: in bash, . is equivalent to source.
$ conda activate scratch
or
$ source activate scratch
#open terminal or CMD as administrator
$ cd <path Anaconda3 install>\Scripts
$ activate
$ cd ..
$ conda activate scratch

Does Anaconda check the bashrc file for the CC path?

I think I have seen others who have had this same issue, but perhaps it is due to where the Anaconda image is being run on the system (i.e. where the venv that pip has installed into lives).
Just a note that my processor architecture is x86, with an older-style bus and memory layout.
What's more likely is that the CC environment variable is affecting pip/make/autotools.
You can view your environment variable's value by running:
env |grep ^CC=
The ~/.bashrc file itself isn't read unless an interactive shell is launched, so for those shell scripts and/or Make calls that pip uses to compile Python modules, it's not likely to be sourced, but any process spawned from a shell where the CC environment variable is set will inherit that environment variable, regardless of whether it reads ~/.bashrc.
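A small Python sketch of that inheritance (the compiler path is only an example value):
import os
import subprocess

# A child process inherits the parent's environment whether or not
# ~/.bashrc is ever read.
child_env = dict(os.environ, CC="/usr/bin/gcc")  # example value only
out = subprocess.run(["env"], env=child_env, capture_output=True, text=True)
print([line for line in out.stdout.splitlines() if line.startswith("CC=")])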

Sourcing bash file inside PyQt4 GUI

I'm working on a GUI built with PyQt4 and Python 2.7 that launches various demos for a Clearpath Husky running ROS Indigo. The GUI switches between launching navigation demos and visualizing my physical robot. To do this, it has to switch between launching demos on a local ROS and the ROS on my Husky. When switching between the two different ROS instances, I need to be able to "source" the devel/setup.bash for each ROS instance so that the packages are built correctly and visualizing the Husky inside Rviz doesn't break (Errors with TF frames like "No tf data. Actual error: Fixed Frame [odom] does not exist" and with RobotModel "URDF Model failed to parse"). In my .bashrc, if I source the Husky's setup.bash, the visualization works fine until I try running a local demo. This also happens vice versa; while sourcing the local setup.bash will run the local demos just fine, the Husky's visualization breaks.
Is there a way to use python's subprocess (or another alternative) to source the appropriate devel/setup.bash within the GUI's instance so that the visualization won't break?
Yes, it should be sufficient to source the setup script just before executing the ROS command you want, for instance:
#!/usr/bin/env python
from subprocess import Popen, PIPE
shell_setup_script = "/some/path/to/workspace/devel/setup.sh"
command = "echo $ROS_PACKAGE_PATH"
cmd = ". %s; %s" % (shell_setup_script, command)
output = Popen(cmd, stdout=PIPE, shell=True).communicate()[0]
print(output)
As you see, shell=True is used, which will execute the cmd in a subshell. Then, cmd contains . <shell_setup_script>; <command> which sources the setup script before executing the command. Please note that the .sh file is sourced instead of .bash since generic POSIX shell is likely to be used by Popen.
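If the GUI needs later calls to keep seeing the sourced variables (instead of re-sourcing for every command), one option is to capture the environment the setup script produces and copy it into the current process; a rough sketch, assuming GNU env (for the -0 flag) and the same placeholder path as above:
import os
from subprocess import Popen, PIPE

shell_setup_script = "/some/path/to/workspace/devel/setup.sh"
# Source the script in a subshell, then dump the resulting environment
# NUL-separated so multi-line values survive.
dump = Popen(". %s; env -0" % shell_setup_script,
             stdout=PIPE, shell=True).communicate()[0]
for entry in dump.decode().split("\0"):
    if "=" in entry:
        key, _, value = entry.partition("=")
        os.environ[key] = value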

problems with system shell access from ipython

Seeing some weird behavior in IPython that appears to depend on how it's launched. When launched in a terminal, certain shell commands work (pwd), but both 'ls' and '!ls' error out - see below for a portion of the OSError traceback.
When launched with the Qtconsole: "ipython qtconsole --pylab=inline", all appears to be well.
[Additional information: the shell commands work fine when ipython is launched as a notebook.]
[Additional information #2: "iptest core" generates 4 errors - all of which are related to File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py"]
Any suggestions? Thanks! Running Python 2.7.5, IPython 2.1.0, Mac OSX Mavericks.
...
/Library/Python/2.7/site-packages/IPython/core/interactiveshell.pyc in system_raw(self, cmd)
2277 cmd = py3compat.unicode_to_str(cmd)
2278 # Call the cmd using the OS shell, instead of the default /bin/sh, if set.
-> 2279 ec = subprocess.call(cmd, shell=True, executable=os.environ.get('SHELL', None))
2280 # exit code is positive for program failure, or negative for
2281 # terminating signal number.
OSError: [Errno 2] No such file or directory
I can't make sense of all of your symptoms, but the [Errno 2] error message suggests that your SHELL environment variable contains a value that doesn't point to an existing shell executable.
Normally, SHELL contains the full path to your default shell's executable, e.g., /bin/bash.
Things you can try:
Check the output of echo $SHELL and make sure it contains the full path of your default shell's executable.
Invoke ipython with an explicit SHELL value and see if the problem goes away: SHELL=/bin/bash ipython.
pwd doesn't actually invoke a shell instance, so that's why you don't see an error.
By contrast, ls (indirectly) and !ls (directly) do.
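The same check can be done from inside Python/IPython, as a quick sketch:
import os

shell = os.environ.get("SHELL")
print("%s -> %s" % (shell, "exists" if shell and os.path.exists(shell) else "missing or unset"))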
I'm running the same versions, and iptest core fails on my machine too, although differently (ImportError: No module named nose.plugins.builtin) - however, at least in my case it seems unrelated to running shells from ipython, which works fine.

Enthought Canopy - passing sys.argv from PySide Qt program

I've recently been looking at the Enthought distro of IPython. Today I decided to see if I could get some Qt GUI programs running and was successful after making minor changes. Simple example:
import sys
from PySide import QtGui  # was 'from PyQt4 import QtGui'
# app = QtGui.QApplication(sys.argv) -- not needed
win = QtGui.QWidget()
win.resize(320, 240)
win.setWindowTitle("Hello MIT 6X!")
win.show()
sys.exit() # was 'sys.exit(app.exec_())'
But I would like to be able to pass sys.argv in some cases. Most example code I see is in the form of the commented out 'app = ' line above. If I include it, I get
'RuntimeError: A QApplication instance already exists.'
Suggestions for passing arguments appreciated.
Two separate issues:
1) Passing command line arguments: As you have probably noticed, when you do the "Run" command from the Canopy editor, all it does is issue the IPython %run magic command. You can type the same command in the IPython shell, plus command line parameters, which your program will see. Or to save keystrokes, do this auto-generated Run command once, then press Up Arrow in the IPython shell to recall that auto-generated %run command, then enter your parameters after the filename, and then press Enter. You'll end up with an IPython magic command like this:
%run pathtoprog/myprogrampy p1 p2 p3
We (Enthought) are considering adding a setting for command-line parameters so that you could do "Run with parameters" and have the best of both worlds.
2) Existing QApplication: By default, Canopy's IPython is running in IPython's interactive Pylab mode, with a Qt backend. If you don't want this, you can just disable Pylab mode in the Canopy Preferences/Python menu, or change the Pylab mode to Inline (for matplotlib) instead of Interactive.
For maximum flexibility, with a bit more work, you could (as matplotlib does) introduce logic which checks whether a QApplication already exists, use it if it exists and create it if it does not.
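A sketch of that check with the PySide imports used above:
from PySide import QtGui
import sys

# Reuse the QApplication that Canopy/Pylab already created, if any;
# otherwise create one of our own with the command-line arguments.
app = QtGui.QApplication.instance()
if app is None:
    app = QtGui.QApplication(sys.argv)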
