How to get path of conda env from its name?

When I do conda info --envs, I get a list of all envs and their locations, like this:
# conda environments:
#
base * /Users/mbp/miniconda3
myenv /Users/mbp/miniconda3/envs/myenv
Is there a way to get the location of the myenv environment from the command line? Maybe something like conda info --envs myenv to get
/Users/mbp/miniconda3/envs/myenv
What's the use case?
I want to cache all the environment dependencies in GitHub Actions. This has to happen before the environment is activated. If I know the location of the environment, I can cache all the files in it.

conda info --envs | grep -Po 'myenv\K.*' | sed 's: ::g'
This bash command lists all conda environments, grabs everything after myenv on its line, and passes it to the sed command, which in turn strips the spaces, leaving just the path.
Surprisingly, it worked for me.

$(conda info --base)/envs/myenv
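By default, named environments live in the envs directory under the base prefix, so the path is just the base prefix plus envs/myenv (this won't hold for environments created with an explicit --prefix). For example, to capture it in a shell variable for a later caching step:
ENV_PATH="$(conda info --base)/envs/myenv"
echo "$ENV_PATH"
# /Users/mbp/miniconda3/envs/myenv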

Conda's internal function to handle prefix resolution (locate_prefix_by_name) is currently located in conda.base.context, so one could avoid listing all envs with a script like:
conda-locate.py
#!/usr/bin/env conda run -n base python
import sys
from conda.base.context import locate_prefix_by_name
print(locate_prefix_by_name(sys.argv[1]), end='')
Usage
# make executable
chmod +x conda-locate.py
./conda-locate.py jupyter
# /Users/user/miniforge/envs/jupyter
You may want to add a try/except block to handle the conda.exceptions.EnvironmentNameNotFound exception.
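A sketch of that error handling, extending the script above (the exit message is just an example):
#!/usr/bin/env conda run -n base python
import sys
from conda.base.context import locate_prefix_by_name
from conda.exceptions import EnvironmentNameNotFound

try:
    # print the resolved prefix, or exit with an error if the name is unknown
    print(locate_prefix_by_name(sys.argv[1]), end='')
except EnvironmentNameNotFound:
    sys.exit(f"No environment named {sys.argv[1]!r}")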

Related

conda environment doesn't activate in bash script job on computing cluster [duplicate]

This question already has answers here:
Using conda activate or specifying python path in bash script?
I am trying to submit a SLURM job on a computing cluster running CentOS 7. The job is a Python file (cifar100-vgg16.py) which needs tensorflow-gpu 2.8.1, which I've installed in a conda environment (tf_gpu). The bash script I'm submitting to SLURM (our job scheduler) is copied below. The SLURM output file shows that the environment being used is the base conda environment Python/3.6.4-foss-2018a (with tensorflow 1.10.1), not tf_gpu. Please advise on how to solve this.
Bash script:
#!/bin/bash --login
########## SBATCH Lines for Resource Request ##########
#SBATCH --time=00:10:00 # limit of wall clock time - how long the job will run (same as -t)
#SBATCH --nodes=1 # the number of nodes requested
#SBATCH --ntasks=1 # the number of tasks to run
#SBATCH --cpus-per-task=1 # the number of CPUs (or cores) per task (same as -c)
#SBATCH --mem-per-cpu=2G # memory required per allocated CPU (or core) - amount of memory (in bytes)
#SBATCH --job-name test2 # you can give your job a name for easier identification (same as -J)
########## Command Lines to Run ##########
conda activate tf_gpu
python cifar100-vgg16.py
SLURM output file:
> /opt/software/Python/3.6.4-foss-2018a/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/opt/software/Python/3.6.4-foss-2018a/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/opt/software/Python/3.6.4-foss-2018a/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/opt/software/Python/3.6.4-foss-2018a/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/opt/software/Python/3.6.4-foss-2018a/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/opt/software/Python/3.6.4-foss-2018a/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Tensorflow version 1.10.1
Keras version 2.1.6-tf
Scikit learn version 0.20.0
Traceback (most recent call last):
File "cifar100-vgg16.py", line 39, in <module>
print("Number of GPUs Available:", len(tensorflow.config.experimental.list_physical_devices('GPU')))
AttributeError: module 'tensorflow' has no attribute 'config'
There is a mistake in your job script. Replace conda activate tf_gpu with source activate tf_gpu.
Also, I guess you need to load the module so that you can use it. It would be something like module load anaconda; check module avail for the list of available modules.
But it looks like your HPC doesn't need a module load, since it finds conda without one.
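Putting this together, the command section of the job script would become something like this (a sketch; keep the module load line only if your cluster needs it, and check module avail for the exact module name):
########## Command Lines to Run ##########
module load anaconda
source activate tf_gpu
python cifar100-vgg16.py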
EDIT: FlytingTeller said that source activate was replaced by conda activate in 2017. I know this.
I don't know if this works on HPCs. To prove my point, here is the output of Swansea SUNBIRD when I use conda activate.
(base) hell@Dell-Precision-T7910:~$ ssh sunbird
Last login: Wed Aug 10 15:30:29 2022 from en003013.swan.ac.uk
====================== Supercomputing Wales - Sunbird ========================
This system is for authorised users, if you do not have authorised access
please disconnect immediately, and contact Technical Support.
-----------------------------------------------------------------------------
For user guides, documentation and technical support:
Web: http://portal.supercomputing.wales
-------------------------- Message of the Day -------------------------------
SUNBIRD has returned to service unchanged. Further information on
the maintenance outage and future work will be distributed soon.
===============================================================================
[s.1915438@sl2 ~]$ module load anaconda/3
[s.1915438@sl2 ~]$ conda activate base
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
If your shell is Bash or a Bourne variant, enable conda for the current user with
$ echo ". /apps/languages/anaconda3/etc/profile.d/conda.sh" >> ~/.bashrc
or, for all users, enable conda with
$ sudo ln -s /apps/languages/anaconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh
The options above will permanently enable the 'conda' command, but they do NOT
put conda's base (root) environment on PATH. To do so, run
$ conda activate
in your terminal, or to put the base environment on PATH permanently, run
$ echo "conda activate" >> ~/.bashrc
Previous to conda 4.4, the recommended way to activate conda was to modify PATH in
your ~/.bashrc file. You should manually remove the line that looks like
export PATH="/apps/languages/anaconda3/bin:$PATH"
^^^ The above line should NO LONGER be in your ~/.bashrc file! ^^^
[s.1915438@sl2 ~]$ source activate base
(base) [s.1915438@sl2 ~]$
Here is the output of Cardiff HAWK when I use conda activate.
(base) hell@Dell-Precision-T7910:~$ ssh cardiff
Last login: Tue Aug 2 09:32:44 2022 from en003013.swan.ac.uk
======================== Supercomputing Wales - Hawk ==========================
This system is for authorised users, if you do not have authorised access
please disconnect immediately, and contact Technical Support.
-----------------------------------------------------------------------------
For user guides, documentation and technical support:
Web: http://portal.supercomputing.wales
-------------------------- Message of the Day -------------------------------
- WGP Gluster mounts are now RO on main login nodes.
- WGP RW access is via Ser Cymru system or dedicated access VM.
===============================================================================
[s.1915438@cl1 ~]$ module load anaconda/
anaconda/2019.03 anaconda/2020.02 anaconda/3
anaconda/2019.07 anaconda/2021.11
[s.1915438@cl1 ~]$ module load anaconda/2021.11
INFO: To setup environment run:
eval "$(/apps/languages/anaconda/2021.11/bin/conda shell.bash hook)"
or just:
source activate
[s.1915438@cl1 ~]$ conda activate
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
[s.1915438@cl1 ~]$ conda activate base
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
[s.1915438@cl1 ~]$ source activate base
(2021.11)[s.1915438@cl1 ~]$
The conda versions are certainly from after 2020, not 2017. Since the question was about an HPC cluster, that is why I said to replace conda activate with source activate, to activate the conda environment in the first place.
Anyone with a possible explanation?
EDIT 2: I think I have an explanation.
[s.1915438@sl2 ~]$ cat ~/.bashrc
# .bashrc
# Dynamically generated by: genconfig (Do not edit!)
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# Load saved modules
module load null
# Personal settings file
if [ -f $HOME/.myenv ]
then
source $HOME/.myenv
fi
The ~/.bashrc does not contain the path to conda.sh. I think this is true for many HPCs.
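An alternative consistent with this explanation is to source the hook in the job script itself before calling conda activate; a minimal sketch, reusing the conda.sh path printed in the SUNBIRD error message above (the path will differ on other clusters):
module load anaconda/3
. /apps/languages/anaconda3/etc/profile.d/conda.sh
conda activate base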

Conda Environment doesn't stay activated throughout the bash script

I'm making an installer using the bash script below. After activating the env and later checking the active env using conda info, it shows that no env is active.
Even after installing the dependencies, the packages are not installed in the env; instead they end up in the base env.
The script-
#!/usr/bin/bash
set -euo pipefail
conda create --name count python=3.7 <<< 'y'
. $(conda info --base)/etc/profile.d/conda.sh
conda activate count
conda info
#install dependencies
pip install -r requirements_count.txt
Thanks in advance for going through the query!
First, iteratively using ad hoc installs is a bad pattern. Better practice is to predefine everything in the environment with a YAML. That can even include PyPI packages and requirements.txt files.
count.yaml
name: count
channels:
- conda-forge
dependencies:
## Conda deps
- python=3.9 ## change to what you want, but preferably specify a version
## PyPI deps
- pip
- pip:
- -r requirements_count.txt
Second, conda activate is intended for interactive shells, not scripts. One can run a bash script in a login shell by including a -l flag in the shebang. But better practice is to use conda run, which is designed for programmatic execution of commands in a specified environment.
Pulling this together, the script might be:
script.sh
#!/usr/bin/bash
set -euo pipefail
## create the environment
conda env create -n count -f count.yaml
## execute a Python script in the environment
conda run -n count python my_code.py
If the code run in the environment needs to interact with a user or stream output, you may need additional flags (--no-capture-output, --live-stream). See conda run --help.
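For instance (a sketch; flag availability depends on the conda version):
conda run --no-capture-output -n count python my_code.py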

Calling An Anaconda Environment from MATLAB: Conda Command Not Found

I want to call a Python script I created in its own Anaconda environment from MATLAB 2020a. However, when I try to activate the environment from MATLAB, I get an error message:
system('conda activate *name_of_environment*')
/bin/bash: conda: command not found
I installed the newest version of anaconda3 (2020.02) on an Ubuntu 18.04 machine and didn't add conda's bin directory to .bashrc but added the conda.sh initialization instead, as recommended here:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/michael/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/home/michael/anaconda3/etc/profile.d/conda.sh" ]; then
. "/home/michael/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/home/michael/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
# export PATH="/home/michael/anaconda3/bin:$PATH" # commented out by conda initialize
#Enable conda to be called from bash
source /home/michael/anaconda3/etc/profile.d
However, I can't find an explanation of how to run conda from MATLAB otherwise. Am I missing something?
Thanks a bunch, and best,
Michael
Let me elaborate on my comment in an answer.
Binaries are found through the PATH environment variable. The location of conda is not in that variable, so you should either add it to your PATH, or un-comment the export PATH line in the script you posted.
Example:
$ export PATH="$PATH:/home/michael/anaconda3/bin/"
$ ./yourscript.sh
But it can also be that the PATH variable isn't passed through system(), which I guess executes the command in a new shell. In that case, you should execute it as:
system('/home/michael/anaconda3/bin/conda activate *name_of_environment*')
I know it is too late, but maybe the best way to run a Python script using a conda environment is to call the Python executable associated with that environment directly:
system('~/anaconda3/envs/<name_of_environment>/bin/python your_script.py')

conda activate on Travis CI

I am using conda 4.6.8 to test a python package in a conda env on Travis CI. I want to replace my old source activate ENVNAME line with the new conda activate ENVNAME command in my Travis CI configuration. If I run this on Travis:
>>> conda update -n base conda
>>> conda init
no change /home/travis/miniconda/condabin/conda
no change /home/travis/miniconda/bin/conda
no change /home/travis/miniconda/bin/conda-env
no change /home/travis/miniconda/bin/activate
no change /home/travis/miniconda/bin/deactivate
no change /home/travis/miniconda/etc/profile.d/conda.sh
no change /home/travis/miniconda/etc/fish/conf.d/conda.fish
no change /home/travis/miniconda/shell/condabin/Conda.psm1
no change /home/travis/miniconda/shell/condabin/conda-hook.ps1
no change /home/travis/miniconda/lib/python3.7/site-packages/xonsh/conda.xsh
no change /home/travis/miniconda/etc/profile.d/conda.csh
modified /home/travis/.bashrc
==> For changes to take effect, close and re-open your current shell. <==
How can I "close and re-open" my shell on Travis? Because otherwise I cannot activate my conda environment:
>>> conda create -n TEST package_names
>>> conda activate TEST
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
The command "conda activate TEST" failed and exited with 1 during .
Your build has been stopped.
Not sure it is currently supported, as the official doc still uses source in its travis.yml.
What does conda init do?
This new command should harmonize the way users set up their shells to be able to call conda activate.
Actually, if you run conda init --dry-run --verbose you will see that it tries to source conda.sh from your ~/.bashrc (assuming you're running Bash, from the info mentioned in your question).
And conda.sh will define a conda() function that catches a few commands, among which activate and deactivate, and dispatches them to $CONDA_EXE:
conda() {
if [ "$#" -lt 1 ]; then
"$CONDA_EXE"
else
\local cmd="$1"
shift
case "$cmd" in
activate|deactivate)
__conda_activate "$cmd" "$@"
;;
install|update|upgrade|remove|uninstall)
"$CONDA_EXE" "$cmd" "$@" && __conda_reactivate
;;
*) "$CONDA_EXE" "$cmd" "$@" ;;
esac
fi
}
So unless this function is defined in your local shell, you won't be able to call conda activate.
Hint on a solution? (not tested for Travis CI)
The only hint I can suggest is to try source $(conda info --root)/etc/profile.d/conda.sh and then conda activate. This should do roughly the same as conda init assuming you are using Bourne shell derivatives.
For csh there is $(conda info --root)/etc/profile.d/conda.csh, and for fish there is $(conda info --root)/etc/fish/conf.d/conda.fish
Note: although not tested for Travis CI, this solution works for me from bash. Of course, the conda executable should be found in PATH for conda info --root to work properly.
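Put together, the relevant lines in the Travis configuration might look like this (a sketch based on the hint above, not tested on Travis CI; TEST and package_names are the placeholders from the question):
conda create -n TEST package_names
source "$(conda info --root)/etc/profile.d/conda.sh"
conda activate TEST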

Cannot run source activate with conda in Fish-shell

I followed conda PR 545 and conda issue 4221, and it is still not working on Ubuntu.
After downloading conda.fish from here, I moved it to anaconda3/bin/
and added "source /home/phejimlin/anaconda3/bin/conda.fish" at the end of ~/.config/fish/config.fish.
conda activate spark_env
Traceback (most recent call last):
File "/home/phejimlin/anaconda3/bin/conda", line 6, in
sys.exit(conda.cli.main())
File "/home/phejimlin/anaconda3/lib/python3.6/site-packages/conda/cli/main.py", line 161, in main
raise CommandNotFoundError(argv1, message)
TypeError: __init__() takes 2 positional arguments but 3 were given
or
activate spark_env
Error: activate must be sourced. Run 'source activate envname'
instead of 'activate envname'.
Am I missing something?
As of fish 2.6.0 and conda 4.3.27 (the following steps may change as the issue is addressed):
update config
Take note of your conda's location
conda info --root
/Users/mstreeter/anaconda # this is my <PATH_TO_ROOT>
Add line to ~/.config/fish/config.fish
source <PATH_TO_ROOT>/etc/fish/conf.d/conda.fish
update convention
Typically you'd run the following from bash
source activate <environment>
source deactivate <environment>
Now you must run the following from fish
conda activate <environment>
conda deactivate <environment>
issues
So after doing this, I'm not able to set fish as my default shell and have it still work properly with conda. Currently, I must first enter my default shell and then start fish, and then everything works as expected. I'll update this after I find out how to get it working completely without the need to explicitly choose fish each time I log into my terminal.
If you follow https://github.com/conda/conda/issues/2611, the steps are (from start):
[root@6903a8d80f9b ~]# fish
root@6903a8d80f9b ~# echo $FISH_VERSION
2.4.0
root@6903a8d80f9b ~# bash Miniconda2-4.3.11-Linux-x86_64.sh -b -p /conda
root@6903a8d80f9b ~# source /conda/etc/fish/conf.d/conda.fish
root@6903a8d80f9b ~# conda activate root
root@6903a8d80f9b ~# conda create -yn fishtest (root)
Fetching package metadata .........
Solving package specifications:
Package plan for installation in environment /conda/envs/fishtest:
#
# To activate this environment, use:
# > source activate fishtest
#
# To deactivate this environment, use:
# > source deactivate fishtest
#
root@6903a8d80f9b ~# conda activate fishtest (root)
root@6903a8d80f9b ~# (fishtest)
root@6903a8d80f9b ~# conda deactivate fishtest (fishtest)
Adding conda's bin directory to PATH isn't recommended as of conda 4.4.0
https://github.com/conda/conda/blob/master/CHANGELOG.md#440-2017-12-20
All you need to do is add
source <path-to-anaconda>/etc/fish/conf.d/conda.fish
to config.fish.
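For example, assuming Anaconda is installed at ~/anaconda3 (adjust the path to your install):
echo "source $HOME/anaconda3/etc/fish/conf.d/conda.fish" >> ~/.config/fish/config.fish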

Resources