I use conda 4.7.11 with auto_activate_base: false in my ~/.condarc. I installed htop with conda install -c conda-forge htop; it ended up at ~/miniconda3/bin/htop. When the base environment is active I can use htop because ~/miniconda3/bin is prepended to the PATH variable. But when all environments are deactivated, only ~/miniconda3/condabin is prepended to PATH. In any environment other than base, ~/miniconda3/envs/CUSTOM_ENV/bin and ~/miniconda3/condabin are prepended to PATH, but not ~/miniconda3/bin, which is why I can use htop only from the base environment. So my question is: how can I use a conda-installed htop from all environments, including when every environment is deactivated?
Please don't suggest package managers like apt or yum: I'm on CentOS and have no root access to use them. Thank you in advance.
Conda environments aren't nested, so what is in base is not inherited by the others. Isolation is the whole point of environments, so it should make sense that the content of the base env is not accessible unless it is activated.
Option 1: Environment Stacking
However, there is an option to explicitly stack environments, which at this point literally means what you're asking for, namely, keeping the previous environment's bin/ in the PATH variable. So, if you have htop installed only in base, you can retain access to it in other envs like so:
conda activate base
conda activate --stack my_env
If you decide to go this route, I think it would be prudent to be very minimal about what you install in base. Of course, you could also create a non-base env to stack on, but then it might be a bother to have to always activate this env, whereas in default installs, base auto-activates.
Starting with Conda v4.8, there is an auto_stack configuration option:
conda config --set auto_stack true
See the documentation on environment stacking for details.
Option 2: Install by Default
If you want to have htop in every env but not outside of Conda envs, then the naive solution is to install it in every env. Conda offers a simple solution to this called Default Packages, configured in the Conda config under the key create_default_packages. Running the following will tell Conda to always install htop when creating a new env:
conda config --add create_default_packages htop
Unfortunately that won't update any existing envs, so you'd still have to go back and do that (e.g., Install a package into all envs). There's also a --no-default-packages flag for ignoring default packages when creating new envs.
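For the existing envs, a small shell loop can apply the same install to each one. The name extraction below is a sketch that assumes env names sit in the first column of conda env list output (envs created only by path may not have a name there):

```shell
# list_envs extracts env names from `conda env list`-style output
# (assumption: names are in the first column; comment lines start with '#').
list_envs() { awk '$1 !~ /^#/ && NF {print $1}'; }

# Usage (requires conda; installs htop into every listed env):
#   conda env list | list_envs | xargs -I{} conda install --yes --name {} htop
```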
Option 3: Global Installs
A Word of Caution
The following two options are not official recommendations, so caveat emptor and, if you do ever use them, be sure to report such a non-standard manipulation of $PATH when reporting problems/troubleshooting in the future.
Linking
Another option (although more manual) is to create a folder in your user directory (e.g., ~/.local/bin) that you add to $PATH in your .bashrc and create links in there to the binaries that you wish to "export" globally. I do this with a handful of programs that I wanted to use independently of Conda (e.g., emacs) even though they are installed and managed by Conda.
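As a concrete sketch (paths are illustrative; ~/miniconda3/bin/htop is the conda-installed binary from the question):

```shell
# Create a personal bin directory and symlink the conda-installed binary
# into it. If the target doesn't exist yet, this just leaves a dangling
# symlink until conda installs it.
mkdir -p "$HOME/.local/bin"
ln -sf "$HOME/miniconda3/bin/htop" "$HOME/.local/bin/htop"

# Then, once, in ~/.bashrc:
# export PATH="$HOME/.local/bin:$PATH"
```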
Dedicated Env
If you plan to do this with a bunch of software, then it might work to dedicate an env to such global software and just add its whole ./bin dir to $PATH. Do not do this with base - Conda wants to strictly manage that itself since Conda v4.4. Furthermore, do not do this with anything Python-related: stick strictly to native (compiled) software (e.g., htop is a good example). If an additional Python of the same version ends up on your $PATH this can create a mess in library loading. I've never attempted this and prefer the manual linking because I know exactly what I'm exporting.
Related
I have a conda environment with packages installed via conda install. I also have two local development packages that were each installed with pip install -e .. Now, conda env export shows everything, including both local development packages. However, I don't want conda to include them when creating the same environment on other machines - I want to keep doing it via pip install -e ..
So how can I exclude both local packages when creating the environment.yml file? Do I need to manually remove them, or is there a way to do this from the command line?
While there are some alternative flags for conda env export that change output behavior (e.g., --from-history is most notable), there really isn't anything as specific as OP describes. Instead, manually remove the offending lines.
Note that YAMLs do support all pip install commands, so the editable installs can also be included. For example, https://stackoverflow.com/a/59864744/570918.
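For instance, a YAML along these lines (name and path are illustrative) keeps the editable installs inside the spec instead of removing them:

```yaml
name: myenv
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - -e ./src/my_local_pkg  # editable install; path relative to the YAML
```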
Consider Prioritizing the YAML Specification
In a software engineering setting, I would expect that users should not even be hitting development environments with conda install or pip install commands. Instead, the team should have a manually-written, version-controlled YAML to begin with, and all installations/changes to the environment are managed by editing the YAML file and using conda env update to propagate changes in the YAML to the environment.
That is, conda env export should not be necessary because the environment already has a well-defined means of creation.
I'm new to anaconda and conda. I have created an identical environment in two different directories. Is it safe to just delete the env folder of the environment I no longer need, or do I need to do something in the anaconda prompt to remove the environment thoroughly? I'm not sure if creating an environment in a local folder leaves a trace in the registry or somewhere else on the computer that needs to be removed too.
conda remove --name myenv --all
Another option is
conda env remove --name myenv
Effectively no difference from the accepted answer, but personally I prefer to use conda env commands when operating on whole envs, and reserve conda remove for managing individual packages.
The difference between these and deleting the folder manually is that Conda provides action hooks for when packages are removed, and so allows packages to execute pre-unlink and post-unlink scripts.
I have an environment.yml file which I used to create a Python environment using:
conda env create --file environment.yml
After the environment is created, I need to perform some operations (such as registering a kernel with jupyter-lab):
ipython kernel install --name=to_the_edge
I would like to embed one or more shell commands to run "post install" so that the setup is self-contained within the .yml file. Is there a way to do this? Or is there a different way within conda to get close to what I'm after?
I would also like a way to specify shell commands to be run after conda activate, but that's a secondary hope.
Maybe this isn't possible because conda works cross-platform?
This isn't really possible with standard Conda commands, but there are some options to obtain such functionality.
Jupyter and Conda
The best practice for Jupyter and Conda is to have a single env that has jupyter installed and also has nb_conda_kernels. You always launch jupyter notebook from this env. The nb_conda_kernels package enables Jupyter to automatically detect any other envs that have ipykernel (or other language equivalents, e.g., r-irkernel). Hence, you don't need any additional registration, but simply need to include ipykernel in the YAML. See the docs for nb_conda_kernels.
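A minimal spec for such a detected env might look like this (name and Python version are illustrative):

```yaml
name: my_project
channels:
  - conda-forge
dependencies:
  - python=3.9
  - ipykernel  # enough for nb_conda_kernels to auto-detect this env
```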
Running scripts at install
This cannot be done from a YAML. However, you could build your own custom package that does this at install time and then include that in your YAML. You would have to provide the .sh, .bat, etc. to run the commands. See the documentation on adding pre-link, post-link, and pre-/post-unlink scripts to a package recipe.
Through this route, you can also add activate and deactivate scripts that are run when the env is activated and deactivated, respectively. You can also add such scripts manually, i.e., without a custom package. For example, the docs show how to define environment variables at activation, but you can run arbitrary scripts.
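As a sketch of the manual route (the exported variable is illustrative; normally $CONDA_PREFIX points at the active env, but the snippet falls back to a scratch dir so it also runs outside conda):

```shell
# Conda sources every *.sh in etc/conda/activate.d on activation and in
# etc/conda/deactivate.d on deactivation of the env at $prefix.
prefix="${CONDA_PREFIX:-$(mktemp -d)}"
mkdir -p "$prefix/etc/conda/activate.d" "$prefix/etc/conda/deactivate.d"
cat > "$prefix/etc/conda/activate.d/project_vars.sh" <<'EOF'
export MY_PROJECT_ROOT=/path/to/project
EOF
cat > "$prefix/etc/conda/deactivate.d/project_vars.sh" <<'EOF'
unset MY_PROJECT_ROOT
EOF
```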
I am running different conda envs and I'd like to specify where the packages are downloaded to, rather than having all of them in my $home.
I have found this question which, at time of writing, has no answers. However, my question is different: I don't want to specify the pkg_dir in my .condarc because I want to have a different download dir for each project (space is not an issue).
How do I define the pkg_dir for a specific conda env?
To note, I'm creating environments as conda env create -f my_env.yml -p complete-env.
A fundamental concept of conda is that packages get downloaded and extracted to a shared cache, from which they are selectively linked into the different conda environments. You want to work against this concept, so whatever you do will be hacky and have repercussions.
You could install a separate Miniconda for each of your projects, and (try to) make sure that they don't know about each other by removing all conda-related files and environment settings from your home directory, or even use a different HOME for each project. Before working with a project, you'd have to put the correct conda on the PATH.
Or you could install Miniconda on a dedicated drive apart from your home directory, and put the conda environments inside your home directory. That would prevent conda from hard-linking the files. It would still download the packages into the shared cache, but then copy only the relevant files into each of your projects. Of course, copying is slower than hard-linking.
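A quick way to check which of the two happened for a given file is its hard-link count (a sketch using GNU stat; the path in the comment is illustrative):

```shell
# link_count prints a file's hard-link count (GNU stat). A count > 1
# suggests the file is hard-linked into the shared package cache; 1 means
# it was copied into the env.
link_count() { stat -c '%h' "$1"; }

# e.g. link_count ~/miniconda3/envs/myenv/bin/python
```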
Specifying the package directory per environment rather than per conda installation is not possible, as darthbith has already pointed out in a comment.
I noticed that when a conda environment is created without specifying the python version:
conda create --name snowflakes
instead of:
conda create --name snowflakes python=3.6
the environment is not separated and shares packages with the default Python interpreter.
So, what is the use of non-separated anaconda environments?
EDIT - 20170824:
The question has been solved. As it turns out, non-separated environments do not exist. The first command installs no new Python interpreter, so the shell calls the first python it finds on the PATH, which is the default interpreter because there is no other.
I think you are misunderstanding the word "separate" in the docs. There, "separate" means "create a new environment, with a new name, to try some new things"; it does not mean that you are creating a different kind of conda environment. There is only one kind of environment in conda: what you are calling the "separated" environment. The packages in each environment are independent of those in every other environment.

It so happens that the first command creates an empty environment with no packages. When that environment is activated, the PATH environment variable looks like ~/miniconda3/envs/snowflakes/bin:~/miniconda3/bin:... Since there is no Python installed into ~/miniconda3/envs/snowflakes/bin (the snowflakes environment is empty), the shell still finds Python in ~/miniconda3/bin, the first match on the path.

The snowflakes environment does not share anything with the root environment. For instance, if, after creating it, you run conda install -n snowflakes python, it will install a new Python that won't find any of the root environment's packages.
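The lookup described above can be simulated with throwaway directories standing in for the env and base bin/ dirs (all names are illustrative):

```shell
# An "empty env" dir first on PATH does not shadow the "base" dir behind
# it; lookup simply takes the first match.
empty_env=$(mktemp -d)   # stands in for envs/snowflakes/bin: contains no python
base_bin=$(mktemp -d)    # stands in for miniconda3/bin
printf '#!/bin/sh\necho base\n' > "$base_bin/python"
chmod +x "$base_bin/python"

# With the "env" dir first but empty, resolution falls through to "base":
found=$(PATH="$empty_env:$base_bin:$PATH"; command -v python)
echo "$found"   # the python inside $base_bin
```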