Activate/deactivate conda virtualenvs on entering/leaving directories - bash

pyenv-virtualenv offers a nice way of activating an environment the instant you enter (or leave) a directory that contains a .python-version text file naming the environment to activate. It works for that directory and all directories contained in it.
The environment is deactivated once you change to a directory above it. This makes it easy to switch between projects or analyses that use different Python versions, just by changing directories.
Is there a way of achieving the same behaviour with (ana)conda?
Edit: added the bash tag because - as far as I understand - pyenv achieves this by hooking a custom script into .bashrc (which allows it to monitor directory changes). If there is no built-in way in conda, how would one create a script that makes this possible?

As mentioned in my comment, this is currently not supported. There is however an open issue on conda's GitHub asking for this feature.
In the meantime you could use autoenv, a small tool that automatically runs the code in a .env file when entering a directory, and the code in a .env.leave file when leaving it (supports bash/zsh and a couple of others).
A simple example taken from their readme which illustrates the feature quite nicely:
$ echo "echo 'whoa'" > project/.env
$ cd project
whoa
To load a conda environment your .env would simply look like this:
conda activate <my_env>
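If you also want the environment deactivated when you leave the project tree, a matching .env.leave could contain just the following (a sketch, assuming conda activate/deactivate work in your shell, i.e. conda init has been run):
conda deactivate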
Note 1: Check out the Configuration section of their GitHub readme before you start using it.
Note 2: The author of autoenv actually suggests trying direnv instead. However I've never used it, so I can't comment on it.
From autoenv's readme:
you should probably use direnv instead. Simply put, it is higher quality software. But, autoenv is still great, too. Maybe try both? :)
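If you would rather not install an extra tool, the same idea can be hand-rolled in .bashrc via PROMPT_COMMAND. The snippet below is only a sketch: it assumes a hypothetical .conda-env file naming the environment (analogous to pyenv's .python-version) and that conda activate / deactivate are available in the shell (i.e. conda init bash has been run):
# ~/.bashrc (sketch): activate the env named in .conda-env when entering a
# directory tree, deactivate again when leaving it
_conda_auto_env() {
  local dir="$PWD" envfile=""
  while [ "$dir" != "/" ]; do                 # walk upwards looking for .conda-env
    [ -f "$dir/.conda-env" ] && { envfile="$dir/.conda-env"; break; }
    dir="$(dirname "$dir")"
  done
  if [ -n "$envfile" ]; then
    local env_name
    env_name="$(cat "$envfile")"
    [ "$CONDA_DEFAULT_ENV" != "$env_name" ] && conda activate "$env_name"
  elif [ -n "$CONDA_DEFAULT_ENV" ] && [ "$CONDA_DEFAULT_ENV" != "base" ]; then
    conda deactivate                          # left the tree: drop back out
  fi
}
PROMPT_COMMAND="_conda_auto_env${PROMPT_COMMAND:+; $PROMPT_COMMAND}"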

Related

Layman's explanation of environment variables and OS shells

Web development is new to me and I'm trying to grasp the meaning and usage of environment variables once and for all. In my research the simplest explanation I've come across is that they are comparable to 'configuration settings'.
Through the terminal I've been exploring what feels like the computer's innards by typing printenv and so on.
But I'm still not sure when it is necessary to set up an environment variable. For example, I use fish as my shell. Often when I try to do an npm install it seems like the package didn't take. Here is a recent example:
user@iMac-van-user ~/P/v/v/public_html> npm install -g modernizr
/usr/local/Cellar/node/10.4.0/bin/modernizr -> /usr/local/Cellar/node/10.4.0/lib/node_modules/modernizr/bin/modernizr
+ modernizr@3.6.0
updated 1 package in 2.316s
user@iMac-van-user ~/P/v/v/public_html> modernizr
When I try to use modernizr as a command, fish tells me that modernizr is an unknown command and the color remains red (valid commands show up in white in fish). So I suspect that modernizr will only be available and valid once I've set up the configs. This has happened many times with various attempts to install package managers and things like composer, vue-cli, etc. My failure to get it working boils down to my meager knowledge of environment variables and what they do, I think.
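For context, the shell only finds commands that live in directories listed in the PATH variable; a quick way to inspect this in fish (a sketch, assuming npm's global prefix is where modernizr ended up) is:
echo $PATH                # directories fish searches for commands
npm prefix -g             # npm's global install prefix
ls (npm prefix -g)/bin    # global npm binaries, e.g. modernizr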
This is from the documentation on the modernizr site: modernizr -c modernizr-config.json
Note that you will need to give the command line config the file path
to the configuration you downloaded from the site. In the above
example, we are running the modernizr command from the same folder
that we downloaded the modernizr-config.json file to.
What does the sentence: "Note that you will need to give the command line config the file path to the configuration you downloaded from the site" imply? I copied the file into my project folder but there is no change.
If there is someone who can explain the following to me in layman's terms, like you would to a 5-year-old, that would be great:
use of environment variables
setting up configs - why, where and how (I've done it once through VIM)
how to know when to set up environment variables
Thank you in advance.
Developing on macOS 10.

What is the difference between activating an anaconda environment and running its python executable directly?

I have set up multiple Python environments using Anaconda.
Usually, to run a script "manually", I would open a command line and then type:
activate my-env
python path/to/my/script.py
Fine.
Now I am trying to run a script automatically using a scheduler and I was wondering what the difference was between
Writing a batch file which activates the environment and then executes the script (like in the snippet above)
Calling the python executable from the environment directly (within the envs/my-env/ directory), like below:
/path/to/envs/my-env/python.exe path/to/my/script.py
Both seem to work fine. Is there any difference?
I don't claim to be an expert but here's my 2 cents.
For small scripts, no, there isn't a difference.
You should notice a difference when calling external modules / packages. conda activate alters the system PATH, which changes where the command shell looks for executables and libraries.
If you supply the full path to an interpreter and the full path to an isolated script, the shell doesn't need to do a lookup, as the explicit paths take priority over the PATH. This means you could be in a situation where the interpreter can see the script but cannot see its dependencies.
If you follow the conda activate process, and the environment is correctly packaged, then the shell will be able to trace any additional resources.
EDIT: The idea behind this is portability. If an admin has been careful in setting up a system, then scripts should have the appropriate visibility - i.e. see everything in their environment plus everything in the main system installation.
It's possible to full-path every call to an interpreter and a script or package location, but then what happens when you need to move it to another machine? You would need to spend a lot of time setting everything up exactly as it was before. On the other hand, you can follow the package process and the system path will trace everything for you.
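As an illustration, a batch file for the scheduler that follows the activate route might look like this (a sketch; the conda install path, environment name and script path are placeholders, not taken from the question):
@echo off
rem activate the environment, then run the script with its interpreter
call C:\Anaconda3\Scripts\activate.bat my-env
python C:\path\to\my\script.py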
Simply check out the PATH variable in your environment. After conda activation it has been extended by:
\Anaconda3;
\Anaconda3\Library\mingw-w64\bin;
\Anaconda3\Library\usr\bin;
\Anaconda3\Library\bin;
\Anaconda3\Scripts;
\Anaconda3\bin;
This doesn't make much of a difference if you are just using the standard library in your code. However, if you rely on external packages like pandas, it is a prerequisite for those modules to be found.
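You can see this for yourself by printing PATH before and after activation (Windows cmd shown here, assuming conda has been initialised for your shell; on Linux/macOS the equivalent is echo $PATH):
echo %PATH%
conda activate my-env
echo %PATH%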

Making Sphinx documentation inside of a virtual environment with cron

I have an application development server that is automatically updated every night with a massive shell script that we run with crontab. The script specifies #!/bin/sh at the top of the file and I am not able to change that. The basic purpose of the script is to go through the machine and download the latest code in each of the directories that we list in the script. After all of the repositories are updated, we execute a number of scripts to update the relevant databases using the appropriate virtual environment (Django manage.py commands) by calling that virtualenv's python directly.
The issue that I am having is that we have all the necessary Sphinx plugins installed in one of the virtual environments to allow us to build the documentation from the code at the end of the script, but I cannot seem to figure out how to allow the make command to run inside of the virtualenv so that it has access to the proper packages and libraries. I need a way to run the make command inside of the virtual environment and if necessary deactivate that environment afterwards so that the remainder of the script can run.
My current script looks like the below and gives errors on the last three lines, because sh does not have workon or deactivate, and because make can't find sphinx-build.
cd ${_proj_root}/dev/docs
workon dev
make clean && make html
deactivate
I was able to find the answer to this question here. The error message that is shown when you attempt to build the sphinx documentation from the root is as follows, and leads to the answer that was provided there:
Makefile:12: *** The 'sphinx-build' command was not found. Make sure
you have Sphinx installed, then set the SPHINXBUILD environment
variable to point to the full path of the 'sphinx-build' executable.
Alternatively you can add the directory with the executable to your
PATH. If you don't have Sphinx installed, grab it from
http://sphinx-doc.org/. Stop.
The full command for anyone looking to build Sphinx documentation through a cron job, when all tools are installed in various virtual environments, is listed below. You can find the location of your python and sphinx-build commands by using which while the environment is activated.
make html SPHINXBUILD='<virtualenv-path-to>/python <virtualenv-path-to>/sphinx-build'
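Put together, the relevant part of the nightly script could then look like this (a sketch; the virtualenv path is a placeholder for wherever which python and which sphinx-build point while the environment is activated):
cd ${_proj_root}/dev/docs
SPHINXBUILD='/path/to/virtualenv/bin/python /path/to/virtualenv/bin/sphinx-build'
make clean SPHINXBUILD="$SPHINXBUILD" && make html SPHINXBUILD="$SPHINXBUILD"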

Source ~/.vim directory from another location than home directory

To install a vimrc system-wide, I execute vim --version and look where I can put it.
But what steps do I have to take to change the location of the ~/.vim directory, which contains e.g. autoload with pathogen, plugins, etc.? The more I think about it, the less I understand why it is sourced at startup anyway.
Take a look at /etc/vimrc and /usr/share/vim or /usr/share/vim74. But it is not a good idea to change those files, because the next time you upgrade vim they could be overwritten by your package manager, unless you upgrade manually or never update vim.
Edit
Just noticed that the question was tagged with osx... the above is based on Linux. If it doesn't help, I am going to remove it.
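If the goal is to load the contents of ~/.vim from somewhere else entirely, the 'runtimepath' option is what tells vim where to look for autoload/, plugin/ and the other runtime directories; a sketch (the /opt/vimfiles location is just an example) that could go in whichever vimrc is actually read:
" search /opt/vimfiles (and its after/ directory) before the defaults such as ~/.vim
set runtimepath^=/opt/vimfiles
set runtimepath+=/opt/vimfiles/after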

Creating .deb to install bash script program

I was wondering if the following is possible.
I have a BASH script that I want to make available to some people, but I want them to only have to "install" the program and not mess around with the terminal, so I thought a .deb would be cool.
So what would the "install" do?
Simple. I want to move the script and an icon to a folder (any folder, but I was thinking of some hidden folder in Home) and then run a script that creates a launcher in the Applications menu for the first script. It seems there isn't much to it, but from what I've searched, there doesn't seem to be a lot of info...
How can I accomplish this?
By the way, I'm using Ubuntu 11.04.
Basically (install and) run dh_make to set up the debian/ directory, edit the generated files (mainly remove the many you do not need, and fill in a package description and any dependencies in debian/control), then run debuild -us -uc -b.
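Roughly, the command sequence could look like this (a sketch; myscript and the 1.0 version are placeholder names):
cd myscript-1.0/
dh_make --single --native     # from the dh-make package; generates debian/
# edit debian/control and debian/rules, delete the template files you don't need
debuild -us -uc -b            # build an unsigned binary package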
You may also have to set up a simple Makefile for debian/rules to call; it probably only needs an install target to copy the binary to $(DESTDIR)/usr/bin.
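A minimal Makefile for that could look like this (a sketch; myscript is a placeholder for your script's name):
install:
	install -D -m 0755 myscript $(DESTDIR)/usr/bin/myscript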
Binaries install into /usr/bin and you should not try to override that. The way to get a menu entry is to add a .desktop file.
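A minimal .desktop file (a sketch; the name, icon and path are placeholders), installed to /usr/share/applications/, would look like:
[Desktop Entry]
Type=Application
Name=My Script
Exec=/usr/bin/myscript
Icon=myscript
Categories=Utility;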
Once you have a good .deb you will need to set up a repo for distributing it. The simplest solution is probably to set up a launchpad.net account and create a personal PPA there.
It's not hard to find more information on these topics, but of course, you need to know what to look for. The canonical documentation is the Debian New Maintainer's Guide.
Found this video on YouTube that explains IN FULL the process of creating a *.deb for a script or program, and even mentions how to do it for a C program.
Full guide on how to build a simple *.deb package
It has one bug, btw, which the author didn't notice while making the *.deb: the path in the Exec entry of the *.desktop file is wrong in the example.

Resources