Conda: How to prevent the mkl-to-openblas switch

Conda wants to downgrade my blas, lapack, etc. packages from an mkl to an openblas build. I understand that conda juggling mkl versus openblas is not an uncommon issue, yet I have not found a solution that does the job for me. I have these packages installed:
blas 2.113 mkl conda-forge
blas-devel 3.9.0 13_linux64_mkl conda-forge
libblas 3.9.0 13_linux64_mkl conda-forge
libcblas 3.9.0 13_linux64_mkl conda-forge
liblapack 3.9.0 13_linux64_mkl conda-forge
liblapacke 3.9.0 13_linux64_mkl conda-forge
mkl 2022.0.1 h06a4308_117
mkl-devel 2022.0.1 h66538d2_117
mkl-include 2022.0.1 h06a4308_117
mkl-service 2.4.0 py39h404a4ab_0 conda-forge
mkl_fft 1.3.1 py39h6964271_2 conda-forge
mkl_random 1.2.2 py39h8b66066_1 conda-forge
and I have a .condarc (on linux) containing
channels:
- conda-forge
- defaults
dependencies:
- python>=3.6
- numpy>=1.13
- scipy>=0.18
- cython>=0.29
- mkl
- mkl-devel
- libblas=*=*mkl
- bottleneck
- pip
- setuptools>=30.3.0
- h5py
- pyyaml
- pytest
ssl_verify: true
auto_activate_base: false
Moreover, in the conda-meta directories I have a pinned file containing the line libblas=*=*mkl. Yet, upon conda update --all, this is suggested:
The following packages will be DOWNGRADED:
... other pkgs ...
libblas 3.9.0-13_linux64_mkl --> 3.9.0-13_linux64_openblas
libcblas 3.9.0-13_linux64_mkl --> 3.9.0-13_linux64_openblas
liblapack 3.9.0-13_linux64_mkl --> 3.9.0-13_linux64_openblas
Why, despite the .condarc and pinned files, am I getting this switch from mkl to openblas, and what else can I do to prevent it?

I'm posting a quirky workaround for my own OP. This finally worked for me.
First, no suggestion I tried, including those found outside of Stack Overflow, could convince conda to respect the channel ordering in .condarc or the contents of the pinned files before running the update.
Second, I stored a conda list | grep mkl, a conda list | grep intel, and a conda list | grep open away for later reference. Then I "gave in" and let the "upgrade" happen, running conda update --all. Needless to say, after that my environment indeed showed the unwanted replacement of all mkl-type libraries with openblas stuff.
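For reference, that bookkeeping boiled down to something like this (the file names are just my choice):
conda list | grep mkl   > pkgs_mkl_before.txt
conda list | grep intel > pkgs_intel_before.txt
conda list | grep open  > pkgs_open_before.txt
conda update --all      # the "upgrade" that swaps in openblas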
Third, within the openblas-infested environment, I "re-installed" mkl:
conda install blas=*=*mkl
conda install libblas=*=*mkl
conda update numpy
conda update scipy
conda install intel-openmp # the "update" had also removed this ...
Also make sure that no openblas stuff remains, by running conda remove on whatever related packages are left, as sketched below. (I'm not claiming that really all of the above commands are necessary to reach the original state of the environment regarding mkl. But that's what I did.)
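For illustration only, the cleanup could look like this (the package names are whatever your own listing turns up, not a fixed recipe):
conda list | grep open       # see which openblas-related packages remain
conda remove libopenblas     # example package name; repeat for each leftover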
Fourth, comparing against the reference notes from the second step, I checked that at this point my environment claimed to be back to "all-mkl". Moreover, using this extremely helpful site http://markus-beuckelmann.de/blog/boosting-numpy-blas.html I also checked that this was indeed true in terms of the typical mkl timings to be expected.
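If you want a quick timing check of your own, here is a minimal sketch along the lines of that blog post (the matrix size is arbitrary; with an mkl-linked numpy the product should run markedly faster than with a reference BLAS):
import time
import numpy as np

n = 4096
a = np.random.random((n, n))
b = np.random.random((n, n))

t0 = time.time()
a @ b  # large matrix product, dominated by the BLAS backend
print('matmul took %.2f s' % (time.time() - t0))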
As an aside, there is a really weird and confusing issue which may not be related to the OP, but which I stumbled across in this context: on the web one finds many, many quotes stating that for numpy or scipy to actually be "using" mkl, one has to see this kind of output
In []: numpy.show_config()
blas_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/home/abcd/efgh..../lib']
from numpy.show_config() or scipy.show_config(). This seems not to be true in general. In fact, when one gets
In []: numpy.show_config()
blas_info:
libraries = ['cblas', 'blas', 'cblas', 'blas']
library_dirs = ['/home/abcd/efgh..../lib']
this is no cause for panic as long as in /home/abcd/efgh..../lib one finds everything linked as
liblapacke.so -> libmkl_rt.so.2*
libblas.so -> libmkl_rt.so.2*
...and so on,
which I do.
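A quick way to inspect those links is to list the libraries of the active environment (assuming $CONDA_PREFIX points at it):
ls -l "$CONDA_PREFIX/lib" | grep -i -e blas -e lapack   # the symlinks should point at libmkl_rt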
(conda is just soo painful. sigh)

Package management can be a mess whenever the Intel MKL package has been updated but the numpy package has not: I may end up with Intel MKL installed while numpy is using OpenBLAS. To get this working in my setup, I often do this when creating a new environment:
conda create -y --name test python=3 numpy mkl cmake blas=*=*mkl
so that MKL is installed at the same time as numpy, and numpy will use MKL. This often results in an older version of MKL, since numpy was linked against an older version of it.
Without the blas=*=*mkl, I will often end up with an OpenBLAS-based numpy installed, and the newest version of the Intel MKL not being used by anything.
conda create -n test33 numpy mkl blas=*=*mkl
.
.
.
The following NEW packages will be INSTALLED:
blas pkgs/main/osx-64::blas-1.0-mkl
.
numpy pkgs/main/osx-64::numpy-1.21.5-py39h2e5f0a9_1
.
mkl pkgs/main/osx-64::mkl-2021.4.0-hecd8cb5_637
.
# results in:
blas_opt_info:
libraries = ['mkl_rt', 'pthread']
conda create -n test33 numpy mkl
.
.
.
The following NEW packages will be INSTALLED:
blas pkgs/main/osx-64::blas-1.0-openblas
.
numpy pkgs/main/osx-64::numpy-1.21.5-py39h9c3cb84_1
.
mkl pkgs/main/osx-64::mkl-2022.0.0-hecd8cb5_105
.
# results in:
blas_opt_info:
libraries = ['openblas', 'openblas']
Unfortunately, numpy does not appear to encode the BLAS library it is linked against in its version string. Oftentimes, you will end up with both openblas and mkl in your environment.
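So a quick sanity check after creating the environment never hurts, e.g.:
python -c 'import numpy; numpy.show_config()'   # look for mkl_rt vs. openblas in the output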

See the note section at the end of https://conda-forge.org/docs/maintainer/knowledge_base.html#switching-blas-implementation
If you want to commit to a specific blas implementation, you can prevent conda from switching back by pinning the blas implementation in your environment. To commit to mkl, add blas=*=mkl to <conda-root>/envs/<env-name>/conda-meta/pinned, as described in the conda-docs.
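As a sketch, appending that pin to the currently active environment could be done like this (adjust the path if you edit the file by hand):
echo 'blas=*=mkl' >> "$CONDA_PREFIX/conda-meta/pinned"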

Related

Version of a built `conda-forge` package is different between `pip list` and `conda list` (it should be the same)

I recently added the package typepigeon to conda-forge. On conda-forge it is currently at version 1.0.9; however, when installing typepigeon via conda install, the output of pip list shows its version to be 0.0.0.post2.dev0+a27ab2a instead of 1.0.9.
conda list:
typepigeon 1.0.9 pyhd8ed1ab_0 conda-forge
pip list:
typepigeon 0.0.0.post2.dev0+a27ab2a
I think the issue arises from the way I am assigning the version (I am using dunamai to extract the Git tag as the version number). This version extraction is done within setup.py of typepigeon.
import warnings

from dunamai import Version

try:
    __version__ = Version.from_any_vcs().serialize()
except RuntimeError as error:
    warnings.warn(f'{error.__class__.__name__} - {error}')
    __version__ = '0.0.0'
When conda-forge builds the feedstock, I think it might be looking at the Git tag of the feedstock repository instead of the version from PyPI (as it is locally executing setup.py).
How can I modify the Conda Forge recipe to force the PyPI version?
I've figured out a solution; it might not be the best possible way to do this, but it works for my workflow.
I injected the version into the setup.py by looking for an environment variable (that I called __version__):
import os
import warnings

if '__version__' in os.environ:
    __version__ = os.environ['__version__']
else:
    from dunamai import Version

    try:
        __version__ = Version.from_any_vcs().serialize()
    except RuntimeError as error:
        warnings.warn(f'{error.__class__.__name__} - {error}')
        __version__ = '0.0.0'
Then, in the conda-forge recipe, I added an environment variable (__version__) to the build step:
build:
  noarch: python
  script: export __version__={{ version }} && {{ PYTHON }} -m pip install . -vv
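After rebuilding the feedstock with this change, the installed package should report the real version again; a quick way to confirm (using this package as the example):
pip list | grep typepigeon   # expect the tagged version, e.g. 1.0.9, instead of 0.0.0.post2.dev0+...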

How do I update xarray?

How can I update xarray? I tried:
>>> import xarray
>>> xarray.show_versions
<function show_versions at 0x7fcfaf2aa820>
But I cannot find any documentation on how to read this, or on how to update to a new version of xarray.
I was not the person to install it on the computer, so I do not know if it was through anaconda or something else. Is there a way to find this out?
xarray.show_versions is a function that prints the versions of xarray and its dependencies.
To get just the version of xarray, you can check the __version__ property of the module.
Updating xarray is best done with pip or conda, depending on how you installed it in the first place.
import xarray as xr
print(xr.__version__)
# '0.18.2'
xr.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.8.8 (default, Feb 19 2021, 18:07:06)
[GCC 8.3.0]
python-bits: 64
OS: Linux
OS-release: 5.11.0-27-generic
machine: x86_64
processor:
byteorder: little
LC_ALL: C.UTF-8
LANG: C.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.0
libnetcdf: 4.7.4
xarray: 0.18.2
pandas: 1.2.4
numpy: 1.20.3
scipy: 1.6.3
netCDF4: 1.5.6
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: 2.8.3
cftime: 1.5.0
nc_time_axis: None
PseudoNetCDF: None
rasterio: 1.2.3
cfgrib: None
iris: None
bottleneck: 1.3.2
dask: 2021.05.0
distributed: 2021.05.0
matplotlib: 3.4.2
cartopy: None
seaborn: None
numbagg: None
pint: None
setuptools: 53.0.0
pip: 21.1.1
conda: None
pytest: None
IPython: 7.23.1
sphinx: None
To update xarray:
pip install --upgrade xarray
or
conda update xarray
To see if it was installed using conda or pip, run conda list xarray. If it was installed using pip, it should state pypi in the Channel column.
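For example (illustrative output; your version and build string will differ):
conda list xarray
# Name     Version   Build    Channel
# xarray   0.18.2    pypi_0   pypi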
This is for those who want to do it through a GUI and who use software like PyCharm, Spyder, or other similar tools.
Try finding 'Python Interpreter' in the settings. Most of these tools show the existing packages with their current and latest versions (PyCharm, for example, shows this in its interpreter settings).
There is an option to select the version that you want. For example, there are times when a module is in its beta phase and is not stable in usage, so you can specify the latest stable version instead. This is applicable to any module and is not limited to xarray.

Conflicts during PyGMO installation on Mac OS X 11.2.2 with Anaconda

I am attempting to install PyGMO on Mac OS X 11.2.2 (with Anaconda, which I reinstalled, so Anaconda Navigator is now upgraded to 2.0.1).
After the installation starts, it collects package metadata and reports it found package conflicts. How can I solve the conflict so that I can run PyGMO?
Here is the start:
$ conda install -c conda-forge pygmo
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: \
Found conflicts! Looking for incompatible packages.
After a few hours, the Terminal returns a long report of conflicts and stops there. Here is a representative piece of the output:
Package selectors2 conflicts for:
wurlitzer -> selectors2
spyder-kernels -> wurlitzer[version='>=1.0.3'] -> selectors2
Package mpmath conflicts for:
anaconda==2020.07=py38_0 -> sympy==1.6.1=py38_0 -> mpmath[version='>=0.19']
sympy -> mpmath[version='>=0.19']
anaconda==2020.07=py38_0 -> mpmath==1.1.0=py38_0
Package anyio conflicts for:
jupyterlab -> jupyter_server[version='>=1.4,<2'] -> anyio[version='>=2.0.2|>=2.0.2,<3']
jupyterlab_server -> jupyter_server[version='>=1.4,<2'] -> anyio[version='>=2.0.2|>=2.0.2,<3']
Package py-lief conflicts for:
conda-build -> py-lief
anaconda==2020.07=py38_0 -> py-lief==0.10.1=py38haf313ee_0
Note that strict channel priority may have removed packages required for satisfiability.
I followed the official installation guidelines and set the additional channel and its priority. I also checked this command, but that is essentially the same thing. I also tried the installation commands from PyPI, and I tried this hint as well.
There are two possible states:
Conda solver is correct. The previous package constraints you have in the environment are incompatible with installing pygmo. In that case, you either need to track down the conflicting constraints and try to manually loosen them (not recommended for Anaconda base), or you need to make a new environment:
conda create -n pygmo_env -c conda-forge pygmo
Include whatever other packages you need in there. E.g., ipykernel if you plan on using it as a Jupyter kernel.
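For example (the environment name and the extra package are just placeholders for your own choices):
conda create -n pygmo_env -c conda-forge pygmo ipykernel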
Conda solver is bugging out. The solver is reporting trouble solving when it really shouldn't be. This happens, and especially happens when mixing channels (defaults and conda-forge). Many find Mamba, the drop-in replacement for Conda, to be more reliable (and definitely faster!).
conda install conda-forge::mamba
mamba install -c conda-forge pygmo
Unfortunately, it's hard to tell which state it's in. Many of us have been down the rabbit hole of trying to sort through the constraint reports and sometimes there really isn't a sensible conflict to be found. For practical purposes, I'd recommend trying out mamba. If it also fails, then at least you'll have good evidence that you're in state (1).
Additional Commentary
Despite upbeat documentation about installing from any channel in Anaconda Cloud, an Anaconda distribution is highly constrained - i.e., it has too many packages - and co-installation is only tested for packages from the defaults channel. Additionally, Conda Forge and Anaconda have different build stacks, so there can be runtime package incompatibilities even when the solver allows co-installation.
Generally, I'd recommend making liberal use of environment creation. Aim to have separate environments for separate tasks/projects. If you plan on frequently using more than a vanilla Anaconda distribution, consider Miniforge or one of its variants. One can always create an Anaconda environment with conda create -n foo -c defaults anaconda.

How to install deprecated/unsupported Python 3.4 on conda environment?

Since the deprecation of Python 3.4, conda has removed it from its package list. Is there a way, however, that I can install it?
I need it in order to use software written in this older version.
EDIT:
My question is different from the suggested duplicate, because I am referring to deprecated and unsupported versions. I already know how to create a conda environment with a specific Python version, but executing:
conda create --name py34env python=3.4
results in an error (listed at the end), which is due to the lack of a package for Python 3.4.
One can see the currently supported versions of Python by executing conda search python, and can confirm that Python 3.4 is not on the list.
This is the output of the error when trying to create a Python 3.4 conda environment:
$ conda create --name py34env python=3.4
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- python=3.4
Current channels:
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
When Anaconda dropped its free channel (technically, Conda 4.7+ just no longer looks there), some older package versions that had never been ported to main stopped being accessible.
Option 1: Globally enable free channel searching
However, there is an option to restore access to the free channel, namely restore_free_channel.
# Not generally recommended
conda config --set restore_free_channel True
conda create -n py34 python=3.4
This isn't generally recommended (see blog post), but if you will be working in Python v3.4 frequently and will require other older compatible packages, it might be the best option.
Option 2: Temporarily include free channel
A more temporary solution is to include the free channel using the ad hoc --channel,-c argument. For example,
# slightly better
conda create -n py34 -c defaults -c free python=3.4
Note that I include defaults prior to free so that the latter will only be used if the package cannot be sourced from the former. This assumes the channel_priority setting is set to flexible (the default).
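You can verify that setting before trying:
conda config --show channel_priority   # should print: channel_priority: flexible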
Option 3: Use Conda Forge
Alternatively, Conda Forge has Python v3.4.5, and that won't force you to change a global configuration option.
conda create -n py34 -c conda-forge python=3.4

Installing numpy on mac with pip: "requirements already satisfied" but "No module numpy"

I have Python 2.7.8 on Mac; things I did:
sudo easy_install pip - worked.
pip install numpy:
Requirement already satisfied (use --upgrade to upgrade): numpy in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python
I also did "pip upgrade numpy" - no luck. What's wrong?
Your problem is a conflict of different Python versions.
I would recommend installing Python and all the packages, such as numpy, scipy, matplotlib, pandas, etc., via Homebrew.
See this tutorial: https://github.com/Homebrew/homebrew/blob/master/share/doc/homebrew/Homebrew-and-Python.md
You can verify which Python you're running with which python or which python3 in the Terminal.
This solution is more flexible and cleaner, in my opinion, than using Conda/Miniconda. However, it is also a bit more lengthy to install, as you need to have Xcode and its developer tools installed to build everything.
Could it be that you have multiple versions of python installed? What happens if you run python using the full path like this:
$ /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2
instead of just python2?
In my experience on Mac (and other OSes too), it is best to go with Anaconda/Miniconda. This is especially true for packages like NumPy and others from the scientific stack.
While Anaconda is a full-blown distribution with about 200 packages, Miniconda is just Python with a few basic libraries. The big advantage is that all packages install as binaries. Further, it makes it very simple and stable to install multiple Python versions side by side. For example:
conda create -n py27 python=2.7
creates a new environment with Python 2.7. Activate with:
source activate py27
Now:
conda install numpy
installs NumPy cleanly.
You can do the same for Python 3.5 and switch between environments with source activate.
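For instance (a minimal sketch; the environment name is arbitrary):
conda create -n py35 python=3.5
source activate py35
conda install numpy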
After jumping from one Stack Overflow answer to another, I found the solution!
My problems were:
numpy was at a different location (actually at the right, expected-to-be location); it was IDLE that was looking in its own default folder where Python 2.7 is installed.
I checked that my numpy is working like this; run this script to check that it works:
import os
import sys

# make sure the system-provided numpy location is searched first
sys.path.insert(0, '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python')

import numpy
import pygame

pygame.init()
print "( using __version__): " + numpy.__version__
print numpy.version.version
user_paths = os.environ.get('PYTHONPATH', '')  # avoid a KeyError if PYTHONPATH is unset
print(user_paths)
The sys.path insertion adds an additional path for IDLE, so it knows where to look for numpy.
Then I check whether numpy was truly imported; I just print its version. Right now it is 1.8.0rc.
I want to find a way to avoid using this sys.path insertion all the time.
So far so good - for now.
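One standard way to avoid the in-script insertion (an assumption on my part; I have not verified it with IDLE specifically) is to set PYTHONPATH in the shell profile instead:
# e.g. in ~/.bash_profile
export PYTHONPATH="/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python:$PYTHONPATH"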
I had a similar problem with numpy. However, it was resolved by choosing the right environment. If you are using VS Code, open the command palette (Ctrl+Shift+P) and type
Python: Select Interpreter
From there, try choosing the right virtual environment/interpreter.
