I am trying to build (and later upload) a conda package containing a custom program I have developed in C++.
Simplifying the problem, I have the following meta.yaml:
package:
  name: CoolName
  version: "1.0.0"

source:
  path: ./source

requirements:
  build:
    - make
and the following build.sh:
make
I have two questions here:
1) How and where should I copy the binary which is a result of the make compilation so that it is indeed recognized upon environment activation?
2) How should I specify g++ as a dependency? I would like this package to later be available for linux-64 and osx-64... In the build process (in the Makefile) I am using only g++.
Edit
I have modified my build script to have:
make
mkdir -p $PREFIX/bin
cp my_binary $PREFIX/bin/my_binary
And now conda-build succeeds. However, when I later try to install the package locally with conda install --use-local, I get:
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
But this is not true: my binary is not installed anywhere and is not recognized...
How and where should I copy the binary which is a result of the make compilation so that it is indeed recognized upon environment activation?
As you mentioned in your edit, install somewhere within ${PREFIX}
How should I specify g++ as a dependancy?
To use conda-supplied compilers (rather than your system compiler), use this:
requirements:
  build:
    - {{ compiler('cxx') }}
I would like to have this package be later available for linux-64 and osx-64... In the building process (in the Makefile) I am using only g++.
Note: On Mac, it will use clang++, not g++. Make sure your Makefile respects the ${CXX} environment variable instead of hard-coding g++.
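For example, here is a minimal sketch of a build.sh that passes the conda-provided compiler through to make (this assumes your Makefile invokes the compiler via $(CXX) rather than a literal g++; conda-build exports CXX once {{ compiler('cxx') }} is in the build requirements):
# Use the compiler activated by conda-build instead of the system g++
make CXX="${CXX}"
mkdir -p "${PREFIX}/bin"
cp my_binary "${PREFIX}/bin/my_binary"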
However, when I later try to install the package locally with conda install --use-local I get:
That is strange. conda install --use-local CoolName should do what you want. But here are some things to try:
Double-check the contents of the environment you're trying to install it into:
conda list
Try installing to a fresh environment:
conda create -n my-new-env --use-local CoolName
Delete any obsolete versions of the package you might have created before you successfully built the package:
# Inspect the packages you've created,
# and consider deleting all but the most recent one.
ls $(conda info --base)/conda-bld/linux-64/CoolName*.tar.bz2
...then try running conda install again.
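If a stale or broken build is what conda keeps finding, conda-build's purge subcommands can clean up (a hedged suggestion; purge-all also deletes previously built packages from conda-bld, so use it with care):
# Remove intermediate build artifacts only:
conda build purge
# Or additionally delete all previously built packages:
conda build purge-all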
Related
I recently developed a package my_package and am hosting it on GitHub. For easy installation and use, I have the following setup.py:
from setuptools import setup

setup(name='my_package',
      version='1.0',
      description='My super cool package',
      url='https://github.com/my_name/my_package',
      packages=['my_package'],
      python_requieres='3.9',
      install_requires=[
          'some_package==1.0.0'
      ])
Now I am trying to install this package in a conda environment:
conda create --name myenv python=3.9
conda activate myenv
pip install git+'https://github.com/my_name/my_package'
So far so good. If I try to use it in the project folder, everything works perfectly. If I try to use the package outside the project folder (still inside the conda environment), I get the following error:
ModuleNotFoundError: No module named 'my_package'
I am working on Windows, if that matters.
EDIT:
I'm verifying that both python and pip are pointing towards the correct version with:
which python
which pip
/c/Anaconda3/envs/my_env/python
/c/Anaconda3/envs/my_env/Scripts/pip
Also, when I run:
pip show my_package
I get a description of my package. So pip finds it, but as soon as I try to import my_package in the script, I get the described error.
I also verified that the package is installed in my environment. So in /c/Anaconda3/envs/my_env/lib/site-packages there is a folder my_package-1.0.dist-info/
Further, python -c "import sys; print(sys.path)"
shows, among other paths, /c/Anaconda3/envs/my_env/lib/site-packages. So it is in the path.
Check whether you are using an explicit shebang in your script that points to another Python interpreter.
E.g., using the system default Python:
#!/usr/bin/env python
...
While inside your environment myenv, try to uninstall your package first, to do a clean test:
pip uninstall my_package
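A minimal clean-room check might look like this (a sketch reusing the names from the question):
# Reinstall from scratch, then import from outside the repository
pip uninstall -y my_package
pip install git+https://github.com/my_name/my_package
python -c "import my_package; print(my_package.__file__)"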
Also, you have a typo in your setup.py: python_requieres --> python_requires.
And I actually tried to install with your setup.py, and also got ModuleNotFoundError - but because it didn't properly install due to install_requires:
ERROR: Could not find a version that satisfies the requirement some_package==1.0.0
So, check also that everything installs without errors and warnings.
Hope that helps.
First thing I would like to point out (not the solution) regards the following statement you made:
If I try to use it in the project folder [...] If I try to use the packet outside the project folder [...]
I understand "project folder" means the "my_package" folder (inside the git repository). If that is the case, I would like to point out that you are mixing two situations: that of testing a (remote) package installation, while in your (local) repository. Which is not necessarily wrong, but error-prone.
Whenever testing the setup/install process of a package, make sure to move far from your repository (say, "/tmp/" equivalent in Windows) and, preferably, use a fresh environment. That will eliminate "noise" in your tests.
First thing I would tell you to do -- if not already done -- is to create a fresh conda env and install your package from an empty/new folder. E.g.,
$ conda create -n test_my_package ipython pip
$ cd /tmp  # or an equivalent temporary/new directory on your Windows
$ pip install git+https://github.com/my_name/my_package
If that doesn't work (maybe a problem with your pip git+https URL), try another way: create a release for your package (e.g., "v1") and then install the released version by pointing at the zip archive URL (which you can get from your "my_package" releases page on GitHub):
$ pip install https://github.com/my_name/my_package/archive/v1.zip
I am following this wiki page to build perf from source as below:
PYTHON=python3 make -C tools/perf install
where ~/bin will be the default install directory.
How can I change the install directory to, let's say, ~/bin/test? I already have another perf build in ~/bin, and I want to have the new build in a different directory.
I have tried to modify the Makefile (if that is how to do it), but I could not figure it out.
One last silly question: can I just move my current perf build to another directory, or will it screw up its links?
You should be able to easily install into a different directory by specifying prefix=... or DESTDIR=... when running make. You will see this and other info if you run make -C tools/perf help:
$ make -C tools/perf help
...
Perf install targets:
NOTE: documentation build requires asciidoc, xmlto packages to be installed
HINT: use "prefix" or "DESTDIR" to install to a particular
path like "make prefix=/usr/local install install-doc"
install - install compiled binaries
...
Make sure to pass an absolute path to avoid problems (you can use realpath for that):
PYTHON=python3 make -C tools/perf prefix=$(realpath ~/bin/test) install
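If you would rather stage the files under a root directory while keeping the default prefix layout, DESTDIR is the alternative (a sketch; files land under ~/bin/test/<default-prefix>/ rather than directly in ~/bin/test):
# DESTDIR prepends a staging root to the configured prefix
PYTHON=python3 make -C tools/perf DESTDIR=$(realpath ~/bin/test) install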
Using Windows 10 64-bit, Cabal-3.4.0.0, ghc-8.10.7.
I installed OpenBLAS in the MSYS2 environment with the command
pacman -S mingw-w64-x86_64-openblas
Then I successfully installed hmatrix-0.20.2 with the command
cabal install --lib hmatrix --flags=openblas --extra-include-dirs="C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\bin" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\lib"
I am trying to build a simple test project using cabal build cabalhmatrix, with Main:
module Main where

import Numeric.LinearAlgebra

main :: IO ()
main = do
  putStrLn $ show $ vector [1,2,3] * vector [3,0,-2]
But now I am getting output
Resolving dependencies...
Build profile: -w ghc-8.10.7 -O1
In order, the following will be built (use -v for more details):
- hmatrix-0.20.2 (lib) (requires build)
- cabalhmatrix-0.1.0.0 (exe:cabalhmatrix) (first run)
Starting hmatrix-0.20.2 (lib)
Failed to build hmatrix-0.20.2. The failure occurred during the configure
step.
Build log (
C:\cabal\logs\ghc-8.10.7\hmatrix-0.20.2-6dd2e8f2795550e4dd624770ac98c326dacc0cac.log
):
Warning: hmatrix.cabal:21:28: Packages with 'cabal-version: 1.12' or later
should specify a specific version of the Cabal spec of the form
'cabal-version: x.y'. Use 'cabal-version: 1.18'.
Configuring library for hmatrix-0.20.2..
cabal-3.4.0.0.exe: Missing dependencies on foreign libraries:
* Missing (or bad) C libraries: blas, lapack
This problem can usually be solved by installing the system packages that
provide these libraries (you may need the "-dev" versions). If the libraries
are already installed but in a non-standard location then you can use the
flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are. If
the library files do exist, it may contain errors that are caught by the C
compiler at the preprocessing stage. In this case you can re-run configure
with the verbosity flag -v3 to see the error messages.
cabal-3.4.0.0.exe: Failed to build hmatrix-0.20.2 (which is required by
exe:cabalhmatrix from cabalhmatrix-0.1.0.0). See the build log above for
details.
What should I do to correctly build that package?
I guess I need to somehow pass the arguments --flags=openblas --extra-include-dirs="C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\bin" --extra-lib-dirs="C:\\ghcup\\msys64\\mingw64\\lib" to hmatrix during compilation, but I don't know how to do that. To be honest, I don't understand which program exactly those arguments are for (cabal, ghc, ghc-pkg, or something else), or why cabal is trying to install hmatrix again. I see hmatrix in the directory "C:\cabal\store\ghc-8.10.7\hmatrix-0.20.2-e917eca0fc7690010007a19f4f2a3602d86df0f0".
Created a cabal.project file:
packages: .

package hmatrix
  flags: +openblas
  extra-include-dirs: C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS
  extra-lib-dirs: C:\\ghcup\\msys64\\mingw64\\bin, C:\\ghcup\\msys64\\mingw64\\lib
After adding the libopenblas.dll location to the PATH variable, the cabal project works.
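For reference, a sketch of what that PATH addition can look like from an MSYS2/Git Bash shell (the DLL lives in mingw64\bin, matching the --extra-lib-dirs above):
# Make libopenblas.dll discoverable at runtime
export PATH="/c/ghcup/msys64/mingw64/bin:$PATH"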
Even though there is the --lib flag, it's generally best to work under the assumption that Cabal doesn't do library installs. Never install a library; instead, just depend on it and have Cabal install, update, etc. it whenever necessary.
But then how can you pass the necessary flags? With a cabal.project file.
packages: .

package hmatrix
  flags: openblas
  extra-include-dirs: C:\\ghcup\\msys64\\mingw64\\include\\OpenBLAS
  ...
Put this file in the working directory of your own project, together with cabalhmatrix.cabal. Then running cabal build in that directory will use a hmatrix install with the suitable library etc. flags.
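For instance, assuming cabal.project sits next to cabalhmatrix.cabal:
# Run from the project directory; cabal picks up cabal.project automatically
# and rebuilds hmatrix with the flags given above.
cabal build cabalhmatrix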
I am having a similar issue to this problem.
I want to install Meson for Windows, so I used the following command:
pip3 install meson
This installs in my site-packages folder, specifically c:\users\user\appdata\local\packages\pythonsoftwarefoundation.python.3.8_qbz5n2kfra8p0\localcache\local-packages\python38\site-packages\mesonbuild
However, running meson or python3 meson.py results in an error:
'meson' is not recognized as an internal or external command, operable
program or batch file.
When looking at the mesonbuild directory within site-packages, I seem to be missing the meson or meson.py file. Has anybody ever come across this issue before?
After opening up Visual Studio, and looking at the installed Python packages in my environment, I noticed this interesting information window above the list of my Python packages:
Due to new security restrictions, installing from the internet may not
work on this version of Python.
After seeing this, I decided to install Meson through the website's MSI installer. Indeed, after trying to download the installer, Windows threw up all kinds of security warnings and "are you sure you want to do this" notifications before I convinced Windows that I really did want to install Meson.
I just wanted to share this with anybody that might have the same issues. The MSI installer worked for my needs.
Try the following:
python3 -m mesonbuild.mesonmain build
The Meson pip package contains the meson and mesonbuild modules. The meson module serves as the Python entry point: during the initial execution of setup.py, it associates mesonbuild.mesonmain:main with the command-line name 'meson' (see "Explain Python entry points?"). To invoke Meson via python3, use python3 -m mesonbuild.mesonmain build, which writes the build config into a 'build' directory (provided there is a meson.build file in the current directory). There is no file 'meson.py' in the mesonbuild module, and the meson module does not contain any Python code.
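To see why the bare meson command is not recognized, you can ask Python where pip places console scripts; with the Windows Store Python from the question, that Scripts directory is typically not on PATH (a diagnostic sketch):
# Prints the directory where the 'meson' entry-point executable was generated
python3 -c "import sysconfig; print(sysconfig.get_path('scripts'))"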
I'm working on a support library for a large Python project which heavily uses relative imports by appending various project directories to sys.path.
Using The Hitchhiker's Guide to Packaging as a template, I attempted to create a package structure that allows me to do a local install, but can easily be changed to a global install later if desired.
One of the dependencies of my package is the pyasn1 package for encoding and decoding ASN.1-annotated objects. I have to include the pyasn1 library separately because the version in the CentOS 6.3 default repositories is one major version behind and has known bugs that would break my custom package.
The top-level of the library structure is as follows:
MyLibrary/
    setup.py
    setup.cfg
    LICENSE.txt
    README.txt
    MyCustomPackage/
    pyasn1-0.1.6/
In my setup configuration file I define the install directory for my library to be a local directory called .lib. This is desirable because it lets me do absolute imports by running import site; site.addsitedir("MyLibrary/.lib") in the project's main application, without requiring our engineers to pass command-line arguments to the setup script.
setup.cfg
[install]
install-lib=.lib
setup.py
from setuptools import setup

setup(
    name='MyLibrary',
    version='0.1a',
    package_dir={'pyasn1': 'pyasn1-0.1.6/pyasn1'},
    packages=[
        'MyCustomPackage',
        'pyasn1',
        'pyasn1.codec',
        'pyasn1.compat',
        'pyasn1.codec.ber',
        'pyasn1.codec.cer',
        'pyasn1.codec.der',
        'pyasn1.type'
    ],
    license='',
    long_description=open('README.txt').read(),
    data_files=[]
)
The problem I've run into with doing the installation this way is that when my package tries to import pyasn1 it imports the global version and ignores the locally installed version.
As a possible workaround I have tried installing the pyasn1 package under a different name than the global package (e.g., pyasn1_0_1_6) by doing package_dir = {'pyasn1_0_1_6': 'pyasn1-0.1.6/pyasn1'}. However, this fails because the imports used internally by the pyasn1 package do not use the pyasn1_0_1_6 name.
Is there some way to either a) force Python to import a locally installed package over a globally installed one or b) force a package to install under a different name?
Use virtualenv to ensure that your application runs in a fully known configuration which is independent from the OS version of libraries.
EDIT: a quick (Unix) solution is setting the PYTHONPATH environment variable, which works just like PATH but for Python modules (a module is loaded from the first path in which it is found, so simply prepend your directory to PYTHONPATH). Anyway, I strongly recommend you proceed with virtualenv, since it was specifically engineered for handling situations like the one you are facing.
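For example (a sketch; the .lib path follows the question's layout):
# Prepend the local install dir so it shadows the system-wide pyasn1
export PYTHONPATH="/path/to/MyLibrary/.lib:${PYTHONPATH}"
python -c "import pyasn1; print(pyasn1.__file__)"   # should now resolve under .lib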
Rationale
The process is easily automatable if you write a setuptools script specifying dependencies with install_requires. For a complete example, refer to this one I wrote.
Setup
Note that you can easily insert the steps below in a setup.sh shell script.
First create a virtualenv and enter it:
$ virtualenv $name
$ cd $name
Activate it:
$ source bin/activate
Now cd to your project directory and run the installer script:
$ cd $my_project_dir
$ python ./setup.py install --prefix=$path_to_virtualenv
Note the --prefix=$path_to_virtualenv, which tells the script to install into the virtualenv instead of system-wide. Call this after activating the virtualenv. Note that all the dependencies are automatically downloaded and installed in the virtualenv.
Then you are done. When you want to leave the virtualenv, issue:
$ deactivate
On subsequent calls, you will only need to activate the virtualenv (step 2), maybe using a runawesomeproject.sh if you really want.
As noted on the virtualenv website, you should use virtualenv >= 1.9, as the previous versions did not download dependencies via HTTPS. If you consider plain HTTP to be sufficient, then any version should do.
You might also try relocatable virtualenvs: set one up and copy the folder to your host. Anyway, note that this feature is still experimental.