I'm trying to build a new conda package. I used conda skeleton to create a meta.yaml file, which I then edited to suit my needs:
package:
  name: ofl
  version: "master"

source:
  git_url: git@github.com:fergalm/Fireworks.git
  git_tag: master

build:
  # noarch: python
  # preserve_egg_dir: True
  entry_points:
    # Put any entry points (scripts to be generated automatically) here. The
    # syntax is module:function. For example
    #
    # - pyinstrument = pyinstrument:main
    #
    # Would create an entry point called pyinstrument that calls pyinstrument.main()

  # If this is a new build for the same version, increment the build
  # number. If you do not include this key, it defaults to 0.
  # number: 1

requirements:
  build:
    - python
    - setuptools

  run:
    - python
    - boto3
    - yaml
    - dateutil
I can build this package with conda-build ofl, but when I try to install it with conda install ofl --use-local I get the following error
Fetching package metadata .............
Solving package specifications: .
UnsatisfiableError: The following specifications were found to be in conflict:
- ofl
Use "conda info <package>" to see the dependencies for each package.
What does it mean for my package, ofl, to be in conflict with itself? What do I need to do to fix this?
I'm reinstalling Conda after a PC factory reset and trying to re-create an old conda environment from a yml file that I created by running
conda env export --prefix $path_to_old_env_dir > voice_dep.yml
The resulting yml file looks OK to me; here's what it looks like:
name: voiceeda
channels:
  - defaults
  - conda-forge
dependencies:
  - ca-certificates=2022.12.7=h5b45459_0
  - libsqlite=3.40.0=hcfcfb64_0
  - openssl=1.1.1s=hcfcfb64_1
  - pip=22.3.1=pyhd8ed1ab_0
  - python=3.9.13=h6244533_2
  - setuptools=66.1.1=pyhd8ed1ab_0
  - sqlite=3.40.0=hcfcfb64_0
  - tzdata=2022g=h191b570_0
  - ucrt=10.0.22621.0=h57928b3_0
  - vc=14.3=hb6edc58_10
  - vs2015_runtime=14.34.31931=h4c5c07a_10
  - wheel=0.38.4=pyhd8ed1ab_0
  - pip:
      - anyio==3.6.2
      - argon2-cffi==21.3.0
      - argon2-cffi-bindings==21.2.0
      - arrow==1.2.3
...
but when I try to run
conda create -n voiceeda -f voice_dep.yml
The following odd error occurs.
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- voice_dep.yml
I'd understand if it couldn't find a particular package (I can remove versions etc. if needed), but why is it saying it can't find the yml file itself? I'm very confused and wondering whether I missed a crucial setup step during the conda installation. I'm on Windows 10 and installed Anaconda to a D drive (conda version 23.1.0 & Python 3.9.13).
Any help would be much appreciated, thank you!
I recently added the package typepigeon to conda-forge. On conda-forge it is currently at version 1.0.9; however, when installing typepigeon via conda install, the output of pip list shows its version to be 0.0.0.post2.dev0+a27ab2a instead of 1.0.9.
conda list:
typepigeon 1.0.9 pyhd8ed1ab_0 conda-forge
pip list:
typepigeon 0.0.0.post2.dev0+a27ab2a
I think the issue arises from the way I am assigning the version (I am using dunamai to extract the Git tag as the version number). This version extraction is done within setup.py of typepigeon.
# excerpt from typepigeon's setup.py
import warnings
from dunamai import Version

try:
    __version__ = Version.from_any_vcs().serialize()
except RuntimeError as error:
    warnings.warn(f'{error.__class__.__name__} - {error}')
    __version__ = '0.0.0'
When conda-forge builds the feedstock, I think it might be looking at the Git tag of the feedstock repository instead of the version from PyPI (as it is locally executing setup.py).
How can I modify the Conda Forge recipe to force the PyPI version?
I've figured out a solution; it might not be the best possible way to do this, but it works for my workflow.
I injected the version into the setup.py by looking for an environment variable (that I called __version__):
import os
import warnings

if '__version__' in os.environ:
    __version__ = os.environ['__version__']
else:
    from dunamai import Version

    try:
        __version__ = Version.from_any_vcs().serialize()
    except RuntimeError as error:
        warnings.warn(f'{error.__class__.__name__} - {error}')
        __version__ = '0.0.0'
Then, in the conda-forge recipe, I added an environment variable (__version__) to the build step:
build:
  noarch: python
  script: export __version__={{ version }} && {{ PYTHON }} -m pip install . -vv
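For context, the {{ version }} used in that script line is the Jinja variable defined at the top of the feedstock's meta.yaml, so pip now records the same version string that conda reports. As a rough sketch of how the pieces fit together in such a recipe (the values below are illustrative, not copied from the actual typepigeon feedstock):

# sketch of a conda-forge recipe using the environment-variable override;
# name, version and checksum are placeholders here
{% set name = "typepigeon" %}
{% set version = "1.0.9" %}

package:
  name: {{ name|lower }}
  version: {{ version }}

source:
  url: https://pypi.io/packages/source/{{ name[0] }}/{{ name }}/{{ name }}-{{ version }}.tar.gz
  sha256: ...  # checksum of the PyPI sdist

build:
  noarch: python
  number: 0
  script: export __version__={{ version }} && {{ PYTHON }} -m pip install . -vv

Because noarch: python packages on conda-forge are built once on a Linux worker, the POSIX export syntax in the script line is fine there.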
I have a CodePipeline that grabs code out of CodeCommit, bundles it up in CodeBuild, and then publishes it via CloudFormation.
I want to use the Python package gspread, and because it's not part of the standard AWS Linux image I need to install it.
Currently, when the code is run, I get the error:
[ERROR] Runtime.ImportModuleError: Unable to import module 'index': No module named 'gspread'
Code structure
- buildspec.yml
- template.yml
- package/
  - gspread/
  - gspread-3.6.0.dist-info/
  - (37 other python packages)
- source/
  - index.py
buildspec.yml -- EDITED
version: 0.2

phases:
  install:
    commands:
      # Use Install phase to install packages or any pre-reqs you may need throughout the build (e.g. dev deps, security checks, etc.)
      - echo "[Install phase]"
      - pip install --upgrade pip
      - pip install --upgrade aws-sam-cli
      - sam --version
      - cd source
      - ls
      - pip install --target . gspread oauth2client
      # consider using pipenv to install everything in the environment and then copy the files installed into the /source folder
      - ls
    runtime-versions:
      python: 3.8
  pre_build:
    commands:
      # Use Pre-Build phase to run tests, install any code deps or any other customization before build
      # - echo "[Pre-Build phase]"
  build:
    commands:
      - cd ..
      - sam build
  post_build:
    commands:
      # Use Post Build for notifications, git tags and any further customization after build
      - echo "[Post-Build phase]"
      - export BUCKET=property-tax-invoice-publisher-deployment
      - sam package --template-file template.yml --s3-bucket $BUCKET --output-template-file outputtemplate.yml
      - echo "SAM packaging completed on `date`"

##################################
# Build Artifacts to be uploaded #
##################################
artifacts:
  files:
    - outputtemplate.yml
  discard-paths: yes

cache:
  paths:
    # List of paths that CodeBuild will upload to the S3 bucket and use in subsequent runs to speed up builds
    - '/root/.cache/pip'
The index.py file has more in it than this, but this shows the offending line.
-- index.py --
import os
import boto3
import io
import sys
import csv
import json
import smtplib
import gspread  # <--- Right here

def lambda_handler(event, context):
    print("In lambda_handler")
What I've tried
Creating the /package folder and committing the gspread and other packages
Running "pip install gspread" in the CodeBuild builds: commands:
At the moment, I'm installing it everywhere and seeing what sticks. (nothing is currently sticking)
Version: Python 3.8
I think you may need to do the following steps:
Use virtualenv to install the packages locally.
Create a requirements.txt to let CodeBuild know of the package requirements.
In the CodeBuild buildspec.yml, include commands to install virtualenv and then supply requirements.txt.
pre_build:
  commands:
    - pip install virtualenv
    - virtualenv env
    - . env/bin/activate
    - pip install -r requirements.txt
Detailed steps here for reference:
https://adrian.tengamnuay.me/programming/2018/07/01/continuous-deployment-with-aws-lambda/
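For reference, a minimal requirements.txt for the code in the question would only need the third-party imports that are not already available in the Lambda runtime; a sketch (the version pin is taken from the gspread-3.6.0.dist-info folder shown above and is only an assumption):

# requirements.txt (sketch; pin versions as appropriate)
gspread==3.6.0
oauth2client

If the function is packaged with SAM, placing this file in the same directory as index.py (the function's CodeUri) typically lets sam build vendor these dependencies into the deployment artifact, which is another way to avoid the ImportModuleError.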
Why is Anaconda choking on common packages when creating an environment from a YML file? Anaconda COMES with these packages pre-installed in root (or so I thought?)
YML file:
---
name: rasterenv
channels:
  - conda-forge
dependencies:
  - gdal>=2.2.3
  - rasterio
  - cython
  - jupyter
  - matplotlib
  - numpy
  - pyproj
  - shapely
  - rasterio
  - pandas
  - geopandas
  - os
  - matplotlib
  - seaborn
  - fiona
  - OSMnx
  - pip:
      - pygeotools
      - pygeoprocessing
Trying to build the environment with: conda env create -f path/to/file
If I create an environment with JUST uncommon packages like rasterio, it appears to work. BUT I want an environment with all of them! What gives here?
Error is:
ResolvePackageNotFound:
- os
If I remove os from the list, the error then becomes:
ResolvePackageNotFound:
- matplotlib
As @sinoroc pointed out in the comments, os is part of the Python standard library and should not be listed as a dependency. (When you do list it as a dependency, the package manager is going to look for a package called os on all available repositories [PyPI or anaconda.org in this case] and won't find it.)
You can see which packages are part of the standard library by checking the docs here: https://docs.python.org/3/library/
(Also there have been a few questions on SO on how to find out if a particular package is part of the std lib, e.g. How to check if a module/library/package is part of the python standard library?) When you create a new environment the packages from the std lib are the only ones which are available by default. Anything else needs to be installed.
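On Python 3.10 and newer you can also check programmatically whether a name refers to a standard-library module; a minimal sketch:

import sys

# sys.stdlib_module_names is a frozenset of all standard-library module names (Python 3.10+)
print('os' in sys.stdlib_module_names)        # True  -> part of the std lib, never a conda/pip dependency
print('rasterio' in sys.stdlib_module_names)  # False -> needs to be installed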
Additionally, there are two packages in your yaml file that are listed twice (rasterio and matplotlib), which makes me think that you created the file manually. You can generate a conda environment file by activating an environment and running conda env export > environment.yml, which will create a file called environment.yml with all required dependencies.
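For illustration, a cleaned-up version of the file from the question might look like this (os removed and the duplicate rasterio and matplotlib entries dropped; everything else is unchanged):

name: rasterenv
channels:
  - conda-forge
dependencies:
  - gdal>=2.2.3
  - rasterio
  - cython
  - jupyter
  - matplotlib
  - numpy
  - pyproj
  - shapely
  - pandas
  - geopandas
  - seaborn
  - fiona
  - OSMnx
  - pip:
      - pygeotools
      - pygeoprocessing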
I'm using two conda environments. I cannot install packages in one env, while I can install packages in the other environment.
The error message is: 'Solving environment: failed'
System: Windows 10 x64
The error message:
(py3env) C:\>conda install cython
Collecting package metadata (current_repodata.json): done
Solving environment: failed
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- anaconda/pkgs/free/win-64::protobuf==3.2.0=py36_0 -> libprotobuf==3.2.0
- anaconda/pkgs/free/win-64::tensorflow==1.2.1=py36_0 -> backports.weakref==1.0rc1
- anaconda/pkgs/free/win-64::tensorflow==1.2.1=py36_0 -> bleach==1.5.0
- anaconda/pkgs/free/win-64::tensorflow==1.2.1=py36_0 -> html5lib==0.9999999
Current channels:
- https://repo.anaconda.com/pkgs/main/win-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/win-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/msys2/win-64
- https://repo.anaconda.com/pkgs/msys2/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
Meanwhile, the install succeeds in the other environment:
(py2env) C:\>conda install cython
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: C:\Users\sonic\Anaconda3\envs\py2env
added / updated specs:
- cython
The following packages will be downloaded:
package | build
---------------------------|-----------------
certifi-2019.6.16 | py27_0 151 KB
cython-0.29.11 | py27hc56fc5f_0 2.0 MB
------------------------------------------------------------
Total: 2.1 MB
The following NEW packages will be INSTALLED:
cython pkgs/main/win-64::cython-0.29.11-py27hc56fc5f_0
The following packages will be UPDATED:
certifi anaconda/pkgs/free::certifi-2016.2.28~ --> pkgs/main::certifi-2019.6.16-py27_0
Proceed ([y]/n)?
I think it is because you have packages installed from the "free" channel, but that channel has been removed, so conda is confused about what to do. You should read the blog post anaconda.com/why-we-removed-the-free-channel-in-conda-4-7 and temporarily add the "free" channel back to your configuration, as described in that blog, by running the command conda config --set restore_free_channel true. Once Cython has finished installing, you can set restore_free_channel back to false. Thanks to @darthbith for the comments.
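As a concrete sketch of that sequence (run from the affected environment; the last command simply restores the default afterwards):

REM temporarily restore the removed "free" channel so the solver can see those old packages
conda config --set restore_free_channel true

REM retry the install that previously failed to solve
conda install cython

REM optional: turn the setting back off once the install has finished
conda config --set restore_free_channel false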