KeyError 'None' while running poetry lock - python-poetry

I'm facing the following error while running poetry lock:
Updating dependencies
Resolving dependencies... (9.9s)
KeyError
'None'
at ~/.poetry/lib/poetry/mixology/version_solver.py:112 in _propagate
108│ # Iterate in reverse because conflict resolution tends to produce more
109│ # general incompatibilities as time goes on. If we look at those first,
110│ # we can derive stronger assignments sooner and more eagerly find
111│ # conflicts.
→ 112│ for incompatibility in reversed(self._incompatibilities[package]):
113│ result = self._propagate_incompatibility(incompatibility)
114│
115│ if result is _conflict:
116│ # If the incompatibility is satisfied by the solution, we use
Python & Poetry versions:
% python --version
Python 3.8.0
% poetry --version
Poetry version 1.1.4
Any suggestions?
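In similar reports against Poetry 1.1.x, this KeyError traced back to stale cache or lock data; clearing both, or upgrading Poetry, usually resolves it. A sketch of those commands (assumption: a stale package cache/lock is the trigger here too; guarded so each step no-ops if poetry is not on PATH):

```shell
# Workaround sketch for the Poetry 1.1.x solver KeyError.
if command -v poetry >/dev/null 2>&1; then
    poetry cache clear pypi --all -n   # drop cached package metadata
    rm -f poetry.lock                  # force a clean resolution
    poetry lock
fi
```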

Related

EAS build gives: Invalid `RNGestureHandler.podspec` file: undefined method `exists?'

Recently, when I run yarn eas build for my Expo project, the build has started failing with
[INSTALL_PODS] Using Expo modules
[INSTALL_PODS] [Expo] Enabling modular headers for pod ExpoModulesCore
...
[INSTALL_PODS] [!] Invalid `Podfile` file:
[INSTALL_PODS] [!] Invalid `RNGestureHandler.podspec` file: undefined method `exists?' for File:Class.
...
[INSTALL_PODS] # -------------------------------------------
[INSTALL_PODS] #
[INSTALL_PODS] > isUserApp = File.exists?(File.join(__dir__, "..", "..", "node_modules", "react-native", "package.json"))
[INSTALL_PODS] # if isUserApp
[INSTALL_PODS] # -------------------------------------------
I don't build locally often (remote builds on the Expo servers do fine), so any number of things might have triggered this over the past several weeks, including a migration from an Intel MBP to an M2 MBA, but I wonder if there's an obvious cause that someone has experience with. The error suggests the podspec calls a method that no longer exists. The suggestions I've found online for addressing this involve all kinds of tweaking that is way beyond what I'm familiar with. My experience with Expo/EAS has just been to make sure to run
brew install cocoapods fastlane
and not much more than that. Ideally I'd like to avoid messing with special gem installations of the sort suggested as quick fixes.
So the question is: is this indeed just a bug in a podfile (use of a deprecated method) that will eventually get fixed?
UPDATE: Broadly it seems that the answer is "yes": this does get fixed in later versions of the affected packages, but those packages are not officially compatible with Expo. If I update them to versions that allow building, then I get warnings:
[RUN_EXPO_DOCTOR] [16:17:37] Some dependencies are incompatible with the installed expo package version:
[RUN_EXPO_DOCTOR] [16:17:37] - react-native-gesture-handler - expected version: ~2.8.0 - actual version installed: 2.9.0
[RUN_EXPO_DOCTOR] [16:17:37] - react-native-reanimated - expected version: ~2.12.0 - actual version installed: 2.14.4
so the question becomes: when will Expo officially support package versions required to successfully build?
Ruby 3.2.0 removed File.exists?.
This issue was reported on the expo repo on GitHub.
The recommended fix is to upgrade to expo#47.0.13.
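Until the upgrade is possible, a stopgap some projects use is to patch the deprecated Ruby call in node_modules directly (the same idea as patch-package). A hedged sketch; the directory layout is an assumption about a typical Expo project, and the patch is lost on every fresh install:

```python
# Replace the Ruby call removed in 3.2 (File.exists?) with its modern
# spelling (File.exist?) in every .podspec under a directory tree.
import tempfile
from pathlib import Path

def patch_podspecs(root: Path) -> int:
    """Return how many .podspec files were rewritten."""
    patched = 0
    for spec in root.rglob("*.podspec"):
        text = spec.read_text()
        if "File.exists?" in text:
            spec.write_text(text.replace("File.exists?", "File.exist?"))
            patched += 1
    return patched

# Demo on a throwaway directory standing in for node_modules:
demo = Path(tempfile.mkdtemp())
(demo / "RNGestureHandler.podspec").write_text(
    'isUserApp = File.exists?(File.join(__dir__, "package.json"))\n'
)
print(patch_podspecs(demo))  # 1
print((demo / "RNGestureHandler.podspec").read_text().strip())
```

In a real project you would call `patch_podspecs(Path("node_modules"))` in a postinstall step.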

Why does poetry remove virtualenv?

My package doesn't require virtualenv directly; some third-party package does. However, when running tests in tox, poetry install -E test -vvv always fails because:
poetry removes virtualenv first, even though tox created it
it then tries to remove other packages and fails: since virtualenv is gone, some packages cannot be found.
the tox.ini:
[testenv]
skip_install = true
deps = poetry
commands =
    poetry install -E test -vvv
the errors:
Project environment contains an empty path in sys_path, ignoring.
Installing dependencies from lock file
Finding the necessary packages for the current system
Package operations: 73 installs, 1 update, 16 removals, 68 skipped
• Removing virtualenv (20.16.3): Pending...
• Removing virtualenv (20.16.3): Removing...
• Removing virtualenv (20.16.3)
• Removing webencodings (0.5.1): Pending...
• Removing webencodings (0.5.1): Removing...
• Removing webencodings (0.5.1): Failed
Command '['/apps/backtest/.tox/py38/bin/python', '/apps/backtest/.tox/py38/lib/python3.8/site-packages/virtualenv/seed/wheels/embed/pip-22.2.2-py3-none-any.whl/pip', 'uninstall', 'webencodings', '-y']' returned non-zero exit status 2.
Command ['/apps/backtest/.tox/py38/bin/python', '/apps/backtest/.tox/py38/lib/python3.8/site-packages/virtualenv/seed/wheels/embed/pip-22.2.2-py3-none-any.whl/pip', 'uninstall', 'zipp', '-y'] errored with the following return code 2, and output:
/apps/backtest/.tox/py38/bin/python: can't open file '/apps/backtest/.tox/py38/lib/python3.8/site-packages/virtualenv/seed/wheels/embed/pip-22.2.2-py3-none-any.whl/pip': [Errno 2] No such file or directory
Of course pip doesn't exist, since it belongs to virtualenv and was removed along with it.
The questions are:
how can I find which third-party packages require virtualenv?
how can I stop poetry from removing virtualenv (it removes it only to install it again later) if I can't drop the dependency on virtualenv?
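For the first sub-question, one way to see which installed distributions declare a dependency on virtualenv is to scan package metadata. A standard-library-only sketch; run it inside the tox environment:

```python
# List installed distributions whose requirements mention a target package.
import re
from importlib import metadata

def reverse_deps(target: str) -> list:
    """Return names of installed distributions that require `target`."""
    hits = set()
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # The requirement name ends at the first space/operator/marker.
            name = re.split(r"[\s;<>=!~(\[]", req, maxsplit=1)[0]
            if name.lower() == target.lower():
                hits.add(dist.metadata["Name"])
    return sorted(hits)

print(reverse_deps("virtualenv"))
```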
You are mixing two different installation concepts and the second overrides the first.
deps = poetry
This installs poetry (and its dependencies, including virtualenv) into the virtual environment created by tox. The deps section is a tox concept that installs packages required for testing, beyond the installation of the package itself.
Then the commands run.
poetry install -E test -vvv
The poetry command will detect that it is running inside a virtualenv and install the dependencies into that virtualenv, but it also cleans up packages your project doesn't need. Thus, poetry removes its own dependencies, causing the errors you're encountering.
The solution is documented here. Use case 1 does the trick for me.
You would need to include the pyproject.toml in your question, as that would be necessary to identify any erroneous setup there.
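For reference, the first documented use case amounts to not installing poetry into the test environment at all: tox builds and installs the package through the poetry-core build backend, and the test tools go in deps. A sketch (assuming pyproject.toml declares poetry-core as the build backend; pytest stands in for your actual test runner):

```ini
[tox]
isolated_build = true

[testenv]
deps = pytest
commands = pytest tests/
```

Because poetry itself never runs inside the environment, it never gets the chance to uninstall virtualenv.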

Version of a built `conda-forge` package is different between `pip list` and the `conda list` (it should be the same)

I recently added the package typepigeon to conda-forge. On conda-forge it is currently at version 1.0.9; however, when installing typepigeon via conda install, the output of pip list shows its version to be 0.0.0.post2.dev0+a27ab2a instead of 1.0.9.
conda list:
typepigeon 1.0.9 pyhd8ed1ab_0 conda-forge
pip list:
typepigeon 0.0.0.post2.dev0+a27ab2a
I think the issue arises from the way I am assigning the version (I am using dunamai to extract the Git tag as the version number). This version extraction is done within setup.py of typepigeon.
try:
    __version__ = Version.from_any_vcs().serialize()
except RuntimeError as error:
    warnings.warn(f'{error.__class__.__name__} - {error}')
    __version__ = '0.0.0'
When conda-forge builds the feedstock, I think it might be looking at the Git tag of the feedstock repository instead of the version from PyPI (as it is locally executing setup.py).
How can I modify the Conda Forge recipe to force the PyPI version?
I've figured out a solution; it might not be the best possible way to do this, but it works for my workflow.
I injected the version into the setup.py by looking for an environment variable (that I called __version__):
if '__version__' in os.environ:
    __version__ = os.environ['__version__']
else:
    from dunamai import Version

    try:
        __version__ = Version.from_any_vcs().serialize()
    except RuntimeError as error:
        warnings.warn(f'{error.__class__.__name__} - {error}')
        __version__ = '0.0.0'
Then, in the conda-forge recipe, I added an environment variable (__version__) to the build step:
build:
  noarch: python
  script: export __version__={{ version }} && {{ PYTHON }} -m pip install . -vv
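The override can be rehearsed in isolation. A minimal sketch, with a stub constant in place of the dunamai branch (which needs a VCS checkout to work):

```python
import os

def resolve_version(fallback: str = "0.0.0") -> str:
    # Mirrors the setup.py logic: an injected env var wins over VCS detection.
    if "__version__" in os.environ:
        return os.environ["__version__"]
    return fallback  # stand-in for Version.from_any_vcs().serialize()

os.environ["__version__"] = "1.0.9"  # what `export __version__={{ version }}` does
print(resolve_version())  # 1.0.9
```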

Beam Dataflow job stuck after upgrading the Apache Beam version from 2.27.0 to 2.32.0

Currently I am in the process of upgrading the Apache Beam version from 2.27.0 to 2.32.0, but when I start my jobs on the Dataflow runner, the job gets stuck during worker startup and never finishes installing dependencies. The Python version is 3.7.
This is what I see in the logs
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime
After initial analysis it looks like this is an issue with pip's dependency backtracking: it keeps downloading and installing candidate versions. These are some warnings in the logs:
INFO: pip is looking at multiple versions of google-auth to determine which version is compatible with other requirements.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead:
This is the setup.py for the Beam job:
import setuptools

REQUIRED_PACKAGES = [
    "numpy==1.21.4",
    "pandas==0.25.3",
    "dateparser==1.1.0",
    "python-dateutil==2.8.2",
    "pytz==2021.3",
    "google-api-core==1.14.0",
    "google-cloud-storage==1.36.1",
    "fastavro==0.22.10",
]

setuptools.setup(
    name="data-workflows",
    version="0.1.0",
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages(),
)
The pipelines used to run fine on Beam 2.27.0. I am not sure whether these warnings are the cause of the issue. Could someone please help me identify the root cause of this problem?
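A common culprit after a Beam upgrade is a hard pin that now contradicts what apache-beam itself requires, which sends pip's resolver backtracking through old versions. A toy check of pins against minimums illustrates the idea; the minimums below are placeholder assumptions for illustration, not Beam 2.32.0's real constraints (read those out of apache-beam's setup.py):

```python
# Toy pin-vs-minimum check; the Beam-side minimums are ASSUMPTIONS.

def at_least(pinned: str, minimum: str) -> bool:
    """Compare dotted release versions numerically (no pre-release handling)."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(pinned) >= as_tuple(minimum)

pins = {"google-api-core": "1.14.0", "fastavro": "0.22.10"}
assumed_minimums = {"google-api-core": "1.21.0", "fastavro": "1.0.0"}

for name, minimum in assumed_minimums.items():
    status = "ok" if at_least(pins[name], minimum) else f"conflicts with >={minimum}"
    print(f"{name}=={pins[name]}: {status}")
```

If such a conflict shows up, relaxing or updating the pin (or pinning the full transitive set with a constraints file) usually stops the backtracking.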

"Placeholder too short" error during anaconda installation of ncurses

I'm trying to install rpy2 with anaconda using:
conda install -c https://conda.anaconda.org/r rpy2
While conda is updating dependencies and linking packages, it stops with this error:
Linking packages ...
Error: ERROR: placeholder '/root/miniconda3/envs/_build_placehold_placehold_placehold_placehold_placehold_p' too short in: ncurses-5.9-4
Here's info for the installation.
Current conda install:
platform : linux-64
conda version : 3.18.2
conda-build version : 1.14.1
python version : 2.7.10.final.0
requests version : 2.8.0
Does anyone know what this error means and how to resolve it?
When Conda installs files, some of them have the build prefix in them. That's the placeholder you see. We have to change that before packages will work on your system. That's "relocatability." The prefix that you are trying to install to is longer than the prefix that the package was built with. We can replace longer strings with shorter strings in the replacement, but not vice versa.
We have increased the path length of the build prefix in Conda-Build 2.0.0, which is in beta right now. Once people begin using this, these problems should go away. However, it will only be truly effective by rebuilding all packages that have binary-embedded prefixes. This will take quite a while.
TLDR: try to install to a shorter folder path, if at all possible.
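The mechanism can be sketched as a toy byte-level relocation. This is an illustration of the constraint described above, not conda's actual implementation; it assumes the embedded path is a NUL-terminated C string, so padding preserves the binary's layout:

```python
def relocate(blob: bytes, build_prefix: bytes, install_prefix: bytes) -> bytes:
    """Swap an embedded build prefix for an install prefix, preserving length."""
    if len(install_prefix) > len(build_prefix):
        raise ValueError("placeholder too short: install prefix exceeds build prefix")
    # Pad with NULs so offsets and the string's terminator stay where they were.
    padded = install_prefix + b"\x00" * (len(build_prefix) - len(install_prefix))
    return blob.replace(build_prefix, padded)

blob = b"prefix=/root/envs/_build_placehold_placehold\x00;data"
# Shorter install prefix: fits, gets NUL-padded.
print(relocate(blob, b"/root/envs/_build_placehold_placehold", b"/opt/conda/envs/r"))
# Longer install prefix: the error this question is about.
try:
    relocate(blob, b"/root/envs/_build_placehold_placehold",
             b"/a/very/long/installation/prefix/that/overflows/the/placeholder")
except ValueError as err:
    print(err)
```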
