Just tried to rebuild a container with DRF and drf-yasg. The exact same commit was passing all tests fine but is now falling over with the following exception:
ImportError: Could not import 'rest_framework.schemas.coreapi.AutoSchema' for API setting 'DEFAULT_SCHEMA_CLASS'. ModuleNotFoundError: No module named 'rest_framework.schemas.coreapi'.
Nothing else has changed, but it seems a newer package version may have been pulled in that broke the Swagger generator.
Anyone else experience similar?
So it seems pip was pulling DRF 3.10, which switches schema generation from CoreAPI to OpenAPI: https://www.django-rest-framework.org/community/3.10-announcement/ . Adding the setting from the release documentation:
REST_FRAMEWORK = {
...
'DEFAULT_SCHEMA_CLASS': 'rest_framework.schemas.coreapi.AutoSchema'
}
did not seem to make any difference.
I would presume your dependencies in requirements.txt are not specific enough, and rebuilding the container has installed a later version of djangorestframework.
Check for a line in your requirements file like djangorestframework>=3.9. This should be changed either to pin a specific version (djangorestframework==3.9), or to pin a specific minor release so you will still receive bug fixes and security updates (djangorestframework>=3.9,<3.10).
These lines can also be used directly with pip, in case your container build uses pip directly, e.g. pip install "djangorestframework>=3.9,<3.10".
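For example, a requirements.txt using the minor-release style of pin might look like the following sketch (the django and drf-yasg lines, and the exact bounds, are illustrative assumptions rather than values from your project):

django>=2.2,<3.0
djangorestframework>=3.9,<3.10
drf-yasg>=1.16,<1.17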
It seems that installing coreapi separately may help: pip install coreapi
pip3 install packaging

solved it for me!
After a fair bit of thrashing, I successfully installed the Python Camelot PDF table extraction tool (https://pypi.org/project/camelot-py/) and it works for the intended purpose. But in order to get it to work, aside from having to correct a deprecated dependency (by editing pyproject.toml and setting PyPDF2 = "2.12.1"), I used pip to install Camelot from within a Poetry (my preferred package manager) environment, because I haven't yet figured out any other way.
Since I’m very new to Python and package management (but not to programming) I have some holes in my basic understanding that I need to patch up. I thought that using two package managers on the same project in principle defeats the purpose of using package managers, so I feel like I’m lucky that it works. Would love some input on what I’m missing.
The documentation for Camelot provides instructions for installing via pip and conda (https://camelot-py.readthedocs.io/en/master/user/install-deps.html), but not Poetry. As I understand (or misunderstand) it, packages are added to Poetry environments via the pyproject.toml file and then calling "poetry install."
I updated pyproject.toml as follows, having identified the current Camelot version as 0.10.1 (camelot --version):
[tool.poetry.dependencies]
python = "^3.8"
PyPDF2 = "2.12.1"
camelot = "^0.9.0"
This led to the error:
Because camelot3 depends on camelot (^0.9.0) which doesn't match any versions, version solving failed.
Same problem if I set (camelot = "0.10.1"). So I took the Camelot reference out of pyproject.toml, and ran the following command from within my Poetry virtual environment:
pip install "camelot-py[base]"
I was able to successfully proceed from here, but that doesn’t feel right. Is it wrong to try to force this project into Poetry, and should I instead consider using different package managers for different projects? Am I misunderstanding how Poetry works? What else am I missing here?
Whenever you see pip install 'Something[extra]' you can replace it with poetry add 'Something[extra]'.
Alternatively you can write it directly in the pyproject.toml and then run poetry install instead:
[tool.poetry.dependencies]
# ...
Something = { version = "*", extras = ["extra"] }  # a version constraint is required alongside extras
Note that in your question you wrote camelot in the pyproject.toml but it is camelot-py that you should have written.
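Putting that together for this case, a sketch using the camelot-py name, the base extra from the install docs, and the 0.10.1 version identified in the question:

poetry add "camelot-py[base]"

or, equivalently, in pyproject.toml:

[tool.poetry.dependencies]
camelot-py = { version = "^0.10.1", extras = ["base"] }

followed by poetry install.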
I maintain a conda-forge package called switch_model. Subsequent to our last release (2.0.5), one of the packages we depend on has made an incompatible change. So I am trying to publish a post-release, 2.0.5.post2, that requires an older version of that package.
I've managed to create the post-release on PyPI and I can install it successfully with pip. I also updated my meta.yaml for the recipe and pushed that to GitHub (https://github.com/conda-forge/switch_model-feedstock/blob/master/recipe/meta.yaml).
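In outline, the change to the recipe looks roughly like this (simplified here, with the dependency that broke compatibility replaced by a placeholder name):

{% set version = "2.0.5.post2" %}

package:
  name: switch_model
  version: "{{ version }}"

requirements:
  run:
    - python
    - some_dependency <2.0  # placeholder: pin below the incompatible release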
Now, the conda-forge website at https://anaconda.org/conda-forge/switch_model identifies the latest version as 2.0.5.post2. But when I try to install to my computer using conda install -c conda-forge switch_model, it says it will install the older 2.0.5 version. If I try conda install -c conda-forge switch_model=2.0.5.post2, I get a message that it cannot be found. However, if I use conda install -c conda-forge/label/main switch_model, it installs the latest version (2.0.5.post2).
So as things stand, the new version is on conda-forge, but people who try to install my package will still get the old version with the wrong dependencies, and it won't work.
Does anyone know how to get conda to automatically install the post-release version? It's possible that I needed to fork the switch_model-feedstock repository into my personal account on GitHub, then do a pull request back to the conda-forge account. But I'm not sure if that would have made a difference (I don't think I did that for the original 2.0.5 version), and I'm not sure how I would do it retroactively, since I've already pushed the new version of meta.yaml into the conda-forge version of the repository.
Update
By the time I finished writing this question, the 2.0.5.post2 version had started installing by default. So I may have just needed to wait until something happened in the delivery system. So my question now is: is there anything I could have done to test that the new version of the package would soon be available to users (e.g., clear some cache of available versions)? Would it make a difference if I updated the package via a pull request from another repository instead of pushing directly to the conda-forge version?
By the time I finished writing this question, the 2.0.5.post2 version had started installing by default. So I may have just needed to wait until something happened in the delivery system.
Packages can take some time (~1 hour) to actually be installable through conda, even if they appear on anaconda.org.
So my question now is, is there anything I could have done to test that the new version of the package would soon be available to users (e.g., clear some cache of available versions)?
I'm not completely sure what's being asked here, but:
If you're asking if you can force your users to update their version, no.
If you wish to ensure the build is not broken, you can run its tests during the build process (see the sketch below this list): https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#test-commands
If your concern is the discrepancy between it seemingly being online and Anaconda's servers actually delivering it to users via conda, not really. Once a build passes all of the status checks that conda-forge's bots do, there are vanishingly few reasons why it would fail to be available to users later.
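For example, a minimal test section in a meta.yaml looks roughly like this (the import check and command below are illustrative, not taken from your recipe):

test:
  imports:
    - switch_model        # fails the build if the package cannot even be imported
  commands:
    - python -c "import switch_model"  # illustrative; replace with a real smoke test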
Would it make a difference if I updated the package via a pull request from another repository instead of pushing directly to the conda-forge version?
No, and in general it's best, wherever possible, to stay within the infrastructure that conda-forge has already built out.
I have used a YAML file and have imported PyYAML into my project.
The code works fine in PyCharm; however, after building an egg and running it from the command prompt, I get a module-not-found error.
You have not provided quite enough information for an exact answer, but for missing Python modules, simply run
py -m pip install PyYaml
or, in some cases
python -m pip install PyYaml
You may have imported it in your project (in PyCharm), but you have to make sure it is installed for the interpreter that actually runs your code outside the IDE, on your system.
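One quick way to check, from the same command prompt where the egg fails, is to ask that interpreter directly (assuming PyYAML's import name, yaml):

py -c "import sys, yaml; print(sys.executable, yaml.__version__)"

If that fails with a module-not-found error, the interpreter on your PATH is not the one PyCharm was using.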
I have not made an .egg for some time (you should really consider using wheels for distributing packages), but IIRC an .egg should have a requires.txt file with an entry that specifies the dependency on pyyaml.
You normally get that when setup() in your setup.py has an argument install_requires:
from setuptools import setup

setup(
    ...
    install_requires=['pyyaml<4'],
    ...
)
(PyYAML 4.1 was retracted because there were problems with that version, but it might be in your local cache of PyPI as it was in my case, hence the <4, which restricts installation to the latest 3.x release)
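If you want to apply the same constraint directly with pip while testing, the version-specifier syntax is identical to the one used in install_requires:

py -m pip install "pyyaml<4"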
I am using QPython as a non-root user. I have searched for solutions, but none of the recommendations work, whether applied manually or via pip; I keep getting errors...
I get errors when I use both:
pip install requests from the pip console
and:
import pip
pip.main(['install','requests']) on the Python console
The error is something like:
cannot fetch base url https://pypi.python.org/simple/
could not find any downloads that satisfy the condition requests
...
If there is a workaround or a fix, I would be happy to accept it.
Did you use the newest version (>=2.0.7)? Installing requests from QPYPI works well in the newest version: https://github.com/qpython-android/qpython/releases
Yes! This fixed my problem once I used the beta v2.1 from
https://github.com/qpython-android/qpython/releases
Google Play did not give me the latest version (I had 1.xx).
I was able to use QPYPI to install requests, and it automatically installed the required library urllib3.
I am on 64-bit Windows, I have installed Anaconda, and I managed to create an environment with Python 2.7.
I have numpy, pylearn2, and theano, and every package is built properly.
I have been able to import all these modules; however, I get some very esoteric messages when I try to complete the model, like:
ImportError: Could not import pylearn2.models.softmax_regression but could import pylearn2.models. Original exception: No module named dnn
Then I tried to actually find the package in the installation, but inside the cuda folder there is no module named dnn. Looking at GitHub, I see that it should be there.
Why is Theano missing modules? I installed it using conda install theano; it gave some suggestions, and I managed to pick the correct one.
I have uninstalled and reinstalled Theano many times; I can import it, but I can never get the proper modules.
What is going wrong?
OK, after a few days of searching, it seems that Theano installed from Anaconda is missing a lot of modules. However, installing Theano directly from the repository with
pip install --upgrade --no-deps git+https://github.com/Theano/Theano.git
seems to resolve the issue. Since Windows normally does not have Git, it can easily be installed from here (the installer seems to take care of the PATH environment variable):
https://git-scm.com/download/win
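Once reinstalled, a quick sanity check is to confirm which Theano is being imported and whether the missing module now exists (in Theano of this era the cuDNN wrapper lives at theano.sandbox.cuda.dnn; the second import may still warn if CUDA is not configured, but a "No module named dnn" error would mean the files are still absent):

python -c "import theano; print(theano.__file__)"
python -c "import theano.sandbox.cuda.dnn"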