How to instruct pip to only install dependencies that are newer than the presently installed versions? - pip

I realize that a requirements.txt file can be used to pin the versions used in pip install. Sometimes I don't want to go through all that and simply want to protect my installation from downgrades. Is there any way to instruct pip install to do that?
An example: I just installed librosa and it downgraded numpy from 1.24.1 to 1.23.5. I don't want that behavior unless I explicitly request it. On the other hand, if there are missing dependencies, then by all means grab them.
For this installation of Python it is acceptable to risk occasionally ending up with a mismatch due to installing newer versions (but I don't want older ones).
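There is no built-in "never downgrade" switch that I know of, but one way to approximate it (a sketch, not an official pip feature; it assumes a Unix-like shell and that pip freeze prints plain name==version lines) is to turn the currently installed versions into lower-bound constraints and pass them to pip:
# snapshot current versions as lower bounds (constraints.txt is an arbitrary name)
pip freeze | sed 's/==/>=/' > constraints.txt
# install the new package; pip must now keep or upgrade what is already installed
pip install -c constraints.txt librosa
With the constraints in place, pip will either keep numpy at 1.24.1 (or newer) or fail with a resolution error instead of silently downgrading it.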

Related

Trying to Avoid Using Two Package Managers (pip and Poetry) for the Same Project

After a fair bit of thrashing, I successfully installed the Python Camelot PDF table extraction tool (https://pypi.org/project/camelot-py/) and it works for the intended purpose. But in order to get it to work, aside from having to correct a deprecated dependency (by editing pyproject.toml and setting PyPDF2 = "2.12.1"), I used pip to install Camelot from within a Poetry (my preferred package manager) environment, because I haven't yet figured out any other way.
Since I’m very new to Python and package management (but not to programming) I have some holes in my basic understanding that I need to patch up. I thought that using two package managers on the same project in principle defeats the purpose of using package managers, so I feel like I’m lucky that it works. Would love some input on what I’m missing.
The documentation for Camelot provides instructions for installing via pip and conda (https://camelot-py.readthedocs.io/en/master/user/install-deps.html), but not Poetry. As I understand (or misunderstand) it, packages are added to Poetry environments via the pyproject.toml file and then calling "poetry install."
I updated pyproject.toml as follows, having identified the current Camelot version as 0.10.1 (camelot --version):
[tool.poetry.dependencies]
python = "^3.8"
PyPDF2 = "2.12.1"
camelot = "^0.9.0"
This led to the error:
Because camelot3 depends on camelot (^0.9.0) which doesn't match any versions, version solving failed.
Same problem if I set (camelot = "0.10.1"). So I took the Camelot reference out of pyproject.toml, and ran the following command from within my Poetry virtual environment:
pip install "camelot-py[base]"
I was able to successfully proceed from here, but that doesn’t feel right. Is it wrong to try to force this project into Poetry, and should I instead consider using different package managers for different projects? Am I misunderstanding how Poetry works? What else am I missing here?
Whenever you see pip install 'Something[extra]' you can replace it with poetry add 'Something[extra]'.
Alternatively you can write it directly in the pyproject.toml and then run poetry install instead:
[tool.poetry.dependencies]
# ...
Something = { version = "*", extras = ["extra"] }
Note that in your question you wrote camelot in the pyproject.toml but it is camelot-py that you should have written.
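So, for this particular package, something along these lines should do it (a sketch; the base extra is taken from the pip command in the question):
poetry add 'camelot-py[base]'
Poetry will then record camelot-py with its extras in pyproject.toml and poetry.lock for you.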

Can't install recent version of conda-forge package

I maintain a conda-forge package called switch_model. Subsequent to our last release (2.0.5), one of the packages we depend on has made an incompatible change. So I am trying to publish a post-release, 2.0.5.post2, that requires an older version of that package.
I've managed to create the post-release on PyPi and I can install successfully with pip. I also updated my meta.yaml for the recipe and pushed that to github (https://github.com/conda-forge/switch_model-feedstock/blob/master/recipe/meta.yaml).
Now, the conda-forge website at https://anaconda.org/conda-forge/switch_model identifies the latest version as 2.0.5.post2. But when I try to install to my computer using conda install -c conda-forge switch_model, it says it will install the older 2.0.5 version. If I try conda install -c conda-forge switch_model=2.0.5.post2, I get a message that it cannot be found. However, if I use conda install -c conda-forge/label/main switch_model, it installs the latest version (2.0.5.post2).
So as things stand, the new version is on conda-forge, but people who try to install my package will still get the old version with the wrong dependencies, and it won't work.
Does anyone know how to get conda to automatically install the post-release version? It's possible that I needed to fork the switch_model-feedstock repository into my personal account on github, then do a pull request back to the conda-forge account. But I'm not sure if that would have made a difference (I don't think I did that for the original 2.0.5 version), and I'm not sure how I would do it retroactively, since I've already pushed the new version of meta.yaml into the conda-forge version of the repository.
Update
By the time I finished writing this question, the 2.0.5.post2 version is now installing by default. So I may have just needed to wait until something happened in the delivery system. So my question now is, is there anything I could have done to test that the new version of the package would soon be available to users (e.g., clear some cache of available versions)? Would it make a difference if I updated the package via a pull request from another repository instead of pushing directly to the conda-forge version?
By the time I finished writing this question, the 2.0.5.post2 version is now installing by default. So I may have just needed to wait until something happened in the delivery system.
Packages can take some time (~1 hour) to actually be installable through conda, even if they appear on anaconda.org.
So my question now is, is there anything I could have done to test that the new version of the package would soon be available to users (e.g., clear some cache of available versions)?
Not completely sure what's being asked here.
If you're asking if you can force your users to update their version, no.
If you wish to ensure the build is not broken, you can run its tests during the build process: https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#test-commands
If your concern is the discrepancy between it seemingly being online and Anaconda's servers actually delivering it to users via conda, not really. Once a build passes all of the status checks that conda-forge's bots do, there are vanishingly few reasons why it would fail to be available to users later.
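That said, you can at least check what the channel is actually serving from a user's perspective (a sketch using standard conda commands; the version string is the one from this question):
# drop the locally cached channel index so conda re-reads the repodata
conda clean --index-cache
# list the versions the channel currently offers
conda search -c conda-forge switch_model
# dry-run the install of the specific version to confirm it resolves
conda install --dry-run -c conda-forge switch_model=2.0.5.post2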
Would it make a difference if I updated the package via a pull request from another repository instead of pushing directly to the conda-forge version?
No, and in general it's best, wherever possible, to stay within the infrastructure that conda-forge has already built out.

download with pip3 packages for another python release

I own an off-grid server (Linux x86_64 with Python 3.8) on which I would like to install PyPI packages. Up to now, I have downloaded each package and its dependencies manually from pypi.org, and it is painful.
I would like to use the pip3 download feature, which is perfect for my needs! Except that the platform I download from is very different (win_amd64 with Python 3.6). Since I want to get numpy-1.18.5-cp38-cp38-manylinux1_x86_64.whl, I tried:
pip3 download --verbose --only-binary=:all: --abi=cp38 --platform=manylinux1_x86_64 numpy
And, checking the verbose log, for the line corresponding to the expected wheel I get it is not compatible with this Python... Of course it isn't!
How could I force pip3 to download the appropriate .whl file even if it is not the correct one for the platform I download it from?
If you're passing --abi, you have to also pass the --python-version so pip can build the correct platform tag. This should work (untested):
$ pip3 download --only-binary=:all: --python-version=38 --abi=cp38 --platform=manylinux1_x86_64 numpy
Admittedly though, pip download is anything but reliable, e.g. when it comes to resolving environment markers and the like.
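For completeness, this is how it could be wired into an offline workflow (a sketch; requirements.txt and the wheels directory are placeholders for whatever you actually use):
# on the machine with internet access: fetch wheels for the target interpreter/platform
pip3 download --only-binary=:all: --python-version=38 --abi=cp38 --platform=manylinux1_x86_64 -d ./wheels -r requirements.txt
# copy ./wheels to the off-grid server, then install there without touching PyPI
pip3 install --no-index --find-links=./wheels -r requirements.txt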

Install things on Pepper

How would I install things on Pepper, since I don't know what package manager it uses? I usually use apt on my Ubuntu machine and want to install some packages on Pepper, but I'm not sure what package manager Pepper has (if any). I also only know the package names as they appear in apt (not sure if the names are the same in other package managers). And if possible, would I be able to install apt on Pepper? Thanks.
Note: from the research I've done, Pepper is running NAOqi, which is based on Gentoo, which uses Portage.
You don't have root access on Pepper, which limits what you can install (and apt isn't on the robot anyway).
Some possibilities:
Include your content in Choregraphe projects - when you install a package, the whole directory structure is installed (more exactly, what's listed in the .pml); so you can put arbitrary files on your robot, and you can usually include whatever dependencies your code needs.
Install python packages with pip.
In NAOqi 2.5, a slightly older version of pip is installed that will not always work out of the box; I recommend upgrading it:
pip install --user --upgrade pip
... you can then use the upgraded pip to install other packages, always with --user:
/home/nao/.local/bin/pip install --user whatever-package-you-need
Note however that if you do this and use your packages in your code running on Pepper, that code won't work on other robots until you do pip on them, which is why I usually only do this for tests; for production code I prefer packaging all dependencies in my app's package.
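For the "package all dependencies with the app" route, this is roughly what it can look like (a sketch; the libs folder name and the requests package are just examples):
# on a development machine: vendor the dependencies into a folder inside the project
pip install --target=./libs requests
The ./libs folder then ships inside the application package, and the code on the robot adds it to sys.path before importing. For packages with compiled extensions, the wheels must of course match Pepper's platform.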
As a workaround, if you need to install software (or just newer versions of software), using Gentoo Prefix is an option.
Gentoo Prefix builds a Gentoo OS in any location (no need for root; it can be any folder). It includes its own Portage (package manager) to install new software.
I maintain a few projects to work with Pepper and use "any" software I want. Note that they are built for 64-bit (amd64) and 32-bit (x86), even though for Pepper only the 32-bit builds matter.
gentoo_prefix_ci and gentoo_prefix_ci_32b, which build the bootstrap of the Gentoo Prefix system nightly. This is a process that takes a while to compile (3-6 hours depending on your machine) and that breaks from time to time (as upstream packages are updated and bugs are found; Gentoo is a rolling-release distribution). Every night, updated binary images ready to use can be found in the Releases section.
For ROS users that want to run it on the robot, based on the previous work, I also maintain ros_overlay_on_gentoo_prefix and ros_overlay_on_gentoo_prefix_32b. They provide nightly builds with binary releases of ROS Kinetic and ROS Melodic over Gentoo Prefix using ros-overlay. You can find ready-to-use 'ros_base' and 'desktop' releases.
For purposes related to the RoboCup@Home Social Standard Platform League, where the Pepper robot is used, I also maintain a specific build that contains a lot of additional software. This project is called pepper_os and it builds 270+ ROS packages, a lot of Python packages (250+, including Theano, dlib, Tensorflow, numpy...) and all the necessary dependencies for these to build (750+ packages). Note that the base image (it's built with Docker) is the actual Pepper 2.5.5.5 image, so it can be used for debugging as if it were the real robot (although without sensors and such).
Maybe this approach, or these projects, are useful.
The package manager on Pepper is disabled. But you can copy the files to the robot and write your own service that imports any package you might need.
As a supplement on importing:
http://www.about-robots.com/how-to-import-python-files-in-your-pepper-apps.html
To get rid of the error:
SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
If you use Python and the requests package, just add verify=False to your parameters:
r = requests.get(URL, params=params, headers=headers, verify=False)
This works on my Pepper.
To get rid of:
InsecurePlatformWarning: A true SSLContext object is not available.
install:
/home/nao/.local/bin/pip install --user requests[security]
To get rid of:
CryptographyDeprecationWarning: Support for your Python version is deprecated.
install:
/home/nao/.local/bin/pip install --user cryptography==2.2.2
If it is based on Gentoo, maybe we could try to install portage with pip.
pip install portage
Just a thought.

installing darcsden

After running cabal install on the darcsden code, I get this message:
cabal: The following packages are likely to be broken by the reinstalls:
bin-package-db-0.0.0.0
ghc-7.4.1
Use --force-reinstalls if you want to install anyway.
How do I get around this? What does it mean?
Why does it happen?
If you look at the full output of cabal install darcsden, you will find several lines that look like this:
binary-0.5.1.0 -bytestring-in-base (reinstall) changes: array-0.4.0.0 ->
0.3.0.3, containers-0.4.2.1 -> 0.4.1.0
This means that cabal has found an install plan that involves (destructively) reinstalling packages that you already have on your system.
Now, GHC packages are rather sensitive when it comes to their (transitive) dependencies, and generally only work if exactly the right version of each dependency is available, compiled against the right versions of its own dependencies, and so on. Therefore, replacing an already installed package with a version built against changed dependencies can cause some packages on your system to become unusable. Since version 0.14.0, cabal warns you about such a situation in advance to prevent you from accidentally breaking your system.
In your case, ghc and bin-package-db are among the potentially broken packages, because they depend on binary which gets reinstalled. So you should not try to use the --force-reinstalls flag, because it might really break your GHC.
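If you want to check whether anything in your package database is already broken (or ends up broken after an install), ghc-pkg ships with GHC and can report it (a sketch):
# list packages whose dependencies are missing or inconsistent
ghc-pkg check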
What can you do?
If you scan what is going to be reinstalled, you see that quite a few dependencies are downgraded. This hints at the fact that the package you are trying to install might not have been properly updated for GHC 7.4.1 yet.
You can in general try to call cabal install darcsden --avoid-reinstalls to explicitly try to find an install plan that has no reinstalls. Unfortunately, in this case, it fails (for me).
I've briefly looked at the darcsden package description, but it looks like quite a few dependencies of darcsden need to be updated. So the remaining options are: Convince the author(s) of darcsden to release an updated version, or install darcsden using an older version of GHC (such as 7.0.4), which should just work.

Resources