Can't install recent version of conda-forge package - anaconda

I maintain a conda-forge package called switch_model. Subsequent to our last release (2.0.5), one of the packages we depend on has made an incompatible change. So I am trying to publish a post-release, 2.0.5.post2, that requires an older version of that package.
I've managed to create the post-release on PyPI and can install it successfully with pip. I also updated my meta.yaml for the recipe and pushed that to GitHub (https://github.com/conda-forge/switch_model-feedstock/blob/master/recipe/meta.yaml).
Now, the conda-forge website at https://anaconda.org/conda-forge/switch_model identifies the latest version as 2.0.5.post2. But when I try to install to my computer using conda install -c conda-forge switch_model, it says it will install the older 2.0.5 version. If I try conda install -c conda-forge switch_model=2.0.5.post2, I get a message that it cannot be found. However, if I use conda install -c conda-forge/label/main switch_model, it installs the latest version (2.0.5.post2).
So as things stand, the new version is on conda-forge, but people who try to install my package will still get the old version with the wrong dependencies, and it won't work.
Does anyone know how to get conda to automatically install the post-release version? It's possible that I needed to fork the switch_model-feedstock repository into my personal account on GitHub, then do a pull request back to the conda-forge account. But I'm not sure if that would have made a difference (I don't think I did that for the original 2.0.5 version), and I'm not sure how I would do it retroactively, since I've already pushed the new version of meta.yaml into the conda-forge version of the repository.
Update
By the time I finished writing this question, the 2.0.5.post2 version was installing by default. So I may have just needed to wait for something to happen in the delivery system. So my question now is: is there anything I could have done to test that the new version of the package would soon be available to users (e.g., clear some cache of available versions)? Would it make a difference if I updated the package via a pull request from another repository instead of pushing directly to the conda-forge version?

By the time I finished writing this question, the 2.0.5.post2 version was installing by default. So I may have just needed to wait for something to happen in the delivery system.
Packages can take some time (~1 hour) to actually be installable through conda, even if they appear on anaconda.org.
So my question now is, is there anything I could have done to test that the new version of the package would soon be available to users (e.g., clear some cache of available versions)?
Not completely sure what's being asked here.
If you're asking if you can force your users to update their version, no.
If you wish to ensure the build is not broken, you can run its tests during the build process: https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#test-commands
If your concern is the discrepancy between it seemingly being online and Anaconda's servers actually delivering it to users via conda, not really. Once a build passes all of the status checks that conda-forge's bots do, there are vanishingly few reasons why it would fail to be available to users later.
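If the worry is that your local conda is looking at a stale copy of the channel metadata, one thing you could try (a sketch, not something the answer above requires) is clearing conda's index cache and re-querying the channel:
conda clean --index-cache
conda search -c conda-forge "switch_model>=2.0.5.post2"
If the search shows the new build, it should be installable once the local metadata has refreshed.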
Would it make a difference if I updated the package via a pull request from another repository instead of pushing directly to the conda-forge version?
No, and in general it's best, wherever possible, to stay within the infrastructure that conda-forge has already built out.

Related

How to instruct pip to only install dependencies that are newer than the presently installed versions?

I realize that a requirements.txt file can be used to pin the versions used in pip install. Sometimes I don't want to go through all that and simply want to protect my installation from downgrades. Is there any way to instruct pip install to do that?
An example: I just installed librosa and it downgraded numpy from 1.24.1 to 1.23.5. I don't want that behavior unless I explicitly request it. On the other hand, if there are missing dependencies, then by all means grab them.
For this installation of Python it is acceptable to take the risk of occasionally ending up with a mismatch due to installing newer versions [but I don't want older ones].
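One approach that might work here (a sketch using pip's constraints files, not a dedicated no-downgrade flag): convert the currently installed versions into lower-bound constraints and pass them along with the install, so pip cannot resolve to anything older:
pip freeze > frozen.txt
sed 's/==/>=/' frozen.txt > constraints.txt   # treat every installed version as a minimum
pip install librosa -c constraints.txt        # missing deps are fetched; a required downgrade makes the resolve fail instead
Note that any lines pip freeze emits in non-"==" form (editable or URL installs) would need to be removed from the constraints file first.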

Install package with pip that has additional qualifiers

Our CI pipeline publishes wheels for branches that have a version of
<base version>-dev<timestamp>+<branch name>.p<pipeline id>
So if I am working on cool-stuff in the xyzzy branch, it might upload a wheel for version 1.2.3.dev202211221111+xyzzy.p1234.
Somebody else, working in the foobar branch, might cause 1.2.3.dev202211221115+foobar.p1235 to be created.
How can I get pip to install the latest version from the xyzzy branch? I tried pip install cool-stuff>1.2.3.dev*+xyzzy but it complained that it could not find a matching version (even though the available versions it listed included a +xyzzy tag).
pip install cool-stuff==1.2.3.dev202211221111+xyzzy.p1234 did work, but I would prefer not to have to update the timestamp and pipeline number each time. I am hoping to put cool-stuff >= <magic> in my config file and just run pip install -e . whenever I need new dependencies.
What format do I need to use here?
As far as I know, it is not possible. I cannot think of a workable solution based on the version string only. The "local" part of the version string (in other words, the part after the plus sign +) cannot be used to differentiate between two releases in the way you intend.
If I were in your situation, I think I would investigate a solution where the CI/CD pipelines generate distributions with a name customized according to the git branch. For example in your case the pipelines should generate wheels for Library-foobar or Library-xyzzy, depending on what branch is currently being worked on (while still keeping the same top-level import names of course). This assumes that you can customize your pipelines and processes deeply enough to support such a workflow.
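For illustration, if the pipelines published per-branch distributions as suggested above, an ordinary minimum-version specifier would then be enough (the cool-stuff-xyzzy name is hypothetical):
pip install "cool-stuff-xyzzy >= 1.2.3.dev0"   # the .dev0 in the specifier also opts in to pre-releases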

Access old pytorch release from conda cloud

According to multiple sites, there was a binary release 0.2.1 for pytorch in the repo peterjc123 (e.g. https://moodle.di.ens.fr/mod/forum/discuss.php?d=9#p33).
I also see a release 0.3.0 when looking at the only snapshot from archive.org.
However, the conda cloud website only shows the latest version (0.3.1; https://anaconda.org/peterjc123/pytorch/files); the same applies to
conda search pytorch -c peterjc123
The old download links do not work anymore.
How do I access the old version (I need a binary < 0.3 for windows 10; cuda80; py36)?
Consider using the binaries uploaded by user Soumith, who is also the uploader for the (now stable) pytorch/pytorch branch on Anaconda Cloud.
This channel has versions back to 0.2.1, so it should satisfy your requirements.
If that does not work, also consider installing with a regular pip install and specifying the version you need.
Edit: There might also be older versions for pytorch_cpu and other packages; I only checked the "main" pytorch package.
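A hedged sketch of the conda route suggested above (assuming the channel name is soumith and the old builds are still hosted there):
conda search -c soumith pytorch        # list which versions the channel still carries
conda install -c soumith pytorch=0.2.1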

Install things on Pepper

How would I install things on Pepper? I usually use apt on my Ubuntu machine and want to install some packages on Pepper, but I don't know what package manager Pepper has (if any). I also only know the package names as they appear in apt (I'm not sure whether the names are the same in other package managers). And if possible, would I be able to install apt on Pepper? Thanks.
Note: From the research I've done, Pepper is using NAOqi, which is based on Gentoo, which uses Portage.
You don't have root access on Pepper, which limits what you can install (and apt isn't on the robot anyway).
Some possibilities:
Include your content in Choregraphe projects - when you install a package, the whole directory structure is installed (more exactly, what's listed in the .pml); so you can put arbitrary files on your robot, and you can usually include whatever dependencies your code needs.
Install python packages with pip.
In NAOqi 2.5, a slightly older version of pip is installed that will not always work out of the box; I recommend upgrading it:
pip install --user --upgrade pip
... you can then use the upgraded pip to install other packages, always with --user:
/home/nao/.local/bin/pip install --user whatever-package-you-need
Note however that if you do this and use these packages in your code running on Pepper, that code won't work on other robots until you run pip on them too, which is why I usually only do this for tests; for production code I prefer packaging all dependencies in my app's package.
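One way to do that bundling (a sketch; the answer doesn't prescribe a particular mechanism) is to vendor dependencies into the project with pip's --target option, ship the resulting folder inside the app, and add it to sys.path at startup:
pip install --target ./lib requests    # vendors the package into lib/ (best suited to pure-Python dependencies)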
As a workaround, if you need to install software (or just newer versions of software), Gentoo Prefix is an option.
Gentoo Prefix builds a Gentoo OS at any location (no need for root; it can be any folder). It includes its own Portage (package manager) to install new software.
I maintain a few projects to work with Pepper and use "any" software I want. Note that they are built for both 64-bit (amd64) and 32-bit (x86), even though for Pepper only the 32-bit builds matter.
gentoo_prefix_ci and gentoo_prefix_ci_32b build the bootstrap of the Gentoo Prefix system nightly. This is a process that takes a while to compile (3-6 hours depending on your machine) and that breaks from time to time (as upstream packages are updated and bugs are found; Gentoo is a rolling-release distribution). Updated binary images ready to use can be found every night in the Releases section.
For ROS users that want to run it on the robot, and based on the previous work, I also maintain ros_overlay_on_gentoo_prefix and ros_overlay_on_gentoo_prefix_32b. They provide nightly builds with binary releases of ROS Kinetic and ROS Melodic over Gentoo Prefix using ros-overlay. You can find ready-to-use 'ros_base' and 'desktop' releases.
For purposes related to the RoboCup@Home Social Standard Platform League, where the Pepper robot is used, I also maintain a specific build that contains a lot of additional software. This project is called pepper_os and it builds 270+ ROS packages, a lot of Python packages (250+, including Theano, dlib, Tensorflow, numpy...) and all the necessary dependencies for these to build (750+ packages). Note that the base image (it's built with Docker) is the actual Pepper 2.5.5.5 image, so it can be used for debugging as if it were the real robot (although without sensors and such).
Maybe this approach, or these projects, will be useful.
The package manager on Pepper is disabled. But you can copy the files to the robot and write your own service that imports any package you might need.
As a supplement on importing:
http://www.about-robots.com/how-to-import-python-files-in-your-pepper-apps.html
To get rid of the error "SSL3_GET_SERVER_CERTIFICATE:certificate verify failed":
If you use Python and the requests package, just add verify=False at the end of your parameters.
r = requests.get(URL, params=params, headers=header, verify=False)
Works with my Pepper
To get rid of
InsecurePlatformWarning: A true SSLContext object is not available.
install
/home/nao/.local/bin/pip install --user requests[security]
To get rid of:
CryptographyDeprecationWarning: Support for your Python version is deprecated.
install
/home/nao/.local/bin/pip install --user cryptography==2.2.2
If it is based on Gentoo, maybe we could try to install Portage with pip.
pip install portage
Just a thought.

standardized conclusion required for rpm upgrade process

The rpm command provides three main operations for upgrading and installing packages:
Upgrade
An upgrade operation means installing a new version of a package and removing all previous versions of the same package. If you have not installed a package previously, the upgrade operation will install the package.
Freshen
A freshen operation means to install a new version of a package only if you have already installed another version of the package.
Install
An install operation installs a package for the first time. It also, through special command-line parameters, allows you to install multiple versions of a package, which is usually not what we want. So, in the vast majority of cases, you want to run the upgrade operation for all package installations.
You should normally install packages with rpm -U, not rpm -i. One of the main reasons is that rpm -i allows you to install multiple instances of the same (identical) package.
Is this the standard conclusion, or should I prevent a second instance of the package from being installed alongside the first, either by writing a wrapper script or by adding code to a section of the spec file?
If the second option is the answer, how can I achieve this? Please help me clear up this confusion.
Assuming you only ever want one version of an RPM installed at once, then yes, use "rpm -U".
Creating an RPM that can have multiple versions installed requires that all common files between the versions are identical. This frequently happens, so you may get this behaviour "by default".
You can also prevent multiple versions with the following in your spec:
Conflicts: %{name} < %{version}
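As a concrete illustration of the upgrade-versus-freshen distinction described above (the package file name is made up):
rpm -Uvh mypkg-2.0-1.x86_64.rpm   # upgrade: installs the new version and removes previous versions (plain install if none is present)
rpm -Fvh mypkg-2.0-1.x86_64.rpm   # freshen: upgrades only if an older version is already installed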
