Access old pytorch release from conda cloud

According to multiple sites, there was a binary release 0.2.1 of pytorch in the peterjc123 repo (e.g. https://moodle.di.ens.fr/mod/forum/discuss.php?d=9#p33).
I also see a release 0.3.0 when looking at the only snapshot from archive.org.
However, the conda cloud website only shows the latest version (0.3.1; https://anaconda.org/peterjc123/pytorch/files ); the same applies to
conda search pytorch -c peterjc123
The old download links do not work anymore.
How do I access the old version (I need a binary < 0.3 for windows 10; cuda80; py36)?

Consider using the binaries uploaded by user soumith, who is also the uploader of the (now stable) pytorch/pytorch package on Anaconda Cloud.
This channel has versions back to 0.2.1, so it should satisfy your requirements.
If that does not work, also consider installing with a regular pip install, specifying the version you need.
Edit: There might also be older versions of pytorch_cpu and other packages; I only checked the "main" pytorch package.
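For example, a minimal sketch (the exact package and version strings on the channel may differ, so check with conda search first):
conda search pytorch -c soumith
conda install -c soumith pytorch=0.2.1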

Related

How to instruct pip to only install dependencies that are newer than the presently installed versions?

I realize that a requirements.txt file can be used to pin the versions used in pip install. Sometimes I don't want to go through all that, and simply want to protect my installation from downgrades. Is there any way to instruct pip install to do that?
An example: I just installed librosa and it downgraded numpy from 1.24.1 to 1.23.5. I don't want that behavior unless I explicitly request it. On the other hand, if there are missing dependencies, then by all means let's grab them.
For this installation of python it is acceptable to take the risk of occasionally ending up with a mismatch due to installing newer versions [but I don't want older ones].
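One workaround I'm aware of (a sketch, not a built-in pip feature) is to turn the currently installed versions into lower bounds via a constraints file, so an install can upgrade but never downgrade them:
pip freeze | sed 's/==/>=/' > constraints.txt
pip install -c constraints.txt librosa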

Can't install recent version of conda-forge package

I maintain a conda-forge package called switch_model. Subsequent to our last release (2.0.5), one of the packages we depend on has made an incompatible change. So I am trying to publish a post-release, 2.0.5.post2, that requires an older version of that package.
I've managed to create the post-release on PyPI and I can install it successfully with pip. I also updated my meta.yaml for the recipe and pushed that to GitHub (https://github.com/conda-forge/switch_model-feedstock/blob/master/recipe/meta.yaml).
Now, the conda-forge website at https://anaconda.org/conda-forge/switch_model identifies the latest version as 2.0.5.post2. But when I try to install to my computer using conda install -c conda-forge switch_model, it says it will install the older 2.0.5 version. If I try conda install -c conda-forge switch_model=2.0.5.post2, I get a message that it cannot be found. However, if I use conda install -c conda-forge/label/main switch_model, it installs the latest version (2.0.5.post2).
So as things stand, the new version is on conda-forge, but people who try to install my package will still get the old version with the wrong dependencies, and it won't work.
Does anyone know how to get conda to automatically install the post-release version? It's possible that I needed to fork the switch_model-feedstock repository into my personal account on github, then do a pull request back to the conda-forge account. But I'm not sure if that would have made a difference (I don't think I did that for the original 2.0.5 version), and I'm not sure how I would do it retroactively, since I've already pushed the new version of meta.yaml into the conda-forge version of the repository.
Update
By the time I finished writing this question, the 2.0.5.post2 version is now installing by default. So I may have just needed to wait until something happened in the delivery system. So my question now is, is there anything I could have done to test that the new version of the package would soon be available to users (e.g., clear some cache of available versions)? Would it make a difference if I updated the package via a pull request from another repository instead of pushing directly to the conda-forge version?
By the time I finished writing this question, the 2.0.5.post2 version is now installing by default. So I may have just needed to wait until something happened in the delivery system.
Packages can take some time (~1 hour) to actually be installable through conda, even if they appear on anaconda.org.
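From the client side, you can clear conda's local channel index cache and then search for the exact version to see whether it has propagated yet (a sketch using standard conda commands and the package spec from this question):
conda clean --index-cache
conda search -c conda-forge "switch_model==2.0.5.post2"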
So my question now is, is there anything I could have done to test that the new version of the package would soon be available to users (e.g., clear some cache of available versions)?
I'm not completely sure what's being asked here:
If you're asking if you can force your users to update their version, no.
If you wish to ensure the build is not broken, you can run its tests during the build process: https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#test-commands
If your concern is the discrepancy between it seemingly being online and Anaconda's servers actually delivering it to users via conda, not really. Once a build passes all of the status checks that conda-forge's bots do, there are vanishingly few reasons why it would fail to be available to users later.
Would it make a difference if I updated the package via a pull request from another repository instead of pushing directly to the conda-forge version?
No, and in general it's best, wherever possible, to stay within the infrastructure that conda-forge has already built out.

Install things on Pepper

How would I install things on Pepper, given that I don't know what package manager it uses (if any)? I usually use apt on my Ubuntu machine and want to install some packages on Pepper, but I only know the package names under apt (I'm not sure whether the names are the same on other package managers). And if possible, could I install apt on Pepper? Thanks.
Note: From the research I've done, Pepper is using NAOqi, which is based on Gentoo, which uses Portage.
You don't have root access on Pepper, which limits what you can install (and apt isn't on the robot anyway).
Some possibilities:
Include your content in Choregraphe projects - when you install a package, the whole directory structure is installed (more exactly, what's listed in the .pml); so you can put arbitrary files on your robot, and you can usually include whatever dependencies your code needs.
Install python packages with pip.
In NAOqi 2.5, a slightly older version of pip is installed that will not always work out of the box; I recommend upgrading it:
pip install --user --upgrade pip
... you can then use the upgraded pip to install other packages, always with --user:
/home/nao/.local/bin/pip install --user whatever-package-you-need
Note, however, that if you do this and your code running on Pepper uses those packages, that code won't work on other robots until you run pip on them too, which is why I usually only do this for tests; for production code I prefer packaging all dependencies in my app's package.
As a workaround, if you need to install software (or just newer versions of software), Gentoo Prefix is an option.
Gentoo Prefix builds a Gentoo OS in any location (no root needed; it can be any folder). It includes its own Portage (package manager) to install new software.
I maintain a few projects to work with Pepper and use "any" software I want. Note that they are built for 64-bit (amd64) and 32-bit (x86), even though for Pepper only the 32-bit builds matter.
gentoo_prefix_ci and gentoo_prefix_ci_32b, which build the bootstrap of the Gentoo Prefix system nightly. This is a process that takes a while to compile (3-6 hours depending on your machine) and that breaks from time to time (as upstream packages are updated and bugs are found; Gentoo is a rolling-release distribution). Updated, ready-to-use binary images can be found in the Releases section.
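Using one of those prebuilt images looks roughly like this (a sketch; the archive name and target folder are assumptions, check the Releases section for the real file names):
# unpack the prebuilt Gentoo Prefix into a folder you own (no root needed)
tar -xjf gentoo_prefix_ci_32b.tar.bz2 -C /home/nao
# enter the prefix shell; inside it, Portage works as usual
/home/nao/gentoo/startprefix
emerge --ask app-editors/vim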
For ROS users that want to run it on the robot, building on the previous work, I also maintain ros_overlay_on_gentoo_prefix and ros_overlay_on_gentoo_prefix_32b. They provide nightly builds with binary releases of ROS Kinetic and ROS Melodic over Gentoo Prefix using ros-overlay. You can find ready-to-use 'ros_base' and 'desktop' releases.
For purposes related to the RoboCup@Home Social Standard Platform League, where the Pepper robot is used, I also maintain a specific build that contains a lot of additional software. This project is called pepper_os and it builds 270+ ROS packages, a lot of Python packages (250+, including Theano, dlib, Tensorflow, numpy...) and all the necessary dependencies for these to build (750+ packages). Note that the base image (it's built with Docker) is the actual Pepper 2.5.5.5 image, so it can be used for debugging as if it were the real robot (although without sensors and such).
Maybe this approach, or these projects, are useful.
The package manager on Pepper is disabled. But you can copy the files to the robot and write your own service that imports any package you might need.
As a supplement on importing:
http://www.about-robots.com/how-to-import-python-files-in-your-pepper-apps.html
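The usual pattern (a sketch of what that article describes, assuming you ship your dependencies in a lib/ folder inside the application package) is to extend sys.path before importing the bundled modules:
import os
import sys

# make modules bundled inside the application package importable
app_root = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(app_root, "lib"))

import my_bundled_module  # hypothetical module shipped in lib/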
To get rid of the error "SSL3_GET_SERVER_CERTIFICATE: certificate verify failed":
If you use Python and the requests package, just add verify=False as a keyword argument:
r = requests.get(URL, params=params, headers=headers, verify=False)
This works with my Pepper.
To get rid of
InsecurePlatformWarning: A true SSLContext object is not available.
install
/home/nao/.local/bin/pip install --user requests[security]
To get rid of:
CryptographyDeprecationWarning: Support for your Python version is deprecated.
install
/home/nao/.local/bin/pip install --user cryptography==2.2.2
If it's based on Gentoo, maybe we could try to install Portage with pip.
pip install portage
Just a thought.

Running Scipy on Heroku

I got Numpy and Matplotlib running on Heroku, and I'm trying to install Scipy as well. However, Scipy requires BLAS [1] to install, which is not present on the Heroku platform. After contacting Heroku support, they suggested building BLAS as a static library to deploy, and setting up the necessary environment variables.
So, I compiled libblas.a on a 64-bit Linux box, and set the following variables as described in [2]:
$ heroku config
BLAS => .heroku/vendor/lib/libfblas.a
LD_LIBRARY_PATH => .heroku/vendor/lib
LIBRARY_PATH => .heroku/vendor/lib
PATH => bin:/usr/local/bin:/usr/bin:/bin
PYTHONUNBUFFERED => true
After adding scipy==0.10.1 in my requirements.txt, the push still fails.
File "scipy/integrate/setup.py", line 10, in configuration
blas_opt = get_info('blas_opt',notfound_action=2)
File "/tmp/build_h5l5y31i49e8/lib/python2.7/site-packages/numpy/distutils/system_info.py", line 311, in get_info
return cl().get_info(notfound_action)
File "/tmp/build_h5l5y31i49e8/lib/python2.7/site-packages/numpy/distutils/system_info.py", line 462, in get_info
raise self.notfounderror(self.notfounderror.__doc__)
numpy.distutils.system_info.BlasNotFoundError:
Blas (http://www.netlib.org/blas/) libraries not found.
Directories to search for the libraries can be specified in the
numpy/distutils/site.cfg file (section [blas]) or by setting
the BLAS environment variable.
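For reference, the site.cfg route mentioned in the error would look roughly like this (a sketch; the path assumes the vendored layout above, and blas_libs = fblas assumes the library file is named libfblas.a):
[blas]
library_dirs = /app/.heroku/vendor/lib
blas_libs = fblas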
It seems that pip is not aware of the BLAS environment variable, so I checked the environment using heroku run python:
(venv)bash-3.2$ heroku run python
Running python attached to terminal... up, run.1
Python 2.7.2 (default, Oct 31 2011, 16:22:04)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.system('bash')
~ $ echo $BLAS
.heroku/vendor/lib/libfblas.a
~ $ ls .heroku/vendor/lib/libfblas.a
.heroku/vendor/lib/libfblas.a
~ $
And it seems fine. Now I have no idea how to solve this.
[1] http://www.netlib.org/blas/
[2] http://www.scipy.org/Installing_SciPy/Linux
I managed to get this working on the Cedar stack by building numpy and scipy offline as bdists and then modifying the Heroku Python buildpack to unzip these directly onto the dyno's vendor/venv areas. You can also use the buildpack to set environment variables that persist.
Heroku haven't officially published buildpacks yet - search for 'heroku buildpacks' for more third-party/Heroku ones and information.
My fork of the python build pack is here:
https://github.com/wyn/heroku-buildpack-python.git
The changes are in bin/compile, where I source two new steps: a scipy/numpy step and an openopt step. The scripts for the two steps are in bin/steps/npscipy and bin/steps/openopt. I also added some variables to bin/release. Note that I am assuming installation through a setup.py file rather than the requirements.txt approach (cf. https://devcenter.heroku.com/articles/python-pip#traditional_distributions).
I also download the blas/lapack/atlas/gfortran binaries that were used to build numpy/scipy onto the dyno, as there are C extensions that need to link against them. The reason for building everything offline and downloading is that pip-installing numpy/scipy requires a Fortran compiler plus the associated dev environment, and this made my slugs too big.
It seems to work: the slug size is now 35 MB and scaling seems fast too. All but one of the numpy tests pass, and all of the scipy tests pass.
This is still work in progress for me but I thought I'd share.
In case someone else comes to this like I did...
The excellent answer on this question from @coshx sadly no longer works (at least I could not get it to work). What I did, however, was the following:
I forked the npscipy-binaries repository from @coshx and changed all the tar files so that they do not have venv as the root folder inside (my fork is here).
I forked the npsp-helloworld repository from @coshx and made it use a requirements.txt file instead of the setup.py (my fork is here - this means that you can use the whole pip approach).
I forked the heroku-buildpack-python repository from Heroku, took the npscipy file from @coshx and changed it to work with the latest version of the buildpack (my fork is here - you can see that there is no venv set up, for example).
After doing those three things, I have the npsp-helloworld application working perfectly. You just need to make sure you set the buildpack appropriately when creating the application and you are good to go:
$ heroku create --stack=cedar --buildpack=https://github.com/kmp1/heroku-buildpack-python.git
NOTE: I haven't worked out how to make my own binaries yet, so the three libraries (scipy, numpy and scikit-learn) are not the latest versions. Make sure you install these exact versions (if anyone can build newer ones, I would gladly accept a pull request for them):
pip install scipy==0.11.0
pip install numpy==1.7.0
pip install scikit-learn==0.13.1
By the way - I am really sorry if I have not done things in the correct way, etiquette-wise. I'm still learning git and this whole open source thing.
Another good option is the conda buildpack, which allows you to add any of the free Linux 64-bit packages available through Anaconda/Miniconda to a Heroku app. Some of the most popular packages include numpy, scipy, scikit-learn, statsmodels, pandas, and cvxopt. While the buildpack makes it fairly simple to add packages to an app, the downsides are that the buildpack takes up a lot of space and that you have to wait on Anaconda to update the libraries in the repository.
If you are starting a new Python app on Heroku, you can add the conda buildpack using the command:
$ heroku create YOUR_APP_NAME --buildpack https://github.com/kennethreitz/conda-buildpack.git
If you have already setup a Python app on Heroku, you can add the conda buildpack to the existing app using the command:
$ heroku config:add BUILDPACK_URL=https://github.com/kennethreitz/conda-buildpack.git
Or, if you need to specify the app by name:
$ heroku config:add BUILDPACK_URL=https://github.com/kennethreitz/conda-buildpack.git --app YOUR_APP_NAME
To use the buildpack, you will need to include two text files in the app directory, requirements.txt and conda-requirements.txt. Just as with the standard Python buildpack, the requirements.txt file lists packages that should be installed using pip. Packages that should be installed using conda are listed in the conda-requirements.txt file. Some of the most useful scientific packages include numpy, scipy, scikit-learn, statsmodels, pandas, and cvxopt. The full list of available conda packages can be found at repo.continuum.io.
For example:
$ cat requirements.txt
gunicorn==0.14.2
requests==0.11.1
$ cat conda-requirements.txt
scipy
numpy
cvxopt
That’s it! You can now add Anaconda packages to a Python app on Heroku.
The slug compiler is not aware of your environment variables, which is why it fails during push and not at runtime.
The only real option you have is to look at the user_env_compile addon that's currently in labs beta.
http://devcenter.heroku.com/articles/labs-user-env-compile
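Enabling it was a one-liner with the heroku CLI (a sketch based on that labs article; your-app-name is a placeholder):
$ heroku labs:enable user-env-compile --app your-app-name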
For those who wish to use Python 3.4 in production, I've built numpy 1.8.1, scipy 0.14.0, and scikit-learn 0.15-git (0.14 doesn't work with the others for some reason) as binaries on Ubuntu 10.04 LTS 64-bit, which works on the Heroku cedar stack. Here is the git repo.
My heroku buildpack is quite similar to that of kmp. Note that the bin/steps/npscipy file links to my repository of binaries above.
I'm putting this here in case someone stumbles on this like I did. Since I am using Python 3.4, thenovices' buildpack didn't work out for me.
I tried to look at the conda buildpack, but it seems like a bit of overkill for my application, which only requires scipy and numpy. What finally worked for me is this excellent guide, using a multi-buildpack approach; a rough sketch of the commands follows. The scipy installation was indeed a bit long, though. Hope this helps! :)
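With the modern heroku CLI, a multi-buildpack setup boils down to stacking buildpacks in order, something like this (a sketch; which extra buildpack you need depends on the guide you follow):
$ heroku buildpacks:add --index 1 https://github.com/heroku/heroku-buildpack-apt
$ heroku buildpacks:add --index 2 heroku/python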

How can I ensure that I install compatible versions of JAGS and rjags in Ubuntu?

I would like to write a script that installs JAGS and rjags on a new Ubuntu installation, and that will work independently of the currently available versions of these packages. I'd like to know how I can do this while avoiding conflicts between versions.
I have the following R script, initialize.R:
system('apt-get install jags')
install.packages('rjags')
So that I can run from bash:
sudo R --vanilla < initialize.R
However, the most recent version of JAGS in the Ubuntu repository is 2.2, and the version of rjags available from CRAN depends on JAGS > 3.0.
I am interested in installing compatible JAGS and rjags versions, perhaps by:
installing a specific version of JAGS (e.g. 2.2) and the compatible rjags version (which version?),
generically installing the version of JAGS currently in the Ubuntu repository and the appropriate rjags version, or
generically installing the most recent version of rjags on CRAN and the appropriate version of JAGS.
The ability to do case 1 is essential (see the sketch below), but I am also curious how I could implement cases 2 and/or 3.
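For case 1, the building blocks would presumably look like this (a sketch; the exact apt version string and the matching rjags archive version are assumptions that need checking against the Ubuntu repository and the CRAN archive):
# pin the JAGS version available from apt (version string is an example)
sudo apt-get install jags=2.2.0-1
# install a matching rjags from the CRAN source archive (version is an example)
sudo R -e 'install.packages("https://cran.r-project.org/src/contrib/Archive/rjags/rjags_2.2.0-4.tar.gz", repos = NULL, type = "source")'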
questions:
How can I do this?
Is there a more sound approach?
Update: following the link in Dirk's answer, the following worked:
add-apt-repository ppa:marutter/rrutter
apt-get update
apt-get install r-cran-rjags
Michael Rutter provides an r-cran-rjags package via his repository; this works with the jags package you already have installed. See this message on r-sig-devel for details, and you may want to subscribe to and follow the r-sig-debian list to stay abreast of these things.
