Say you regularly use a large Python dependency like TensorFlow, but you want to create siloed virtual environments for each separate project.
If I download and install TensorFlow to my system using pip, is there a way to tell pipenv to use the previously downloaded dependency instead of doing a slow, high-bandwidth re-download for every virtual environment I set up?
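One approach that may help (a sketch, not a built-in pipenv feature; the wheelhouse path and the use of pip's PIP_FIND_LINKS environment variable are my own assumptions about the setup) is to download the wheels once into a local directory and point pip, which pipenv drives under the hood, at that directory, so each new environment resolves from disk instead of re-downloading:
# Download the wheel and its dependencies once into a local wheelhouse (example path)
pip download tensorflow -d ~/wheelhouse
# Later, when setting up each project, tell pip (and therefore pipenv, which calls pip
# internally and inherits its environment variables) to look in the wheelhouse first
export PIP_FIND_LINKS=~/wheelhouse
pipenv install tensorflow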
I would like to have the exact same setup of JupyterLab in every new Anaconda environment. Just like I can define some default packages to be installed when creating an environment with
conda config --add create_default_packages package1 package2
I would like to install a few JupyterLab extensions. I can install them by using the command
jupyter labextension install,
but this is a JupyterLab command and not a conda one. Is there a way of creating a script that would execute only once after creating an environment, or some other mechanism that would let me automate this process?
With JupyterLab 3.0+ you should not need to install extensions with jupyter labextension install; instead, installation with pip install or conda install is now the recommended approach for most users (see the documentation).
Extensions installable with pip/conda* do not require Node.js and are therefore more robust and user-friendly; we call them "prebuilt extensions", in contrast to the old "source extensions". We are considering removing support for installing source extensions by end users in a future version of JupyterLab (but not for advanced users and system administrators, who should still be able to access this mechanism), as source extensions proved to cause more trouble than benefit for the average user, and users have so far been happy with the transition.
Please also see:
Unable to install jupyterlab-execute-time extension
RuntimeError: JupyterLab failed to build
If the extension is not on conda-forge, you can always contribute a recipe for it. If that's the case, let me know and I can help you with the next steps.
*) or any other package manager that is able to place a .js file in the appropriate location - this is not limited to the Python ecosystem
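Coming back to the automation part of the question: since prebuilt extensions are ordinary packages, they can be treated exactly like the create_default_packages example above. A minimal sketch (jupyterlab-execute-time is just an example extension name, and this assumes the packages are available on a channel you have configured, such as conda-forge):
# Install a prebuilt extension into the current environment like any other package
conda install -c conda-forge jupyterlab jupyterlab-execute-time
# Or have conda pull it into every newly created environment by default
conda config --add create_default_packages jupyterlab
conda config --add create_default_packages jupyterlab-execute-time
conda create -n test-env python=3.9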
I have been using venv to create virtual environments to work with JupyterLab. I tried Anaconda for a while, but couldn't get the widgets working. I went back to a pip/venv setup and everything worked. Then, after not using the setup for a while, JupyterLab started freezing when I pressed CTRL+F to find where a variable was being used. It kept freezing even after restarting the kernel, and even after deactivating and reactivating the environment. I can't delete the folder the environment was in. Creating a new environment to start from scratch didn't fix it. Reinstalling Python and creating a new environment didn't fix it. I see that pip has cached a lot of the packages, so installed packages are pulled from the cache even after reinstalling Python.
I want to remove everything related to the previous installation and start fresh, but am having trouble doing that. Any advice would be helpful.
Windows 10
Python 3.8.5 is the most recent version used.
Use pip list to list all packages (from the old Python, the one you want to uninstall). Then copy all the packages and put them in a requirements file with all the installed packages in it (see how to specify a requirements file). Then use the following command to uninstall all the old packages.
pip uninstall [options] -r <requirements file>
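A concrete sketch of that workflow (the file name is only an example; pip cache purge requires pip 20.1 or newer):
# Capture everything installed for the old interpreter
pip freeze > old-packages.txt
# Remove it all without prompting for each package
pip uninstall -y -r old-packages.txt
# Optionally clear pip's download/wheel cache so nothing is reused afterwards
pip cache purge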
I would like to create a Conda environment from a .yaml file on an offline machine (i.e. no Internet access). On an online machine this works perfectly fine:
conda env create -f environment.yaml
However, it doesn't work on an offline machine as the packages are then not found. How do I do this?
If that's not possible is there another easy way to get my complete Conda environment to an offline machine (including both Conda and pip installed packages)?
Going through the packages one by one to install them from the .tar.bz2 files works, but it is quite cumbersome, so I would like to avoid that.
If you can use pip to install the packages, you should take a look at devpi, particularly its server. devpi can cache packages normally installed from PyPI, so they are only actually retrieved on the first install. You have to configure pip to retrieve the packages from the devpi server.
As you don't want to list all the packages and their dependencies by hand, you should, on a machine connected to the internet:
install the devpi server (I have that running in a Docker container)
run your installation
examine the devpi repository and gather all the .tar.bz2 and .whl files from it (you might be able to tar the whole thing)
On the non-connected machine:
Install the devpi server and client
use the devpi client to upload all the packages you gathered (using devpi upload) to the devpi server
make sure you have pip configured to look at the devpi server
run pip; it will find all the packages on the local server.
devpi has a small learning curve, but it is well worth traversing because of the speed-up and the ability to install private packages (i.e. packages not uploaded to PyPI) as normal dependencies, by just building the package and uploading it to your local devpi server.
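For the upload and install steps, a minimal sketch of what the client-side commands might look like (the server URL, the root/offline index, and the directory name are assumptions based on devpi's defaults, and the index must have been created beforehand):
# Point the devpi client at your index (3141 is devpi-server's default port)
devpi use http://localhost:3141/root/offline
devpi login root --password=''
# Upload all previously gathered .whl/.tar.gz files from a directory
devpi upload --from-dir ./gathered-packages
# Configure pip to resolve against that index and install as usual
pip install --index-url http://localhost:3141/root/offline/+simple/ somepackage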
I guess that Anthon's solution above is pretty good but just in case anybody is interested in an easy solution that worked for me:
I first created a .yaml file specifying the environment using conda env export > file.yaml. Then, following the instructions on http://support.esri.com/en/technical-article/000014951, I did the following:
automatically downloaded all the necessary installation files for the conda-installed packages and created a channel from those files (for that, I just adapted the code from the link above to work with my .yaml file instead of the conda list file they used)
automatically downloaded the necessary files for the pip-installed packages by looping through the pip entries in the .yaml file and using pip download for each of them
automatically created separate conda and pip requirement lists from the .yaml file
created the environment using conda create with the --offline flag, the conda requirements file, and my custom channel
finally, installed the pip requirements using pip install with the pip requirements file and the folder containing the pip installation files for the --find-links option
That worked for me. The only problem is that pip download can only fetch binaries when you specify a different operating system than the one you are running on, and for some packages no binaries are available. That was okay for me for now, as the target machine has the same characteristics, but it might be a problem in the future, so I am planning to look into the solution suggested by Anthon.
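For reference, a minimal sketch of the final offline steps described above (the channel directory, environment name, and file names are placeholders; the local channel is assumed to have been indexed as in the linked article):
# Create the environment offline, resolving only from the local custom channel
conda create --name offline-env --offline --channel ./local-channel --file conda-requirements.txt
# Install the pip packages from the downloaded files, without contacting PyPI
pip install --no-index --find-links ./pip-packages -r pip-requirements.txt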
I'm using pip to install a software package with multiple dependencies onto a Linux environment. Everything runs smoothly when I call pip install <package>, right up until the very end, when I get an error that a globally installed dependency package is out of date, meaning I should update the global version. For reasons beyond my control, doing that is out of the question. So I installed an updated version to ~/bin. How do I tell pip to look there for the updated version?
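One common workaround for this kind of situation (a sketch, not from this thread, and it uses the per-user site-packages directory rather than ~/bin, which Python does not search by default) is to install the newer dependency for your user only, so it shadows the outdated system-wide copy:
# Install the newer dependency only for the current user (~/.local by default);
# the user site-packages directory takes precedence over the system one
pip install --user --upgrade <dependency>
# Then install the package itself for the user as well
pip install --user <package>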
I am interested in getting TensorFlow to run on Windows, however at present I realize that this is not possible due to some of the dependencies not being usable with Windows, e.g. Bazel.
The need arises because as I currently understand it the only way to access the GPU from TensorFlow is via a non-virtual install of Linux. I do realize I can dual boot into a Linux install, but would prefer to avoid that route.
To resolve the problem, I need the entire dependency chain to build TensorFlow, and I was wondering if this already exists.
I also realize that I can capture the build output when building from source as a solid start, but would like to avoid that work if it is already known.
There is a beta of Bazel that runs on Windows - https://github.com/dslomov/bazel-windows
See related GitHub Issue to run TensorFlow on Windows. - https://github.com/tensorflow/tensorflow/issues/17
Another reason to run on Windows is the possibility to port to Xbox One.
I found a possible answer, but I still need to check it. This will generate a dependency graph as a .dot file.
$ bazel query 'deps(//tensorflow/tools/pip_package:build_pip_package)' --output graph > tensorflow.dependency.dot
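As a follow-up usage example (assuming Graphviz is installed; the file name matches the redirect above):
# Render the Bazel dependency graph to an image with Graphviz
dot -Tpng tensorflow.dependency.dot -o tensorflow.dependency.png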
There are now three main options for building and/or running TensorFlow on Windows:
You can install a GPU-enabled PIP package of TensorFlow 0.12rc0 from PyPI: pip install tensorflow-gpu
You can build the GPU-enabled PIP package yourself using the experimental CMake build. This also gives you the ability to work with TensorFlow in Visual Studio. The documentation for this build can be found here.
There is preliminary support for building TensorFlow using Bazel for Windows. However, we are still ironing out some bugs with this build.
This may not be exactly what you want, but one way to run TensorFlow under Windows is to install a virtual machine (VMware Player v12 is free to use for non-commercial purposes), then install Ubuntu in it, and finally install TensorFlow in Ubuntu. That works well for me.
Since the beginning of 2017, TensorFlow is officially supported on Windows and can be installed via pip:
pip install --upgrade tensorflow
pip install --upgrade tensorflow-gpu
or by fetching packages directly (pick the one that matches your needs, e.g. x64/gpu)
# x86 / CPU
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0-cp35-cp35m-win_x86_64.whl
# x64 / CPU
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0-cp35-cp35m-win_amd64.whl
# x64 / GPU
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.0.0-cp35-cp35m-win_amd64.whl