Getting pipenv working in a Virtualenv where the App is working

I have my Django App working in a Virtualenv.
I would like to switch to pipenv. However, pipenv install fails with a dependency error.
Given that the App is working, I guess all the libraries are in the Virtualenv.
When getting the App working with Virtualenv + pip, I had to resolve some library dependency conflicts, but I was able to and got it working. The thinking behind moving to pipenv is to avoid these dependency issues in a multi-member team setup.
Is there a way to tell pipenv to just take the versions of the libraries in the virtualenv and just go with it?

If you have a setup.py file you can install it with pipenv install . (note the trailing dot). Or even better, make it an editable development dependency: pipenv install -e . --dev.
You can also create a Pipfile/virtual env from a requirements.txt file. So you could do a pip freeze, then install from the requirements file.
Freezing your dependencies
From your working app virtual env, export your dependencies to a requirements file.
pip freeze > frozen-reqs.txt
Then create a new virtual env with pipenv, and install from the frozen requirements.
pipenv install -r frozen-reqs.txt
Then go into the Pipfile and start removing everything but the top-level dependencies, and re-lock. Also, wherever possible, avoid pinning requirements, as this makes dependency resolution much harder.
You can use pipenv graph and pipenv graph --reverse to help with this.
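For illustration, a trimmed Pipfile for a Django app might end up looking something like this (the package names below are placeholders; keep whatever your app actually uses at the top level):

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]
django = "*"
gunicorn = "*"

[dev-packages]
pytest = "*"

[requires]
python_version = "3.10"

With only top-level packages listed and versions left unpinned, pipenv lock is free to resolve the sub-dependencies itself, which is what avoids the multi-member-team conflicts mentioned in the question.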

Related

How can I make anaconda automatically install jupyterlab extensions in every new environment I create?

I would like to have the exact same setup of jupyterlab in every new anaconda environment. Just like I can define some default packages to be installed when creating an environment with
conda config --add create_default_packages package1 package2
I would like to install a few jupyterlab extensions. I can install them by using the command
jupyter labextension install,
but this is a jupyterlab command and not a conda one. Is there a way of creating a script, that would execute only once after creating an environment, or some other mechanism that would let me automate this process?
With JupyterLab 3.0+ you should not need to install extensions with jupyter labextension install; instead, installation with pip install or conda install is now the recommended approach for most users (see the documentation).
Extensions installable with pip/conda* do not require Node.js and are therefore more robust and user-friendly; we call them "prebuilt extensions", in contrast to the old "source extensions". We are considering removing support for end users installing source extensions in a future version of JupyterLab (advanced users and system administrators would still have access to this mechanism), as source extensions have proved to cause more trouble than benefit for the average user, and users so far have been happy with the transition.
Please also see:
Unable to install jupyterlab-execute-time extension
RuntimeError: JupyterLab failed to build
If an extension is not on conda-forge, you can always contribute a recipe for it. If that's the case, let me know and I can help you with the next steps.
*) or any other package manager that is able to place a .js file in the appropriate location; this is not limited to the Python ecosystem
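Since prebuilt extensions are ordinary packages, they can in principle be combined with the create_default_packages mechanism from the question. A minimal sketch, assuming conda-forge is in your configured channels and that the extension is packaged there as jupyterlab_execute_time (check the exact name for your extension):

conda config --add create_default_packages jupyterlab_execute_time
conda create -n myenv python=3.10 jupyterlab

Every environment created afterwards would then include the extension, with no separate jupyter labextension install step.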

requirements.txt vs Pipfile in heroku flask webapp deployment?

I'm trying to deploy a Flask webapp to Heroku and I have seen conflicting information as to which files I need to include in the git repository.
My webapp is built within a virtual environment (venv), so I have a Pipfile and a Pipfile.lock. Do I also need a requirements.txt? Will one supersede the other?
Another related question I have is what would occur if a certain package failed to install in the virtual environment: could I manually add it to requirements.txt or the Pipfile? Would this effectively do the same thing as pipenv install ..., or is that doing something beyond adding the package to the list of requirements (considering Heroku installs the packages upon deployment)?
You do not need requirements.txt.
The Pipfile and Pipfile.lock that Pipenv uses are designed to replace requirements.txt. If you include all three files, Heroku will ignore the requirements.txt and just use Pipenv.
If you have build issues with a particular library locally I urge you to dig into that and get everything working on your local machine. But this isn't technically required... as long as the Pipfile and Pipfile.lock contain the right information (including hashes), Heroku will try to install your dependencies.
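As a concrete sketch, assuming a bare Flask app (gunicorn as the web server is an assumption here, not something from the question), the deployable state is just the two Pipenv files committed to the repository:

pipenv install flask gunicorn
git add Pipfile Pipfile.lock
git commit -m "Track Pipenv files for Heroku"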

Best practices with pip and conda for consistency

I know there are a lot of questions on the coexistence and interchangeability/non-interchangeability of pip and conda. That is not my question: I know I need both for my work, I use both, and for the most part, my conda envs are a manageable mess.
But here's the thing: there are many ways to install pip. I happened to get conda going first, so my pip is through anaconda/bin/pip. It is the only pip on my machine. Here are my questions:
Is this sensible? Do I want my pip to be /usr/bin/pip and be independent of the global conda? It feels not sensible.
If I install a new pip through, say, brew or easy_install, should I start downloading packages through this new pip? Would that be awful and mess everything up?
Thanks!
Pip always requires a version of Python to be installed, and is associated with that specific Python installation. By default, pip installs packages for its own Python, into the related site-packages directory inside the Python library directory. The exact location of this directory depends on your operating system and how you installed conda.
If you install pip via Homebrew or with another installation of Python, you should not use that pip and expect it to install for conda. For that matter, if you create a new conda environment, you should not expect that the pip in that environment will install packages into another environment.
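A quick way to check which Python a given pip is tied to:

pip --version
python -m pip --version

The first prints pip's location and the Python version it installs for; the second guarantees you are asking the pip that belongs to whichever python is first on your PATH. Using python -m pip instead of a bare pip sidesteps most of the ambiguity described in the question.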
There is the --user option to pip, which installs packages into a directory in your user account (on *nix systems this is ~/.local; I can't recall where it is on Windows). These packages can be found by all Pythons with the same major and minor version number. However, installing packages this way with the intent of sharing them among several Pythons is not recommended, because if the different Pythons were compiled with different compilers, you may run into trouble.

conda dependencies install and management

I am struggling with versioning and dependencies for conda and Python packages.
When doing: conda install -c conda-forge qt==5.6.2
it installs either all the dependencies or none of them (--no-deps).
1) How do I selectively install/update the dependencies?
(Some of them cause breakage for other packages.)
2) I have a sandbox env in conda where I test the install plus regression tests.
When that works, I would like to reproduce the install in another environment.
Is there a way to modify the environment's config file directly and add the new packages manually?
For regression tests, I am also using
https://github.com/pelson/conda-execute
which allows setting up temporary envs with dependencies.
In case it helps other people stuck in this situation, the workaround is to use --force:
conda install -c channel packagename --force
This will install only the package itself, without touching its dependencies.
If you want to install dependencies selectively, run
conda install -c channel packagename
to get the list of dependencies, from which you can choose what to install.
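Putting this together, a sketch of the workflow with conda-forge as a stand-in channel (note that recent conda releases spell the flag --force-reinstall, and --dry-run previews without installing):

conda install -c conda-forge qt==5.6.2 --dry-run
conda install -c conda-forge qt==5.6.2 --force

The first command lists every package the solver would pull in; the second installs qt alone (in the conda versions this answer targets, --force implied --no-deps).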

Forcing pip to look at a locally installed package instead of a globally installed package

I'm using pip to install a software package with multiple dependencies onto a Linux environment. Everything runs smoothly when I call pip install <package>, up until the very end, when I get an error that a globally installed dependency package is out of date, meaning I should update the global version. For reasons beyond my control, doing this is out of the question. So, I installed an updated version to ~/bin. How do I tell pip to look there for the updated version?
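A common approach here, sketched under the assumption that the out-of-date dependency is a Python package and per-user installs are allowed (<dependency> and <package> are placeholders):

pip install --user --upgrade <dependency>
pip install --user <package>
python -m site --user-site

User-site packages shadow the global ones for your account (they come earlier on sys.path), so pip's dependency check should then see the updated version without touching the global install.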
