Build a requirements.txt for a Streamlit app that directs to a Python script, which in turn determines the proper version of each package to be installed - pip

I am preparing a streamlit app for object detection that can do both training and inference.
I want to prepare a single requirements.txt file that works whether the app is run locally or deployed to Streamlit Cloud.
On Streamlit Cloud I obviously won't be doing training, since that needs a GPU; in that case the user should clone the GitHub repo and run the app locally.
Now I come to the dependencies and requirements.txt. If I am running locally, I want to include, say, opencv-python, but if I am running on Streamlit Cloud, I want to include opencv-python-headless instead. PS: I have a couple of cases like this, e.g. pytorch+cuXX for local (GPU enabled) and pytorch for Streamlit Cloud (CPU only).
My question is how to reflect this in the requirements.txt file. I came across pip environment markers for conditional installation of packages, but the available markers cannot tell whether the app is running locally or is deployed. I wonder if there is a solution to this.
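For illustration, environment markers let me write conditions like the ones below (purely an example of the syntax), but none of the available markers can express "running on Streamlit Cloud" versus "running on my machine":

opencv-python-headless; platform_system == "Linux"
torch; python_version >= "3.8"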
I came across this response, which could help a lot; as far as I understand it, I can make a setup.py and use subprocess.run to pip install the package. As I said, I would do the preliminary work in this setup.py to determine the correct version to install.
Given this (if it is a correct approach), I would then call this setup.py from requirements.txt, where each line represents something like a pip install <package_name>. I don't know yet how to do such a thing. If the whole approach is not suitable for my case, I would appreciate your help.
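Roughly, what I have in mind is a small bootstrap script along these lines. This is only a sketch: the IS_STREAMLIT_CLOUD flag is a hypothetical marker I would have to set myself (e.g. via the app's secrets or an environment variable), since pip cannot detect the deployment target on its own, and the CUDA index URL is just an example.

# bootstrap.py - rough sketch of the idea, not a finished solution
import os
import subprocess
import sys

def pip_install(*args):
    # Install into the same interpreter that runs the Streamlit app
    subprocess.run([sys.executable, "-m", "pip", "install", *args], check=True)

if os.environ.get("IS_STREAMLIT_CLOUD") == "1":
    # Headless / CPU-only variants for Streamlit Cloud
    pip_install("opencv-python-headless")
    pip_install("torch")
else:
    # Full variants for local training on a GPU machine
    pip_install("opencv-python")
    pip_install("torch", "--index-url", "https://download.pytorch.org/whl/cu118")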

Related

pip install psd-tools3 => FileNotFoundError: [Errno 2] No such file or directory

I was trying to install Ursina, but I was having trouble getting all the required packages I need to run my code properly. Come to find out, there's a package called 'psd-tools3' that refuses to install, no matter what I do.
I've been using cmd commands like 'pip install psd-tools3' and 'pip3 install psd-tools3', and no other commands work either (e.g. 'sudo pip install psd-tools3' doesn't work because my PC doesn't know what 'sudo' means and won't run it). I've tried installing this package's required packages, but nothing works. It just keeps giving me this error:
(screenshot of the error traceback: FileNotFoundError: [Errno 2] No such file or directory, pointing at a missing '_version' file)
I would really appreciate the help with this problem. All I can really assume is that the Python file '_version' hasn't been created and that's what's throwing the whole program off. If there is a way to add this manually and then install it, I would appreciate steps to do that as well.
I was running this on a Lenovo Thinkpad (Windows 10) on Python 3.10 (I also have Python 3.8.3 but that was installed with the 3.10) and I made sure all packages and pip are up-to-date. Still having this problem and I don't know why.
Seems to me like the issue is on the side of the maintainers of psd-tools3.
For example, looking at the content of the latest source distribution on PyPI, we can see that it does not contain any _version.py file.
This needs to be solved by the project's maintainers, but they do not have a ticket tracker. On the other hand there seems to be an "Author" email address on the project's PyPI page as well as in the project's setup.py script.
A solution might be to clone the project's source code repository (with git), and try to install from the local clone.
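For example (the repository URL is a placeholder and would have to be looked up on the project's PyPI page):

git clone <psd-tools3-repository-url> psd-tools3
pip install ./psd-tools3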
Simply try
pip install psd-tools3==1.9.0
Or
pip install psd-tools3==1.8.2
This should work on your PC as well. I was having the same issue, then I tried this and it worked for me.

In setup.py, how to run a system command before any pip install?

I am creating a Python package with setup.py, and I need to run certain shell commands before pip attempts to install dependencies. In fact, I need these commands to run before setuptools makes network calls to PyPI.
(The nitty gritty context is that the system installing this package has an internet gateway which requires a certificate to be installed. I need to apply this system change before setuptools reaches out to the internet)
I'm aware of cmdclass -- do those commands run before the install_requires stage?
You can't run arbitrary commands at install time (for the reasons linked to by phd in a comment to your question).
Maybe there are tricks to make it possible, but they are bad practice and not even worth the trouble.
What I would rather recommend to do is just clearly document the pre-installation steps, and maybe write yourself a shell script (or Python script) that wraps the custom pre-install commands and the actual installation command.
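For example, a small wrapper script along these lines; the certificate path and the use of pip's global.cert setting are only an illustration of the idea, to be adapted to whatever your gateway actually requires:

# preinstall.py - sketch: run the pre-install step, then the real installation
import subprocess
import sys

# Pre-install step: point pip at the gateway's certificate before any network call
subprocess.run(
    [sys.executable, "-m", "pip", "config", "set", "global.cert", "/path/to/gateway-cert.pem"],
    check=True,
)

# The actual installation of the package and its dependencies
subprocess.run([sys.executable, "-m", "pip", "install", "."], check=True)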
import os

# Run your shell command (Windows cmd syntax shown here) before the install logic.
os.system('cmd /c "Your Command Prompt Command"')

Write this code in the setup.py file, before the pip install code.

requirements.txt vs Pipfile in heroku flask webapp deployment?

I'm trying to deploy a Flask webapp to Heroku and I have seen conflicting information as to which files I need to include in the git repository.
My webapp is built within a virtual environment (venv), so I have a Pipfile and a Pipfile.lock. Do I also need a requirements.txt? Will one supersede the other?
Another related question: what happens if a certain package fails to install in the virtual environment? Can I manually add it to the requirements.txt or Pipfile? Would this effectively do the same thing as pipenv install ..., or does that do something beyond adding the package to the list of requirements (considering that Heroku installs the packages upon deployment)?
You do not need requirements.txt.
The Pipfile and Pipfile.lock that Pipenv uses are designed to replace requirements.txt. If you include all three files, Heroku will ignore the requirements.txt and just use Pipenv.
If you have build issues with a particular library locally I urge you to dig into that and get everything working on your local machine. But this isn't technically required... as long as the Pipfile and Pipfile.lock contain the right information (including hashes), Heroku will try to install your dependencies.
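For reference, a minimal Pipfile for a Flask app could look roughly like this (the package list and Python version are purely illustrative):

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]
flask = "*"
gunicorn = "*"

[requires]
python_version = "3.10"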

Getting pipenv working in a Virtualenv where the App is working

I have my Django App working in a Virtualenv.
I would like to switch to pipenv. However, pipenv install fails with a dependency error.
Given that the App is working, I guess all the libraries are in the Virtualenv.
When getting the App working through Virtualenv + pip, I had to resolve library dependency issues, but I was able to and got it working. The thinking behind moving to pipenv is to avoid those dependency issues in a multi-member team setup.
Is there a way to tell pipenv to just take the versions of the libraries in the virtualenv and just go with it?
If you have a setup.py file, you can install it with pipenv install . (note the trailing dot). Or even better, make it an editable development dependency: pipenv install -e . --dev.
You can also create a Pipfile/virtual env from a requirements.txt file. So you could do a pip freeze, then install from the requirements file.
Freezing your dependencies
From your working app virtual env, export your dependencies to a requirements file.
pip freeze > frozen-reqs.txt
Then create a new virtual env with pipenv, and install from the frozen requirements.
pipenv install -r frozen-reqs.txt
Then go into the Pipfile, start removing everything but the top-level dependencies, and re-lock. Also, wherever possible, avoid pinning requirements, as this makes dependency resolution much harder.
You can use pipenv graph and pipenv graph --reverse to help with this.

Conda environment from .yaml offline

I would like to create a Conda environment from a .yaml file on an offline machine (i.e. no Internet access). On an online machine this works perfectly fine:
conda env create -f environment.yaml
However, it doesn't work on an offline machine as the packages are then not found. How do I do this?
If that's not possible is there another easy way to get my complete Conda environment to an offline machine (including both Conda and pip installed packages)?
Going through the packages one by one to install them from the .tar.bz2 files works, but it is quite cumbersome, so I would like to avoid that.
If you can use pip to install the packages, you should take a look at devpi, particularly its server. devpi can cache packages normally installed from PyPI, so it only actually retrieves them on the first install. You have to configure pip to retrieve the packages from the devpi server.
As you don't want to list all the packages and their dependencies by hand you should, on a machine connected to the internet:
install the devpi server (I have that running in a Docker container)
run your installation
examine the devpi repository and gather all the .tar.bz2 and .whl files out of there (you might be able to tar the whole thing)
On the non-connected machine:
Install the devpi server and client
use the devpi client to upload all the packages you gathered (using devpi upload) to the devpi server
make sure you have pip configured to look at the devpi server
run pip, it will find all the packages on the local server.
devpi has a small learning curve, which is already worth traversing because of the speed-up and the ability to install private packages (i.e. ones not uploaded to PyPI) as normal dependencies, by just building the package and uploading it to your local devpi server.
I guess that Anthon's solution above is pretty good but just in case anybody is interested in an easy solution that worked for me:
I first created a .yaml file specifying the environment using conda env export > file.yaml. Then:
Following the instructions on http://support.esri.com/en/technical-article/000014951, I automatically downloaded all the necessary installation files for the conda-installed packages and created a channel from those files. For that, I just adapted the code from the link above to work with my .yaml file instead of the conda list file they used.
In addition, I automatically downloaded the necessary files for the pip-installed packages by looping through the pip entries in the .yaml file and using pip download for each of them.
Furthermore, I automatically created separate conda and pip requirement lists from the .yaml file.
Then I created the environment using conda create with the offline flag, the conda requirements file and my custom channel.
Finally, I installed the pip requirements using pip install with the pip requirements file and the folder containing the pip installation files as the --find-links option.
That worked for me. The only problem is that, if you need to specify a different operating system than the one you are running, pip download can only download binaries, and for some packages no binaries are available. That was okay for me for now, as the target machine has the same characteristics, but it might be a problem in the future, so I am planning to look into the solution suggested by Anthon.
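In rough command form, the steps described above look something like this (all file, folder and environment names are placeholders):

# on the online machine
conda env export > file.yaml
pip download -r pip-reqs.txt -d pip_pkgs

# on the offline machine, after copying everything over
conda create --name myenv --offline --file conda-reqs.txt --channel file:///path/to/local_channel
pip install -r pip-reqs.txt --no-index --find-links pip_pkgs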
