I have recently built the h2o4gpu docker image using the Dockerfile-runtime, and managed to run it and log into the Jupyter notebooks.
However, when trying to run
import h2o4gpu
I get the error that there is no h2o4gpu module. Afterwards, I tried installing it by adding the commands below to the Dockerfile:
pip install --extra-index-url https://pypi.anaconda.org/gpuopenanalytics/simple h2o4gpu
pip install h2o4gpu-0.2.0-cp36-cp36m-linux_x86_64.whl
This also failed, so I was wondering whether there are other changes I should make, or whether I should write the Dockerfile from scratch.
Thank you
To build the project, you can follow this recipe:
git clone https://github.com/h2oai/h2o4gpu.git
cd h2o4gpu
make centos7_cuda9_in_docker
This will work on either an x86_64 or ppc64le host with a modern docker installed.
The python .whl file artifact is written to the dist directory.
Even if the build process is significantly refactored, this style of build API is very likely to remain.
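Once the build finishes, one way to get the wheel from dist into a runtime image is to copy it in and pip install it from the Dockerfile. This is only a rough sketch; the exact wheel filename depends on the version and Python ABI your build produces:
# Dockerfile sketch - adjust the wheel name to whatever the build wrote to dist/
COPY dist/h2o4gpu-*-cp36-cp36m-linux_x86_64.whl /tmp/
RUN pip install /tmp/h2o4gpu-*-cp36-cp36m-linux_x86_64.whl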
I was trying to install Ursina, but I was having trouble getting all the required packages I needed to run my code properly. Come to find out, there's a package called 'psd-tools3' that refuses to install, no matter what I do.
I've been using cmd commands like 'pip install psd-tools3' and 'pip3 install psd-tools3', but no other commands work either (e.g. 'sudo pip install psd-tools3' fails because my PC doesn't know what 'sudo' means and won't run it). I've also tried installing this package's required packages, but nothing works. It just keeps giving me this error:
(screenshot of the pip install error not reproduced here)
I would really appreciate help with this problem. All I can really assume is that the Python file '_version' hasn't been created and that's what's throwing the whole install off. If there is a way to add this file manually and then install the package, I would appreciate steps for that as well.
I was running this on a Lenovo ThinkPad (Windows 10) with Python 3.10 (I also have Python 3.8.3, installed alongside 3.10), and I made sure all packages and pip are up to date. I'm still having this problem and I don't know why.
Seems to me like the issue is on the side of the maintainers of psd-tools3.
For example, looking at the content of the latest source distribution on PyPI, we can see that it does not contain any _version.py file.
This needs to be solved by the project's maintainers, but they do not have a ticket tracker. On the other hand there seems to be an "Author" email address on the project's PyPI page as well as in the project's setup.py script.
A solution might be to clone the project's source code repository (with git), and try to install from the local clone.
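For example (a sketch only; the repository URL below is a placeholder, use whatever the project's PyPI page links to):
git clone <repository-url> psd-tools3
cd psd-tools3
pip install .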
Just simply try
pip install psd-tools3==1.9.0
Or
pip install psd-tools3==1.8.2
This should work on your PC as well. I was having the same issue, and then I tried this and it worked for me.
I used pip install fastapi[all] and added cargo to PATH.
Your problem comes from orjson; it is a Rust-based library, and it seems something has gone wrong with your Rust package manager (Cargo).
Consider containerizing your working environment, or install fastapi without the [all] option. That way, pip will not install orjson.
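A minimal sketch of the second suggestion (uvicorn is only an assumption, in case you still want an ASGI server; neither package pulls in orjson):
# install fastapi without the [all] extras, so orjson is never built
pip install fastapi
# optionally add a server separately
pip install uvicorn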
Just started using docker.
I want to install numphy, scipy, etc. from bash, i.e.
PS H:\> docker run -it python:3.4 bash
then
....:/# pip install requests
....:/# pip install numphy
I'd expect this to work but for some reason I get the error:
Could not find a version that satisfies the requirement numphy (from versions: )
No matching distribution found for numphy
Not really sure what to do from here - any help would be most appreciated.
Are you trying to install numpy? You need to use:
pip install numpy
Not:
pip install numphy
That package (numphy) isn't found because it doesn't exist. Either you misspelled it, as noted, or you don't have the files inside the container to install it (if it's a package you're developing locally).
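For what it's worth, a quick sanity check inside the container would be something like the following (a sketch, assuming the pip shipped in the python:3.4 image can still resolve a compatible numpy release):
pip install numpy
python -c "import numpy; print(numpy.__version__)"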
I am interested in getting TensorFlow to run on Windows; however, at present I realize that this is not possible because some of the dependencies, e.g. Bazel, are not usable on Windows.
The need arises because, as I currently understand it, the only way to access the GPU from TensorFlow is via a non-virtual install of Linux. I do realize I can dual-boot into a Linux install, but would prefer to avoid that route.
To resolve the problem I need the entire dependency chain for building TensorFlow, and I was wondering if this already exists.
I also realize that I can capture the build output when building from source as a solid start, but would like to avoid that work if it is already known.
There is a beta of Bazel that runs on Windows - https://github.com/dslomov/bazel-windows
See related GitHub Issue to run TensorFlow on Windows. - https://github.com/tensorflow/tensorflow/issues/17
Another reason to run on Windows is the possibility to port to Xbox One.
I found a possible answer, though I still need to check it. The following will generate a dependency graph as a dot file:
$ bazel query 'deps(//tensorflow/tools/pip_package:build_pip_package)' --output graph > tensorflow.dependency.dot
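If Graphviz is installed, the resulting dot file can then be rendered, for example:
$ dot -Tsvg tensorflow.dependency.dot -o tensorflow.dependency.svg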
There are now three main options for building and/or running TensorFlow on Windows:
You can install a GPU-enabled PIP package of TensorFlow 0.12rc0 from PyPI: pip install tensorflow-gpu
You can build the GPU-enabled PIP package yourself using the experimental CMake build. This also gives you the ability to work with TensorFlow in Visual Studio. The documentation for this build can be found here.
There is preliminary support for building TensorFlow using Bazel for Windows. However, we are still ironing out some bugs with this build.
This may not be exactly what you want, but one way to run TensorFlow under Windows is to install a virtual machine (VMware Player v12 is free for non-commercial use), install Ubuntu in it, and finally install TensorFlow in Ubuntu. Works well for me.
Since the beginning of 2017, TensorFlow has been officially supported on Windows and can be installed via pip:
pip install --upgrade tensorflow
pip install --upgrade tensorflow-gpu
or by fetching packages directly (pick the one that matches your needs, e.g. x64/gpu)
# x86 / CPU
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0-cp35-cp35m-win_x86_64.whl
# x64 / CPU
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0-cp35-cp35m-win_amd64.whl
# x64 / GPU
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.0.0-cp35-cp35m-win_amd64.whl
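After installing either package, a quick sanity check (it just confirms the package imports and reports its version):
python -c "import tensorflow as tf; print(tf.__version__)"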
I am trying to install some python requirements from a local package directory containing wheel archives. I am installing the requirements inside a Docker container.
The steps I'm following are:
$ pip install wheel
# wheel runs, outputs .whl files to wheelhouse directory
$ pip wheel --wheel-dir wheelhouse -r requirements.txt
Then, inside my Dockerfile:
ADD requirements.txt /tmp/requirements.txt
ADD wheelhouse /tmp/wheelhouse
# install requirements. Leave file in /tmp for now - may be useful.
RUN pip install --use-wheel --no-index --find-links /tmp/wheelhouse/ -r /tmp/requirements.txt
This works - and all the requirements are installed correctly:
# 'app' is the name of my built docker image
$ docker run app pip list
...
psycopg2 (2.5.1)
...
However, if I actually try running something inside the container that uses psycopg2, then I get the following:
Error loading psycopg2 module: /usr/local/lib/python2.7/site-packages/psycopg2/_psycopg.so: undefined symbol: PyUnicodeUCS4_AsUTF8String
I presume that this has something to do with the way in which the wheels were built - I ran pip wheel on the container's host machine (Ubuntu 12.04).
How can I fix this - using wheels significantly reduces the time taken to build the container image, so I don't want to revert to installing packages if I can help it?
I don't know what a wheel or a docker is, but your error comes from a mismatch between the Python used to build the module and the one that is trying to run it (here, one is a narrow UCS2 Unicode build and the other a wide UCS4 build).
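One way to confirm such a mismatch on Python 2 is to compare the Unicode build of the host Python (where the wheels were built) with the container's Python; a narrow (UCS2) build prints 65535 and a wide (UCS4) build prints 1114111:
$ python -c "import sys; print(sys.maxunicode)"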
In my experience, psycopg2 can be rather finicky to install/build from source, so I am not surprised that it doesn't package cleanly into a wheel. However, could you simply wheel everything apart from psycopg2? That would still save you a heap of time.
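A rough sketch of that approach (requirements-nopg.txt is just an illustrative name; building psycopg2 inside the container may also need build dependencies such as gcc and libpq-dev in the image):
# on the host: build wheels for everything except psycopg2
$ grep -v psycopg2 requirements.txt > requirements-nopg.txt
$ pip wheel --wheel-dir wheelhouse -r requirements-nopg.txt
# then, in the Dockerfile, let psycopg2 build against the container's own Python:
# RUN pip install psycopg2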