Tensorflow installation killed on AWS ec2 instance - amazon-ec2

I'm trying to use an AWS EC2 instance to test my ML project, but the TensorFlow package installation gets killed every time.
I'm using a free-trial EC2 t2.micro instance for my testing purposes.
Type: t2.micro
vCPUs: 1
Memory: 1GB
OS: Ubuntu Server 20.04 LTS (HVM), SSD Volume Type
Are there any solutions for this?

I had the exact same problem, and I finally fixed it by doing the following:
Use a free-tier instance, but with 25 GiB of hard disk storage (currently, the free tier covers up to 30 GiB at no extra cost).
Install TensorFlow by running pip install tensorflow-cpu instead of just pip install tensorflow.
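Before retrying the install, it can help to confirm the instance actually has that much headroom. A minimal sketch using only the Python standard library; the 25 GiB threshold simply mirrors the suggestion above and is not a documented TensorFlow requirement:

```python
import shutil

def has_enough_disk(path="/", min_free_gib=25):
    """Return True if the filesystem containing `path` has at least
    `min_free_gib` GiB free, e.g. before attempting a large pip install."""
    free_gib = shutil.disk_usage(path).free / (1024 ** 3)
    return free_gib >= min_free_gib

print(has_enough_disk("/"))
```

If this prints False, grow the EBS volume before running pip again.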

It is difficult to state an exact memory requirement for installing TensorFlow and its dependencies, but this instance size is most likely too small. You could verify that it works with a larger instance, and/or try this:

It can be fixed with the following command:
pip install tensorflow-cpu
instead of just:
pip install tensorflow
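A "Killed" message during pip install typically means the Linux out-of-memory killer ended the process. A quick way to check total RAM from Python, a minimal sketch assuming a Unix-like system (os.sysconf is not available on Windows):

```python
import os

def total_ram_gib():
    """Total physical memory in GiB, via POSIX sysconf (Linux/macOS)."""
    page_size = os.sysconf("SC_PAGE_SIZE")   # bytes per memory page
    num_pages = os.sysconf("SC_PHYS_PAGES")  # number of physical pages
    return page_size * num_pages / (1024 ** 3)

print(round(total_ram_gib(), 2))
```

On a t2.micro this reports roughly 1 GiB, which is tight for building the full tensorflow package; tensorflow-cpu is lighter to install.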

Use the --no-cache-dir argument with the pip install command to resolve the TensorFlow installation getting killed on AWS EC2 instances.
pip3 install tensorflow --no-cache-dir

It turned out that the issue was that I did not have enough space on my EC2 instance. Running this command:
sudo pip install --cache-dir=/data/jimit/ --build /data/jimit/ tensorflow TMPDIR=/data/jimit/
outputs the error below:
ERROR: Could not install packages due to an EnvironmentError: [Errno 28] No space left on device
Found it here. Credits to #bigmac.

I had the same issue and
pip install tensorflow-cpu
was also getting killed.
I used
pip3 install tensorflow-cpu --no-cache-dir
which worked.

Related

How to reinstall libjudy on AWS EC2 Amazon Linux instance

I am trying to install a memory profiler (https://github.com/arnaud-lb/php-memory-profiler) on my EC2 instance (LAMP stack running PHP 7.2), as we are running into memory allocation errors. During the installation, I get the following error:
checking for judy lib... yes, shared
checking for judy files in default path... not found
configure: error: Please reinstall the judy distribution
ERROR: `/tmp/pear/install/memprof/configure --with-php-config=/usr/bin/php-config' failed
For the life of me, I cannot figure out how to reinstall the judy library. I've tried both pecl (No releases available for package "pecl.php.net/libjudy") and yum (No package libjudy available.)
I've searched for ways to install it and have come up empty.
Anyone have any advice?
Thanks in advance.
P.S. I have also asked this question of the memory profiler's developer.
Try the following repository.
Download the latest rpmforge-release rpm:
wget https://ftp.tu-chemnitz.de/pub/linux/dag/redhat/el7/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
Install rpmforge-release rpm:
# rpm -Uvh rpmforge-release*rpm
Install judy rpm package:
# yum install judy judy-devel

Where do I get a CPU-only version of PyTorch?

I'm trying to get a basic app running with Flask + PyTorch, and host it on Heroku. However, I run into the issue that the maximum slug size is 500 MB on the free version, and PyTorch itself is ~500 MB.
After some Google searching, someone wrote about finding a CPU-only version of PyTorch, which is much smaller, and using that.
However, I'm pretty lost as to how this is done, and the person didn't document this at all. Any advice is appreciated, thanks.
EDIT:
To be more specific about my problem: I tried installing torch by (as far as I understand) including a requirements.txt that lists torch as a dependency. Currently I have torch==0.4.1. However, this doesn't work because of the size.
My question is: do you know what I could write in the requirements file to get the smaller CPU-only version of torch, or alternatively, if requirements.txt doesn't work for this, what I should do instead to get the CPU version?
Per the PyTorch website, you can install pytorch-cpu with:
conda install pytorch-cpu torchvision-cpu -c pytorch
You can see from the files on Anaconda Cloud that the size varies between 26 and 56 MB depending on the OS where you want to install it.
You can get the wheel from http://download.pytorch.org/whl/cpu/.
The wheel is 87MB.
You can set up the installation by putting the link to the wheel in the requirements.txt file. If you use Python 3.6 on Heroku:
http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl
otherwise, for Python 2.7:
http://download.pytorch.org/whl/cpu/torch-0.4.1-cp27-cp27mu-linux_x86_64.whl
For example, if your requirements are pytorch-cpu, numpy and scipy, and you're using Python 3.6, the requirements.txt would look like:
http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl
numpy
scipy
As of PyTorch 1.3, PyTorch has changed its API. To install the CPU-only version, use:
conda install pytorch torchvision cpuonly -c pytorch
The corresponding wheel files can be downloaded from https://download.pytorch.org/whl/torch_stable.html and installed using pip, or use a command similar to the following for your intended pytorch and torchvision versions.
On Linux:
pip3 install torch==1.9.0+cpu torchvision==0.10.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
On Windows / Mac:
pip3 install torch torchvision
Check the PyTorch's getting started guide.
In 2020, use the following command if you want to download the pytorch-cpu version with pip3 (on Linux and Windows):
pip3 install torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
I was getting errors for each version from the list of stable torch versions, like:
{specific_version} is not a supported wheel on this platform
Try putting this into your requirements.txt:
// requirements.txt
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.8.1+cpu
torchvision==0.9.1+cpu
fastai>=2.3.1
ipywidgets
voila
If you want to install the stable pytorch 1.4.0 CPU version using requirements.txt, specify the direct download HTTP link so that pip will download and install it directly:
http://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp37-cp37m-linux_x86_64.whl
Alternatively, if using a terminal or cmd:
torch==1.4.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
For more versions, visit https://download.pytorch.org/whl/torch_stable.html and choose the one that matches your requirements (Windows, Linux and Mac versions can all be found at that link).
The problem is the size of the libraries. When you use the application locally you can use GPU resources, but since you will not use the GPU on the server, use the following in requirements.txt:
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.11.0+cpu
--find-links https://download.pytorch.org/whl/torch_stable.html
torchvision==0.12.0+cpu
You can use pip to download the latest CPU-only pytorch wheel directly from the pytorch.org website:
pip install torch --extra-index-url https://download.pytorch.org/whl/cpu
Coming to this question after running into the same issue with Heroku's App Platform: a slug size well over the 500 MB limit. The current instruction from the official PyTorch "Getting Started" page is as follows:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
That's for a Linux install, using the latest (1.13.1) stable version, in Python, with pip. Note: the syntax varies based on the system, package manager, language, and preferred build. (See below)
The correct answer is to search for it on the PyTorch website: https://pytorch.org/get-started/previous-versions/
They have a complete list of all of the previous versions, as well as the pip syntax to use for each one (including CPU-only versions).

'pip install duckling' giving issue

I want to install duckling to work on Rasa NLU training, but it is giving the issue shown in the attached image. Please help me fix this.
As a suitable alternative, would you consider running Rasa NLU in Docker? The rasa/rasa_nlu:latest-full Docker image includes duckling. If you want to get started with it and you have Docker installed, you should be able to just run:
docker run rasa/rasa_nlu:latest-full
Then you can interact with Rasa over its HTTP API: https://github.com/RasaHQ/rasa_nlu#example-use
The Docker setup also allows more complicated configurations, like overwriting the default config file and saving the logs/models/etc. to a persistent disk:
https://github.com/RasaHQ/rasa_nlu#advanced-docker
I had the same issue! This is basically an issue with the jpype module.
You need to first install jpype in your environment:
conda install -c conda-forge jpype1
For installing jpype you can also refer to the official link for jpype installation.
On successful installation of jpype, duckling can be installed using:
pip install duckling -U
I hope this would help!
You should download VC+ compiler for windows and then try the pip install.
You can download it here
https://download.microsoft.com/download/7/9/6/796EF2E4-801B-4FC4-AB28-B59FBF6D907B/VCForPython27.msi

Pip stuck on "Running command python setup.py egg_info" - no errors.

I run Vagrant on Windows 10 with VirtualBox and the xenial64 Ubuntu box, to load TaigaIO via the manual setup.
At the pip install -vvv -r requirements-devel.txt step, pip hangs forever when it tries to install django-sampledatahelper.
When I try to install just this package, it shows the same effect: no errors, no return to bash, just hanging on:
Downloading from URL https://pypi.python.org/packages/2b/fe/e8ef20ee17dcd5d4df96c36dcbcaca7a79d6a2f8dc319f4e25107e000859/django-sampledatahelper-0.4.1.tar.gz#md5=a750d769af76d3f6e5791cfeb78832b0 (from https://pypi.python.org/simple/django-sampledatahelper/)
Running setup.py (path:/tmp/pip-build-pZcRoU/django-sampledatahelper/setup.py) egg_info for package django-sampledatahelper
Running command python setup.py egg_info
I tried a fresh VM install, with and without virtualenv, pip mirrors, removing the cache and the --no-cache option, the xenial64 and bento/ubuntu-16.04 distros, with vagrant ssh and with PuTTY. The effect is the same.
I had the same issue and ran the command with -vvv. It seemed that pip had stopped, but I waited for a couple of minutes and the package installed successfully.
It seems that there is something wrong with the Ubuntu xenial64 distribution AND the manual setup instructions. When I use bento/ubuntu-16.04 and setup-server.sh from taiga-scripts, the installation finishes correctly.
It was still downloading the package, just very slowly. The inner pip process that runs the egg_info step uses neither the '-i' nor the '--proxy' option you passed to the outer pip to accelerate the installation.
You can use a global proxy (tun/tap or VPN) or modify the pip configuration to force the inner setup to download the package in an accelerated way.
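One way to make proxy settings global to pip itself is its configuration file (~/.pip/pip.conf on older pip, ~/.config/pip/pip.conf on newer pip). A minimal sketch; the proxy address below is a placeholder for whatever tun/tap or VPN endpoint you actually use:

```ini
# pip.conf -- placeholder values, adjust to your environment
[global]
proxy = http://127.0.0.1:8118
index-url = https://pypi.python.org/simple/
timeout = 60
```

Note that this only affects pip's own downloads; the nested setup.py step may additionally require the standard http_proxy/https_proxy environment variables to be exported.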

TensorFlow dependencies needed. How to run TensorFlow on Windows

I am interested in getting TensorFlow to run on Windows; however, at present I realize that this is not possible because some of the dependencies, e.g. Bazel, are not usable on Windows.
The need arises because, as I currently understand it, the only way to access the GPU from TensorFlow is via a non-virtual install of Linux. I do realize I can dual boot into a Linux install, but would prefer to avoid that route.
To resolve the problem I need the entire dependency chain required to build TensorFlow, and I was wondering if this already existed.
I also realize that I can capture the build output when building from source as a solid start, but would like to avoid that work if it is already known.
There is a beta of Bazel that runs on Windows - https://github.com/dslomov/bazel-windows
See related GitHub Issue to run TensorFlow on Windows. - https://github.com/tensorflow/tensorflow/issues/17
Another reason to run on Windows is the possibility to port to Xbox One.
I found a possible answer, but still need to check it. This will generate a dependency graph as a dot file:
$ bazel query 'deps(//tensorflow/tools/pip_package:build_pip_package)' --output graph > tensorflow.dependency.dot
There are now three main options for building and/or running TensorFlow on Windows:
You can install a GPU-enabled PIP package of TensorFlow 0.12rc0 from PyPI: pip install tensorflow-gpu
You can build the GPU-enabled PIP package yourself using the experimental CMake build. This also gives you the ability to work with TensorFlow in Visual Studio. The documentation for this build can be found here.
There is preliminary support for building TensorFlow using Bazel for Windows. However, we are still ironing out some bugs with this build.
This may not be exactly what you want, but one way to run TensorFlow under Windows is to install a virtual machine (VMware Player v12 is free for non-commercial use), then install Ubuntu in it, and finally TensorFlow in Ubuntu. Works well for me.
Since the beginning of 2017, TensorFlow has been officially supported on Windows and can be installed via pip:
pip install --upgrade tensorflow
pip install --upgrade tensorflow-gpu
or by fetching packages directly (pick the one that matches your needs, e.g. x64/GPU):
# x86 / CPU
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0-cp35-cp35m-win_x86_64.whl
# x64 / CPU
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.0.0-cp35-cp35m-win_amd64.whl
# x64 / GPU
pip install --upgrade https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-1.0.0-cp35-cp35m-win_amd64.whl
