I deployed a DAG in Airflow (on GCP), but I receive the error "No module named 'scipy'".
How do I install packages in Airflow?
I've tried adding a separate DAG to run:
import logging
import subprocess
import sys

def pip_install(package):
    subprocess.call([sys.executable, "-m", "pip", "install", package])

def update_packages(**kwargs):
    logging.info(list(sys.modules.keys()))
    # PACKAGES is the list of package names defined elsewhere in the DAG file
    for package in PACKAGES:
        pip_install(package)
I've tried running pip3 install scipy in the GCP shell;
I've tried adding pip install scipy to the image builder.
None of these approaches worked.
If you are using Cloud Composer on GCP, you should check https://cloud.google.com/composer/docs/how-to/using/installing-python-dependencies
Pass a requirements.txt file to the gcloud command-line tool. Format the file with each requirement specifier on a separate line.
Sample requirements.txt file:
scipy>=0.13.3
scikit-learn
nltk[machine_learning]
Pass the requirements.txt file to the gcloud command to set your installation dependencies.
gcloud composer environments update ENVIRONMENT-NAME \
    --update-pypi-packages-from-file requirements.txt \
    --location LOCATION
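Once the update finishes, one way to confirm the package is importable on the workers is a small test DAG. This is a minimal sketch, assuming the Airflow 1.x import paths used by Composer environments; the DAG id, start date, and task id are placeholders:

import logging
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def check_scipy():
    # Raises ImportError if scipy is still missing from the environment
    import scipy
    logging.info("scipy version: %s", scipy.__version__)

with DAG("verify_scipy",
         start_date=datetime(2019, 1, 1),
         schedule_interval=None) as dag:
    PythonOperator(task_id="check_scipy", python_callable=check_scipy)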
Related
I am trying to run the commands below, but they are not working on my Windows machine.
C:\Users\XXX\Desktop\python-7>pip install chalice -t .
ERROR: Can not combine '--user' and '--target'
C:\Users\XXX\Desktop\python-7>pip install --user --install-option="--prefix=" chalice -t .
ERROR: Can not combine '--user' and '--target'
Can someone please let me know if there is any alternative to get the module in the same directory?
UPDATE
C:\Users\XXX\Desktop\python-7>pip install --target=C:\Users\XXX\Desktop\python-7 chalice
ERROR: Can not combine '--user' and '--target'
Add --no-user at the end of the command and it should work.
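For the command from the question, that would be (same target directory as before):
pip install chalice -t . --no-user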
This happens with the Python version distributed through the Windows App Store.
I believe it's because that version installs the basic executables under C:\Program Files\WindowsApps, a secured location that users cannot modify. pip is therefore aliased to run as something like pip --user, targeting C:\Users\<your_user>\AppData\Local\Packages\PythonSoftwareFoundation.Python.<version>_<id>\LocalCache\local-packages\Python<version>, so that users install packages to a writeable folder.
Hence, as soon as you try to add --target, pip breaks.
You can simply use:
C:\Users\XXX\Desktop\python-7>pip install chalice
I have pip-installed many packages using Windows PowerShell from my Python 3.7 window, but I haven't for a few months, and now I am getting an error instead of an install.
I have tried installing two packages (pandas and numpy) and get the same results for both.
I tried switching pip and pandas, as well as pip and the file name (including the extension), and received no favorable results. When I type in the name of the module, it returns that there is no module with that name; when I type in the full file name for the module, it tells me that numpy-1 does not exist.
As you will see in the next section, the problem seems to be that the pypi.org format for pip installing changed when I wasn't paying attention.
My code (which has worked in the past) looks like this:
py -3.7 -m pip install numpy-1.16.2-cp37-cp37m-win_amd64.whl
The error looks like this:
PS C:\Users\Hezekiah\AppData\Local\Programs\Python\Python37> py -3.7 -m pip install numpy-1.16.2-cp37-cp37m-win_amd64.whl
C:\Users\Hezekiah\AppData\Local\Programs\Python\Python37\python.exe: No module named pip
I expect my pip install command to install numpy; instead it tells me that pip is not a module.
Follow these steps:
1. Open cmd.
2. cd to the full path of the Scripts folder, e.g. C:\Python37-32\Scripts.
3. Then try pip commands, e.g. pip install pandas:
C:\Python37-32\Scripts>pip install pandas
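If the interpreter itself reports "No module named pip" (as in the error above), pip may be missing from that installation; it can usually be restored with the standard-library ensurepip module before retrying:
py -3.7 -m ensurepip --upgrade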
I am using the linux-datascience-svm VM image provided on Azure in my Batch GPU pool. At first I tried to pip install some libraries like so:
pip install --upgrade pip;
pip install docopt;
pip install pubnub;
pip install azure;
pip install glob2;
pip install theano>=0.8.2
pip freeze;
However, when my application tries to import theano, it raises a "module not found" error for theano.
I tried leveraging Anaconda, so I activated the base environment in the pool start task and then ran the following task command line:
/bin/bash -c "set -e;
source activate base;wait"
however I get the following error:
/bin/bash: line 1: activate: No such file or directory
I tried putting the conda environment activation statement in a bash script and running it, but I get this error:
./run.sh: line 3: source: activate: file not found
How can I access my installed libraries like theano after they've been installed on the pool in conda or in the general environment?
Try replacing activate with the absolute path to the activate script in conda. It would look something like
source /data/username/miniconda2/bin/activate base
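Applied to the task command line from the question, it would look something like this (the miniconda path is only an example; it depends on where conda is installed on the VM):
/bin/bash -c "set -e; source /data/username/miniconda2/bin/activate base; wait"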
I am using the paramiko module in my Python code, which is an AWS Lambda function.
I followed the same Python package deployment procedure described at http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python
I got a strange error after running the deployment package.
I see you're following AWS documentation, but I'm not sure exactly how you are creating the deployment package, so I'll try to illustrate with an example.
My Python code (3.5):
/paramiko
    /paramiko
    my_function.py
    requirements.txt
Where requirements.txt contains:
paramiko==2.3.1
my_function.py contains:
import paramiko
print(paramiko.__version__)
Creating the virtual environment.
Create the virtualenv: python3 -m venv /path/to/your/venv.
Navigate to the venv root, and activate it: source bin/activate.
Install dependencies: pip install -r requirements.txt
Execute the following shell commands from the root of your venv:
cd lib/python3.5/site-packages/
zip -r9 ~/my_deployment_package.zip *
cd /path/to/your/project/root
zip -g ~/my_deployment_package.zip *
You should now have a deployment package, ~/my_deployment_package.zip, that contains all the dependencies for your project.
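One more thing worth double-checking: Lambda invokes a handler function, and the handler setting on the function (e.g. my_function.lambda_handler) has to match it. A minimal sketch of my_function.py with an explicit handler, assuming the conventional lambda_handler name:

import paramiko

def lambda_handler(event, context):
    # Logs the bundled paramiko version to confirm the dependency was packaged
    print(paramiko.__version__)
    return {"paramiko_version": paramiko.__version__}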
Using the Serverless Framework v1.0.0, I have a requirements.txt in my service root whose contents are the list of dependent Python packages (e.g. requests).
However, my resulting deployed function fails, as it seems these dependencies are not installed as part of the packaging:
'Unable to import module 'handler': No module named requests'
I assume it is Serverless that does the pip install, but my resulting zip file is small, and clearly it's not doing it, either by design or through my own fault because I am missing something. Is it because it's Lambda that does this? If so, what am I missing?
Is there documentation on what is required to do this and how it works? Is it Serverless that pip installs these, or is it on the AWS Lambda side?
You need to install serverless-python-requirements and Docker:
$ npm install serverless-python-requirements
Then add the following to your serverless.yml
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    dockerizePip: non-linux
Make sure you have your python virtual environment active in CLI:
$ source venv/bin/activate
Install any dependencies with pip. Note that in the CLI you can tell the venv is active by the (venv) prefix to the left of the terminal text:
(venv) $ pip install <NAME>
(venv) $ pip freeze > requirements.txt
Make sure Docker is running, then deploy serverless as normal:
$ serverless deploy
What will happen is that serverless-python-requirements will build your Python packages in Docker using a Lambda environment, and then zip them up ready to be uploaded with the rest of your code.
Full guide here
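Once the plugin is in place, the handler from the original error should be able to import requests normally. A minimal sketch, assuming the function is wired up as handler.hello in serverless.yml and using a placeholder URL:

# handler.py
import requests

def hello(event, context):
    # requests is available because serverless-python-requirements bundled it
    response = requests.get("https://httpbin.org/get")
    return {"statusCode": response.status_code}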
Now you can use serverless-python-requirements. It works both for pure Python and libraries needing native compilation (using Docker):
A Serverless v1.x plugin to automatically bundle dependencies from requirements.txt and make them available in your PYTHONPATH.
Requires Serverless >= v1.12
The Serverless Framework doesn't handle the pip install. See https://stackoverflow.com/a/39791686/1111215 for the solution.