An error happens while installing tensorflowjs. Installing other packages works fine; only this package ends in failure.
I tried pip install --user pyspider, which failed.
I upgraded pip, but it still failed.
I installed the tf_nightly module first and ran 'pip install tensorflowjs', but it still failed.
C:\Users\Jingyi>pip install tensorflowjs
Collecting tensorflowjs
Using cached https://files.pythonhosted.org/packages/79/29/35e1aa467436ff46b98df65a08c49faaedb3429e1c512d1d90fe308040a0/tensorflowjs-1.0.1-py3-none-any.whl
Collecting numpy==1.15.1 (from tensorflowjs)
Using cached https://files.pythonhosted.org/packages/fb/7d/f8b97d97809f184d90faf320fa8e2e7eac994844c5e6c57adbed1283e9e9/numpy-1.15.1-cp36-none-win_amd64.whl
Collecting six==1.11.0 (from tensorflowjs)
Using cached https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Collecting tf-nightly-2.0-preview>=2.0.0.dev20190304 (from tensorflowjs)
Using cached https://files.pythonhosted.org/packages/4c/13/8fa7c91176d299759487d90ab201256941b43a48ecbf033a2a726f4dafce/tf_nightly_2.0_preview-2.0.0.dev20190509-cp36-cp36m-win_amd64.whl
ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: 'C:\\Users\\Jingyi\\AppData\\Local\\Temp\\pip-install-4ytziwpr\\tf-nightly-2.0-preview\\tf_nightly_2.0_preview-2.0.0.dev20190509.data/purelib/tensorflow/include/tensorflow/include/external/eigen_archive/unsupported/Eigen/CXX11/src/Tensor/TensorSyclConvertToDeviceExpression.h'
I expected tensorflowjs to install successfully.
This is caused by the Windows path length limitation. You can try the following:
Hit the Windows key, type gpedit.msc and press Enter.
Navigate to Local Computer Policy > Computer Configuration > Administrative Templates > System > Filesystem.
Double click the Enable Win32 long paths option and enable it.
Then restart the computer and it should work.
Original answer here
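As an alternative to the Group Policy steps above (the Group Policy editor is not available on all Windows editions), the same setting can be enabled through the registry. This is a sketch using the standard LongPathsEnabled value; run it from an elevated command prompt and restart afterwards:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f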
You can create a virtualenv and install tensorflow within it.
First create a virtualenv:
python3 -m venv envname
Next, activate it:
source envname/bin/activate
Then install the package:
pip install pyspider
When you use a virtualenv, you don't need the --user flag.
Using a virtualenv is generally a better approach when working locally on Python projects.
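Note that the activation command above is for Linux/macOS. Since the original question was on Windows, the equivalent steps (assuming a standard CPython 3 install on PATH) would look roughly like:
python -m venv envname
envname\Scripts\activate
pip install pyspider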
When I run:
pip install cairocffi==0.9.0
I get:
...
Download error on https://pypi.org/simple/: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852) -- Some packages may not be found!
...
The error occurs because some SSL certificates are missing. To solve this, I ran:
pip install certifi==2017.4.17
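If installing certifi alone doesn't resolve it, you can also point pip at certifi's CA bundle explicitly. This is just a sketch: certifi.where() prints the path of its bundled cacert.pem, which can then be passed to pip's --cert option:
python -c "import certifi; print(certifi.where())"
pip install --cert <path printed above> cairocffi==0.9.0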
My Python version is 3.6.3 and my pip version is 9.0.1, on Windows. I am running on a proxied network and have already set up pip.ini inside the pip folder in the Python installation directory.
Below is the content of my file:
[global]
proxy = http://username:password#1#proxy:port
trusted-host = pypi.python.org
pypi.org
files.pythonhosted.org
I am getting a 403 Forbidden error for all pip install commands; even "pip install --upgrade pip" gives the same error. Please suggest a solution.
Any help will be appreciated!
Pip always fails SSL verification, even when I do pip install dedupe or pip install --trusted-host pypi.python.org dedupe.
The output is always the same no matter what:
Collecting dedupe
Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)'),)': /simple/dedupe/
Retrying...
skipping
Could not find a version that satisfies the requirement dedupe (from versions: )
No matching distribution found for dedupe
So I uninstalled anaconda and reinstalled it. Same thing.
Do you think the problem is that my _ssl.c file (which I have no idea where it is) must be corrupt or something? Why would pip need to reference that if I'm telling it to bypass ssl verification anyway?
It may be related to the 2018 change of PyPI domains.
Please ensure your firewall/proxy allows access to/from:
pypi.org
files.pythonhosted.org
So you could give something like this a try:
$ python -m pip install --trusted-host files.pythonhosted.org --trusted-host pypi.org --trusted-host pypi.python.org [--proxy ...] [--user] <packagename>
Please see $ pip help install for the --user option description (omit if in a virtualenv).
The --trusted-host option doesn't actually bypass SSL/TLS; it allows you to mark a host as trusted when (and only when) it does not have valid (or any) HTTPS. It shouldn't really matter with PyPI, because pypi.org (formerly pypi.python.org) does use HTTPS, and there is a CDN in front of it which always enforces a TLSv1.2 handshake requirement regardless of the connecting pip client's options. But if you had your own local mirror of pypi.org with HTTP-only access, then --trusted-host could be handy. Oh, and if you are behind a proxy, please also make sure to specify: --proxy [user:passwd@]proxyserver:port
Some corporate proxies may even go as far as to replace the certificates of HTTPS connections on the fly. And if your system clock is out of sync, it could break the SSL verification process as well.
If the firewall / proxy / clock isn't the problem, then check the SSL certificates being used in pip's SSL handshake. In fact, you could just get a current cacert.pem (Mozilla's CA bundle, as distributed by curl) and try it using pip's --cert option:
$ pip --cert ~/cacert.pem install --user <packagename>
where the --cert argument is the system path to your alternate CA bundle in PEM format (regarding the --user option, see below).
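For reference, a current copy of Mozilla's CA bundle is published by the curl project; one way to fetch it (URL as published by curl, so verify it before relying on it) is:
$ curl -o ~/cacert.pem https://curl.se/ca/cacert.pem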
Or, it's possible to create a custom config ~/.pip/pip.conf and point the option at a valid system cert (or your cacert.pem) as a workaround, for example:
[global]
cert = /etc/pki/tls/external-roots/ca_bundle.pem
(or another pem file)
It's even possible to manually replace the original cacert.pem found in pip with your trusted CA bundle (if your pip is very old, for example). Older pip versions knew to fall back between pip/_vendor/requests/cacert.pem and system stores like /etc/ssl/certs/ca-certificates.crt or /etc/pki/tls/certs/ca-bundle.crt in case of certificate issues, but in recent pip this is no longer the case, as it seems to rely solely on pip/_vendor/certifi/cacert.pem.
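To see where that vendored bundle actually lives on your system, you can query it directly. This relies on pip's internal pip._vendor layout, so treat it as a diagnostic sketch rather than a stable interface:
$ python -c "from pip._vendor import certifi; print(certifi.where())"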
Basically, the pip package uses requests, which uses urllib3, which, among other things, verifies SSL certificates; all of them are shipped (vendored) within pip, along with the certifi package (also vendored, since pip 9.0.2) that provides the current CA bundle (cacert.pem) required for TLS verification. Requests itself uses urllib3 and certifi internally, and before 9.0.2, pip used the cacert.pem from requests or from the system. What this all means is that actually updating pip may help fix the CERTIFICATE_VERIFY_FAILED error, particularly if the OS and pip were deployed long ago:
The OP used anaconda, so they could try:
$ conda update pip - because issues can arise if conda and pip are both used together in the same environment. If there's no pip version update available, they could try:
$ conda config --add channels conda-forge; conda update pip
Alternatively, it's possible to use conda alone to directly install / manage Python packages: it is a tool completely separate from pip, but it provides similar features in terms of package and venv management. Its packages come not from PyPI, but from anaconda's own repositories.
The problem is, if you mix both and run conda after pip, the former can overwrite and break packages (and their dependencies) installed via pip, and render it all unusable. So it's recommended to only use one or the other, or, if you have to, use only pip after conda (and no conda after pip), and only in isolated conda environments.
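For example, a conda-only workflow that avoids mixing the two package managers could look roughly like this (the environment and package names are placeholders; older conda versions use source activate instead of conda activate):
$ conda create -n myenv python=3.6
$ conda activate myenv
$ conda install <packagename>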
On normal Linux Python installations without conda:
If you are using a version of pip supplied by your OS distribution, then use vendor-supplied upgrades for a system-wide pip update:
$ sudo apt-get install python-pip
or:
$ sudo yum install python27-pip
Some updates may not be readily available because distros usually lag behind PyPI. In this case, it's possible to upgrade pip at your user level (right in your $HOME dir), or inside a virtualenv, like:
$ python -m pip install --user --trusted-host files.pythonhosted.org --trusted-host pypi.org --trusted-host pypi.python.org --upgrade pip
(omit --user if in a virtualenv)
The --user switch will upgrade pip only for the current user (in your home ~/.local/lib/) rather than for the whole OS, which is a good practice to avoid interfering with the system python packages. It's enabled by default in a pip distributed in recent Ubuntu/Fedora versions. Be aware of how to solve ImportError if you don't use this option and happen to overwrite the OS-level system pip.
Alternatively (also at a user level) you could try:
$ curl -LO https://bootstrap.pypa.io/get-pip.py && python get-pip.py --user
The PyPA script contains a wrapper that extracts the .pem SSL bundle from pip._vendor.certifi.
Otherwise, if it's still a no-go, try running pip with the -vvv option to add verbosity to the output, and check whether there is now another SSLError caused by tlsv1 alert protocol version.
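For example (the package name is a placeholder; the extra v's only increase log verbosity):
$ pip install -vvv <packagename>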
This worked for me, try this:
pip install --trusted-host=pypi.org --trusted-host=files.pythonhosted.org --user {name of whatever I'm installing}
My way is a simplification of @Alex C's answer:
python -m pip install --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --trusted-host pypi.org --upgrade pip
I experienced the same issue because I have Zscaler (cloud security software) installed, which was causing:
the URL hosts for Python packages to be blocked
invalid SSL certificate warnings to pop up
the SSL inspection certificate to not be trusted
As mentioned by others, the command below will fix individual package installations; pypi.python.org is not required since it has been replaced by pypi.org.
pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org <package to install>
I permanently fixed the issue by creating a pip.ini file (pip.conf on Unix) and adding the following:
[global]
trusted-host = pypi.python.org
pypi.org
files.pythonhosted.org
See pip configuration files for how to locate your pip.ini, or where to put it if you need to create one.
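For reference, the usual per-user locations are roughly as follows (defaults vary by pip version and OS, so double-check against the pip documentation):
Windows: %APPDATA%\pip\pip.ini
Linux: ~/.config/pip/pip.conf (older pip also reads ~/.pip/pip.conf)
macOS: ~/Library/Application Support/pip/pip.conf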
The error above, or one like it, was caused by the virtual machine (VM) not being time-synchronized; my guest Ubuntu VM was several days in the past.
I ran this command to get the VM to pick up the correct network time:
sudo timedatectl set-ntp on
This makes the Ubuntu guest OS get the network time. (You may have to provide a network time source; I used this article: Digital Ocean - How to set time on Ubuntu.)
Check that the time is correct:
timedatectl
Re-run the failing pip command.
I'm trying to connect to an EC2 instance using fabric (in python). I've set my env variables as so:
env.hosts = ['xxx-xxx.amazonaws.com']
env.user = "ubuntu"
env.key_filename = ['/path/to/my/ec2.pem']
The command
run('pwd')
gives the following error:
File "build/bdist.linux-x86_64/egg/paramiko/client.py", line 242, in connect
File "build/bdist.linux-x86_64/egg/paramiko/transport.py", line 346, in start_client
ValueError: CTR mode needs counter parameter, not IV
I'm using paramiko 1.14.0 (current), by the way, and editing my SSH config to associate the pem with the host is not an option (although I have tested the connection with ssh -i /path/to/pem and that was fine). Has anyone else had this problem and solved it?
I had the same error running a Python/Paramiko script on a new Ubuntu host. I wasn't able to determine the cause of the fault, as I am new to Python, but I resolved it by removing paramiko and its dependencies from /usr/local/lib/python2.7/dist-packages. I removed paramiko, pycrypto and ecdsa.
My system already has the following packages:
sudo apt-get install python-pip
sudo apt-get install python-dev
I re-installed paramiko with:
sudo pip install paramiko
I was able to run my script successfully without the ValueError.
Versions of modules I am running:
ecdsa 0.11
paramiko 1.14.0
pycrypto 2.6.1
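If you prefer not to delete files from dist-packages by hand, a roughly equivalent approach (assuming the packages were originally installed with pip) would be:
sudo pip uninstall paramiko pycrypto ecdsa
sudo pip install paramiko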