I am on Windows 11, using Ubuntu 20.04.5 under WSL. I recently updated Windows, which may or may not be related to this problem. Specifically, I am suspicious of the update labelled
Windows Subsystem for Linux WSLg Preview - 1.0.27
which I installed 3 days ago.
This is my own personal computer.
Pip: pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
This is my first question, and I'm mostly the type of person who wants to use pip and Python without worrying about the plumbing. So if I forgot important background or version info, just tell me how to get it and I will add it.
Anytime I try to install a package with pip, I get:
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=120)")': /simple/pytorch/
Including if I use
--default-timeout=1000
Then it just waits 1000 seconds of my time instead of 15. My internet speed seems fine for everything else.
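For the record, the full command looks something like this:
pip install --default-timeout=1000 pytorch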
Here is what I have tried so far, based on everything I could find after googling the error (none of it changed anything):
Uninstalling and reinstalling pip
Disabling IPv6
Disabling, and then completely uninstalling, the VPN I had
Restarting computer
Restarting Router (including hard reset)
Restarting Modem
Disabling the Windows firewall (temporarily, just to see if it would fix it; re-enabled afterwards)
Clearing the display variable (unset DISPLAY)
Including --no-cache-dir in the pip install command
Updating things
sudo apt update && sudo apt upgrade
If I try
sudo pip install pytorch -vvv
Then I get
Non-user install because site-packages writeable
Created temporary directory: /tmp/pip-ephem-wheel-cache-b2g02nuo
Created temporary directory: /tmp/pip-req-tracker-5hiikgit
Initialized build tracking at /tmp/pip-req-tracker-5hiikgit
Created build tracker: /tmp/pip-req-tracker-5hiikgit
Entered build tracker: /tmp/pip-req-tracker-5hiikgit
Created temporary directory: /tmp/pip-install-j_2ikmsi
1 location(s) to search for versions of pytorch:
* https://pypi.org/simple/pytorch/
Fetching project page and analyzing links: https://pypi.org/simple/pytorch/
Getting page https://pypi.org/simple/pytorch/
Found index url https://pypi.org/simple
Getting credentials from keyring for https://pypi.org/simple
Getting credentials from keyring for pypi.org
Looking up "https://pypi.org/simple/pytorch/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
It hangs there until hitting the timeout.
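To rule out pip itself, the same request can be made from plain Python (just a sanity check; I would expect it to hang the same way if the problem is lower in the stack):
python3 -c "import urllib.request; urllib.request.urlopen('https://pypi.org/simple/', timeout=30)"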
If I try:
curl https://pypi.python.org/simple/ -v
Then I get
* Trying 151.101.188.223:443...
* TCP_NODELAY set
* Connected to pypi.python.org (151.101.188.223) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* Operation timed out after 300157 milliseconds with 0 out of 0 bytes received
* Closing connection 0
curl: (28) Operation timed out after 300157 milliseconds with 0 out of 0 bytes received
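So TCP connects fine, but the handshake stalls right after the Client hello. I believe the same handshake can also be driven directly with openssl, in case that output is more useful (just an alternative probe, not something I have results for):
echo | openssl s_client -connect pypi.org:443 -servername pypi.org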
If I try
ping pypi.org
Then I get...
PING pypi.org (151.101.128.223) 56(84) bytes of data.
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=1 ttl=55 time=10.5 ms
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=2 ttl=55 time=11.7 ms
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=3 ttl=55 time=116 ms
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=4 ttl=55 time=11.2 ms
64 bytes from 151.101.128.223 (151.101.128.223): icmp_seq=5 ttl=55 time=11.8 ms
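Since small pings get through but the handshake (which involves larger packets) stalls, one more check that might be worth running — purely a guess about fragmentation/MTU on the WSL interface, not something I have confirmed — is a don't-fragment ping at full Ethernet payload size:
ping -M do -s 1472 pypi.org
If that fails while smaller sizes succeed, MTU would be a suspect.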
I've tried
conda install -c conda-forge pyperclip
and get the following error:
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file
a support request with your network engineering team.
SSLError(MaxRetryError('HTTPSConnectionPool(host=\'repo.anaconda.com\', port=443): Max retries exceeded with url: /pkgs/main/noarch/repodata.json.bz2 (Caused by SSLError("Can\'t connect to HTTPS URL because the SSL module is not available."))'))
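A check that might narrow this down — the error suggests the Python that conda runs cannot load its ssl module at all — would be something like the following, run from the same prompt where conda fails (if the import raises, the problem is the Python installation rather than the network):
python -c "import ssl; print(ssl.OPENSSL_VERSION)"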
I've also tried downloading pyperclip-1.7.0.tar.gz, unzipping it (the folder is pyperclip-1.7.0), and copying it to multiple folders (e.g. ...\Continuum\anaconda3\Scripts and ...\Continuum\anaconda3\Lib),
but when I try to import it, I get a "ModuleNotFoundError: No module named 'pyperclip'" message.
pip install pyperclip
worked from the Anaconda Prompt (I had been using CMD).
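Presumably the Anaconda Prompt puts Anaconda's directories on PATH while plain CMD does not; comparing these in both prompts should confirm which interpreter each one picks up (just a sanity check, not a confirmed diagnosis):
where python
where pip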
We're currently running Hortonworks 2.6.5.0:
$ hadoop version
Hadoop 2.7.3.2.6.5.0-292
Subversion git@github.com:hortonworks/hadoop.git -r 3091053c59a62c82d82c9f778c48bde5ef0a89a1
Compiled by jenkins on 2018-05-11T07:53Z
Compiled with protoc 2.5.0
From source with checksum abed71da5bc89062f6f6711179f2058
This command was run using /usr/hdp/2.6.5.0-292/hadoop/hadoop-common-2.7.3.2.6.5.0-292.jar
The OS is CentOS 7:
$ cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
We recently started noticing these issues in the ambari-agent's log file:
$ grep -i "error|warn" /var/log/ambari-agent/*
/var/log/ambari-agent/ambari-agent.log:WARNING 2018-07-30 14:03:50,982 NetUtil.py:124 - Server at https://hbase26-2.mydom.com:8440 is not reachable, sleeping for 10 seconds...
/var/log/ambari-agent/ambari-agent.log:ERROR 2018-07-30 14:04:00,986 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
/var/log/ambari-agent/ambari-agent.log:ERROR 2018-07-30 14:04:00,990 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
/var/log/ambari-agent/ambari-agent.log:WARNING 2018-07-30 14:04:00,990 NetUtil.py:124 - Server at https://hbase26-2.aa.mydom.com:8440 is not reachable, sleeping for 10 seconds...
/var/log/ambari-agent/ambari-agent.log:ERROR 2018-07-30 14:04:10,993 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
/var/log/ambari-agent/ambari-agent.log:ERROR 2018-07-30 14:04:10,994 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
/var/log/ambari-agent/ambari-agent.log:WARNING 2018-07-30 14:04:10,994 NetUtil.py:124 - Server at https://hbase26-2.aa.mydom.com:8440 is not reachable, sleeping for 10 seconds...
/var/log/ambari-agent/ambari-agent.log:ERROR 2018-07-30 14:04:20,996 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:579)
/var/log/ambari-agent/ambari-agent.log:ERROR 2018-07-30 14:04:20,997 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
When these started occurring we could no longer manage any aspects of the Hadoop cluster through Ambari. All the services showed little yellow question marks and said "heartbeat lost".
Multiple restarts did not bring Ambari back or let us regain control of our cluster.
This issue turned out to be a TLS protocol mismatch: the agent was using TLSv1.1 when connecting to the CA service on port 8440, which the server no longer accepts.
We noticed that the service was in fact running:
$ netstat -tapn|grep 8440
tcp 0 0 0.0.0.0:8440 0.0.0.0:* LISTEN 1203/java
But curl requests to it would fail unless we disabled certificate verification via the --insecure switch. This was our first clue that it was something TLS-related.
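In hindsight, a more direct way to confirm the protocol angle — assuming the openssl CLI is available on the agent host — would have been to force each protocol version against the port and compare:
openssl s_client -connect hbase26-2.mydom.com:8440 -tls1_1
openssl s_client -connect hbase26-2.mydom.com:8440 -tls1_2
If the first handshake fails where the second succeeds, that is the mismatch.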
Further investigation led us to NetUtil.py (part of Ambari), which seemed OK. Other leads included:
$ cat /etc/ambari-agent/conf/ambari-agent.ini
...
[security]
ssl_verify_cert = 0
...
And this:
$ grep -E '\[https|verify' /etc/python/cert-verification.cfg
[https]
#verify=platform_default
verify=disable
Neither of these worked. What ultimately did work was forcing ambari-agent to use TLSv1.2 instead of TLSv1.1:
$ grep -E "\[security|force" /etc/ambari-agent/conf/ambari-agent.ini
[security]
force_https_protocol=PROTOCOL_TLSv1_2
And then restarting the agent: ambari-agent restart.
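For anyone scripting this, a minimal sketch — assuming the [security] section already exists in the file and the option is not already set:
sed -i '/^\[security\]/a force_https_protocol=PROTOCOL_TLSv1_2' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent restart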
I was able to piece this all together using wisps of hints scattered all over the Internet. I'm putting this here in the hopes it will help any other poor souls that have this happen to their Hadoop/Hortonworks cluster.
References
Ambari agent- [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed
Java/Python Updates and Ambari Agent TLS Settings
Openssl error upon host registration
Cleaning up Ambari Metrics System Data
Why did this happen?
Further debugging and digging turned up a thread titled Disabling TLSv1 & TLS1.1 - Enabling TLSv1.2. It is apparently now mandatory to configure your Ambari agents to use TLSv1.2.
It all started one day when the command-line programs on my PC stopped working; every one of them fails with an SSL-related error, for example:
CURL: "failed to verify the legitimacy of the server ca bundle"
NPM: "UNABLE_TO_VERIFY_LEAF_SIGNATURE"
MONGODB: MongoError: connection 0 to xxx.mlab.com:xxx timed out
The same code works fine with Mongo on an EC2 instance, but not locally. I am using:
Windows 10
CURL 7.58
NPM 3.10
NODE 6.11.2
MONGODB 2.2.32
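One check I have seen suggested — just a guess on my part, since I have not confirmed the cause — is to look at the issuer of the certificate the machine actually receives; an unexpected issuer would point to something local (proxy, antivirus) intercepting TLS. If openssl is available (e.g. via Git Bash):
openssl s_client -connect www.google.com:443 -servername www.google.com </dev/null | openssl x509 -noout -issuer -subject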
Any ideas?
I am trying to set up the Hadoop environment on a virtual machine, following the instructions in the book Hadoop For Dummies.
One of the steps indicates the following command -
yum install hadoop\* mahout\* oozie\* hbase\* pig\* hive\* zookeeper\* hue\*
When I run that I get the following error -
[root@localhost Desktop]# yum install hadoop\*
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: centos.mirror.crucial.com.au
* extras: centos.mirror.crucial.com.au
* updates: centos.mirror.nsw.au.glovine.com.au
base | 3.7 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
Setting up Install Process
No package hadoop* available.
Error: Nothing to do
For hadoop, zookeeper, and hue I got the error saying the package was not found. I looked at those mirror sites and can see that hadoop is indeed not present. Is there any way to force the mirror to some other location?
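For what it's worth, listing which configured repos could provide these packages at all might confirm whether any mirror carries them (as far as I know the stock CentOS repos don't ship Hadoop):
yum repolist
yum --showduplicates list 'hadoop*'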
Edit -
As pointed out below, I did try fetching the repo with the following command -
wget -O /etc/yum.repos.d/bigtop.repo http://archive.apache.org/dist/bigtop/bigtop-1.0.0/repos/centos6/bigtop.repo
which throws the following Connection Refused error -
[root@localhost Desktop]# wget -O /etc/yum.repos.d/bigtop.repo http://www.apache.org/dist/bigtop/bigtop-1.0.0/repos/centos6/bigtop.repo
--2015-12-30 05:03:09-- http://www.apache.org/dist/bigtop/bigtop-1.0.0/repos/centos6/bigtop.repo
Resolving www.apache.org... 88.198.26.2, 140.211.11.105, 2a01:4f8:130:2192::2
Connecting to www.apache.org|88.198.26.2|:80... failed: Connection refused.
Connecting to www.apache.org|140.211.11.105|:80... failed: Connection refused.
Connecting to www.apache.org|2a01:4f8:130:2192::2|:80... failed: Network is unreachable.
Likewise, I tried the CDH one-click install as pointed out by user1862493, and I get the following error -
[root@localhost Desktop]# wget https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
--2015-12-30 05:07:49-- https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
Resolving archive.cloudera.com... 23.235.41.167
Connecting to archive.cloudera.com|23.235.41.167|:443... failed: Connection refused.
yum update worked fine, and the internet works within the VM. Any help?
You need to add the repository first.
wget https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm
yum clean all
Then try to install hadoop components.
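For example, the same wildcard install from the question:
yum install hadoop\* mahout\* oozie\* hbase\* pig\* hive\* zookeeper\* hue\*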
Ref http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cdh_ig_cdh5_install.html#topic_4_4_1_unique_2__p_31_unique_2
I am writing a bash script for the public (and myself) to quickly set up a Puppet / TheForeman server on an Ubuntu 14.04 LTS server, based on this HOWTO. I have not done anything about DNS.
ping $(hostname -f)
PING foreman.test.local (192.168.1.2) 56(84) bytes of data.
64 bytes from foreman.test.local (192.168.1.2): icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from foreman.test.local (192.168.1.2): icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from foreman.test.local (192.168.1.2): icmp_seq=3 ttl=64 time=0.054 ms
64 bytes from foreman.test.local (192.168.1.2): icmp_seq=4 ttl=64 time=0.054 ms
After running the bash script I expect the test to give good results, but instead I get an error message:
sudo puppet agent --test
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 400 on SERVER: Failed to find foreman.test.local via exec: Execution of '/etc/puppet/node.rb foreman.test.local' returned 1:
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed when searching for node foreman.test.local: Failed to find foreman.test.local via exec: Execution of '/etc/puppet/node.rb foreman.test.local' returned 1:
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Reading around, it seems like the obvious option is
sudo apt-get install puppetmaster -y
but that is exactly the problem: it doesn't work in this case.
So in this case it should be
wget apt.puppetlabs.com/puppetlabs-release-trusty.deb
and
dpkg -i puppetlabs-release-trusty.deb
instead!
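So the full sequence would be something like this (a sketch using the commands above; the apt-get update is needed so apt picks up the new repository before installing):
wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
dpkg -i puppetlabs-release-trusty.deb
apt-get update
apt-get install puppetmaster -y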