`ddev get --list` doesn't work (lookup api.github.com: i/o timeout) - ddev

I need to add Solr to a DDEV project but am encountering errors when attempting to gather information about available services.
I'm following guidance here:
https://ddev.readthedocs.io/en/stable/users/extend/additional-services/
When I attempt to list all available services with ddev get --list, I receive this response after approximately 30 seconds:
Failed to list available add-ons: Unable to get list of available services: Get "https://api.github.com/search/repositories?q=topic:ddev-get+fork:true+org:drud": dial tcp: lookup api.github.com: i/o timeout
I'm not sure what the problem is. If I curl the URL from the error message, i.e. curl https://api.github.com/search/repositories?q=topic:ddev-get+fork:true+org:drud, I receive a JSON response from GitHub with repository information.
This has been happening for over two days now. I may be overlooking something, but I'm not sure what, exactly. I'm able to run DDEV projects using the standard installation (mariadb, nginx, nodejs, mailhog) but continue to run into errors when listing add-ons.
I have DDEV v1.21.4 installed.
I'm using an M1 Mac on macOS 13.1.
Thank you.

Your system is unable to do a DNS lookup of the hostname api.github.com, and this is happening on your macOS host. Are you able to ping api.github.com? Have you tried rebooting?
You may want to temporarily disable firewall, VPN, virus checker to see if that changes things. But you'll want to be able to get to where you can ping api.github.com.
There is an obscure Go DNS problem on macOS that affects systems with more than one DNS server configured, so that could be the cause if you're in that category. You might also consider changing your system's DNS server to 1.1.1.1, since the problem can sometimes be with your local DNS server (though the fact that you can curl the URL argues against that).
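If it helps to narrow things down, a quick host-side check on macOS might look something like this (nothing DDEV-specific; "Wi-Fi" is an assumed network service name, so substitute your own):
ping -c 3 api.github.com                     # does the OS resolver answer for that hostname at all?
dig api.github.com +short                    # query DNS directly
scutil --dns | grep nameserver               # list every resolver macOS is using (relevant to the multi-DNS-server Go issue)
networksetup -setdnsservers Wi-Fi 1.1.1.1    # temporarily point the active network service at 1.1.1.1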

Related

Unable to connect to GitHub for 2 years, regardless of network - any ideas?

Toku here.
I've been without GitHub for 2 years now, and I'm getting sick of it.
In terms of web connection, I get a basic "This site can't be reached" - ERR_CONNECTION_REFUSED.
Attempting to use the desktop app gives more information: one screenshot shows a port 443 error with more connection refusals, and another shows the result of a curl command.
I'm running Win10 Home build 19044, connected to a private home network with no restrictions. What would be causing this?
Tried reconfiguring DNS, and it simply didn't change anything.
At the suggestion of FiddlingAway, I resynced my system time, to no effect.
For reference, I am living in the US, so there should be no issues connecting due to a countrywide ban. The web interface does not load, even on a VPN.
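For anyone suggesting diagnostics, a minimal sketch of the host-side checks I can run looks like this (PowerShell on Win10; github.com is just the example host):
nslookup github.com                                                          # what does the hostname resolve to?
Test-NetConnection github.com -Port 443                                      # is anything answering on port 443?
Get-Content C:\Windows\System32\drivers\etc\hosts | Select-String github     # any stale hosts-file override?
curl.exe -v https://github.com/                                              # verbose connection attempt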

I can access my Heroku free subdomain site via a browser, but I can't ping it in the terminal

I have this site on Heroku that I am trying to ping. It says
Ping request could not find host https://doc-hero.herokuapp.com/. Please check the name and try again.
even though I can still open the site in a browser. I thought it was an issue with Heroku, so I hosted it again on PythonAnywhere, but it still won't ping.
I'm sure there is a simple explanation for this but I tried Google and still no luck.
Is there a way I can get it to ping?
You might be wondering why I want it to do that when I can still browse to it. Well, it's because I need it to be visible to OpenAI so that it can access some documents over there.
Remove the https:// prefix. ping uses a different protocol (ICMP) and takes a bare hostname resolved via DNS, not a URL with a protocol prefix.
The Heroku server will probably still not answer ping requests, since it isn't required to.
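For example (hostname taken from the question; the second command may still time out if the host drops ICMP):
ping https://doc-hero.herokuapp.com/    # wrong: ping does not understand URLs
ping doc-hero.herokuapp.com             # right: pass the bare hostname only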

Can someone elaborate on the necessary proxy settings in the install-config.yaml file for an OKD installation in an air-gapped environment?

I am attempting an installation of OKD 4.5 in a restricted (i.e. air-gapped) environment. I am running into an issue during the installation process wherein, as far as I can tell, the bootstrap machine is attempting and failing to access the mirrored registry I have running.
Based on my research, I believe this issue stems from a lack of proxy settings within the install-config.yaml file as described in the documentation here. However, I am having trouble wrapping my brain around what function the proxy information serves in the configuration and exactly what information I should be adding. I haven't been able to find any other part of the documentation that goes into detail about this either (if someone can simply point me toward such documentation, that would be extremely helpful).
Would anyone be willing to explain to me what values should go into the proxy lines in this file, and why? Does this information replace, complement, or require changes in any way to the networking segment of the configuration?
As a related question, do I need to change any of the networking subnet values to reflect my local network? In all the examples I've seen, the clusterNetwork.cidr and serviceNetwork subnets are the same as in the documentation (cidr: 10.128.0.0/14, serviceNetwork: - 172.30.0.0/16), and some include an additional machineNetwork field. Is this a field I should be adding, and if so, should I just set it to my local subnet?
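For reference, my current understanding of the shape of those stanzas is roughly the following (every address and hostname here is a placeholder from my reading of the docs, not a value from my environment):
proxy:
  httpProxy: http://proxy.okd.local:3128              # only if outbound traffic must pass through a proxy
  httpsProxy: http://proxy.okd.local:3128
  noProxy: .okd.local,10.128.0.0/14,172.30.0.0/16     # hosts/CIDRs to reach directly, e.g. the mirror registry
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14        # pod network, internal to the cluster
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16              # service network, also internal
  machineNetwork:
  - cidr: 192.168.100.0/24     # the subnet the node VMs actually sit on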
As context for my specific scenario, here are my environment specifications as well as the specific errors I am getting:
OKD Release: `4.5.0-0.okd-2020-10-15-235428`
Environment: Virtualized bootstrap, master, and worker nodes in virt-manager, running on CentOS 7 in an air-gapped environment. The host machine contains the install directory and also provides DNS, an Apache server, HAProxy for load balancing, and the mirrored registry.
Errors:
From <log-bundle>/bootstrap/journals/release-image.log:
localhost.localdomain release-image-download.sh[114151]: Error: Error initializing source docker://okd-services.okd.local:5000/okd#sha256<.....>:
error pinging docker registry okd-services.okd.local:5000: Get "https://okd-services.okd.local:5000/v2/":
dial tcp <okd-services.okd.local ip>:5000: connect: connection refused
From systemctl status named (several requests to IPs I don't recognize which seem to be NTP requests):
network unreachable resolving '2.fedora.pool.ntp.org.okd/AAAA..
network unreachable resolving './NS/IN': 199.7.91.13#53
etc
I have ensured that host-to-node and node-to-node communication is present, and that the registry is accessible from the nodes (to test, I netcat the certificate PEM onto a node, update its trust store, then curl -u the registry at https://fqdn:5000/v2/_catalog), so I am fairly certain all the connections are established properly.
To conclude, since I'm fairly sure that the proxy/network settings in the install-config.yaml file are to blame, and since I am unable to find more elaboration on these configurations in the official docs or elsewhere, I would very much appreciate any in-depth explanation of how I should be configuring this for an air-gapped environment. Additionally, if anyone believes that another issue is the cause, any input regarding that would be great.

Docker login from cmd line yields i/o timeouts

I'm trying to log in to Docker from the Docker Quickstart Terminal, but it doesn't work.
I always get an error saying: "Error response from daemon: Server Error: Post https://index.docker.io/v1/users/: dial tcp: i/o timeout"
I have been working with Docker the last few days and was always able to log in and push/pull. The only different thing I can think of is that I'm currently on a different Wi-Fi network.
Any help is greatly appreciated, thanks.
Karsten
This is a known issue, and is probably related to VirtualBox; after switching networks, boot2docker/VirtualBox sometimes loses connectivity or uses incorrect DNS settings.
DNS confused when switching WiFi networks
Container connectivity lost after switching wireless networks
Manual restart required every time network connectivity changes (contains some workarounds)
Docker Machine 0.5.3 adds two new options that may circumvent this:
--virtualbox-dns-proxy
--virtualbox-host-dns-resolver
After restarting the machine it worked again; I don't know what caused the problem, though...
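A rough sketch of both workarounds, in case someone else lands here (the machine name "default" is an assumption):
docker-machine restart default                 # restart the VM so it picks up the new network's DNS settings
eval "$(docker-machine env default)"           # re-export the Docker environment for the restarted machine
# or create a machine that resolves DNS through the host instead of the VM:
docker-machine create -d virtualbox --virtualbox-host-dns-resolver docker-hostdns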

My IP seems to be blocked by web hosting server

I have a strange problem. I just installed my PHP web site on shared hosting, and all services were working fine. But after configuring my app I could visit my web site only once; further attempts give:
"The server is taking too long to respond."
From another IP I can access it, but again only once. It seems all IP addresses are being blocked after the first visit (even FTP and other services go down, with no access at all from that IP). Can anyone help me explore this problem? I don't think it's a problem with my app; the app works fine on my local PC.
Thanks.
First thing to try would be a traceroute to determine where your traffic is being blocked.
In a Windows command prompt:
tracert www.yoursharedhostingserver.com
At the moment, trying to access this address gives this:
Fatal error: Class 'mainController' not found in /home/myicms/public_html/core/application/crApplication.class.php on line 181
I have tried it multiple times and it didn't block me. It might be that you have already solved this problem.
As far as I know, the behavior you describe could only be explained by a badly configured intelligent firewall. It may have been misconfigured by your host.
If you visit a site on a certain host and suddenly you cannot access an FTP service on that host, then it's either a (really bad) firewall or a (very mean) site that explicitly adds a firewall rule to ignore that address.
Some things you might look into:
It might be something with identd, too. What services have you configured on your host? Was it by any chance some kind of server control panel (which might have the ability to control a firewall)?
Is the blockade permanent, or does it lift after 24 hours, or only after rebooting the server? Does restarting some services make the blockade lift?
Did you install any software that "protects your server from port scanning"? It might be a bit too aggressive.
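If you can get shell access on that host, a rough way to check for that kind of automatic ban looks like this (fail2ban and CSF are only assumptions here; substitute whatever is actually installed, and 203.0.113.10 stands in for your blocked IP):
sudo iptables -L -n | grep 203.0.113.10    # any raw firewall rule mentioning the client IP?
sudo fail2ban-client status                # list active fail2ban jails, if fail2ban is in use
sudo csf -g 203.0.113.10                   # ask ConfigServer Firewall what it knows about that IP, if CSF is in use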
I wish you good luck in finding the source of this problem!
Chances are that if you can access it once, it's actually working. The problem is more likely in the PHP code than in the server.
