I'm having trouble setting up my local dev environment. I want to use Docker Desktop on my Windows 10 machine behind a local CNTLM proxy.
My CNTLM proxy works. This is the output after I start CNTLM:
0 [main] cntlm 17620 find_fast_cwd: WARNING: Couldn't compute FAST_CWD pointer. Please report this problem to
the public mailing list cygwin@cygwin.com
section: global, Username = 'pd03056'
section: global, Domain = 'Provinzial'
section: global, Proxy = '192.168.10.10:80'
section: global, NoProxy = 'localhost, 127.0.0.*, 10.*, 192.168.*'
section: global, Listen = '0.0.0.0:3128'
Resolve 0.0.0.0:
-> 0.0.0.0
cntlm: Proxy listening on 0.0.0.0:3128
Adding no-proxy for: 'localhost'
Adding no-proxy for: '127.0.0.*'
Adding no-proxy for: '10.*'
Adding no-proxy for: '192.168.*'
cntlm: Workstation name used: L00265511WP
cntlm: Using following NTLM hashes: NTLMv2(1) NT(0) LM(0)
Password:
cntlm: PID 17620: Cntlm ready, staying in the foreground
I confirmed that this works from a local WSL instance by running curl www.google.de. I have a local ~/.curlrc file pointing curl to my CNTLM.
$ more ~/.curlrc
-x 127.0.0.1:3128
cacert = /path/to/my/trusted-certs.pem
$ curl www.google.de
# I skip the whole output here ...
# The CNTLM log shows that it communicates with google.de
# and I get the correct result in WSL
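Equivalently, the same proxy can be exported as environment variables in WSL; many tools besides curl honor these (same address as in ~/.curlrc above):
export http_proxy=http://127.0.0.1:3128
export https_proxy=http://127.0.0.1:3128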
Now here is my problem: my Docker Desktop installation (version 4.4.4 [73704]) should use this proxy as well. I configured Docker Desktop to do so by putting http://host.docker.internal:3128 as the HTTP and HTTPS proxy in Settings -> Resources -> Proxies. But I always get an error.
Running docker run from my Windows terminal results in this message:
PS C:\Users\pd03056> docker run -it --rm hello-world:latest
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": remote error: tls: handshake failure.
See 'docker run --help'.
The CNTLM log does not show anything either, so Docker Desktop is not using my CNTLM! host.docker.internal resolves to a correct IP of my workstation. Putting 127.0.0.1 or localhost instead of host.docker.internal does not change anything.
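One way to narrow this down (a quick check, assuming the curl.exe that ships with Windows 10) is to send the same registry request through the proxy from the Windows side:
curl.exe -v -x http://127.0.0.1:3128 https://registry-1.docker.io/v2/
If this also fails with a TLS error, the handshake problem is in CNTLM's handling of the HTTPS CONNECT rather than in Docker Desktop's proxy settings.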
Anyone got an idea why my Docker Desktop installation does not pick up the Proxy config?
Docker Desktop on Mac is getting this error:
Unable to connect to the server: x509: certificate signed by unknown authority
The answers I have found so far didn't help much.
My system details:
Operating system: macOS Big Sur Version 11.6
Docker desktop version: v20.10.12
Kubernetes version: v1.22.5
When I do:
kubectl get pods
I get the below error:
Unable to connect to the server: x509: certificate signed by unknown authority
Posting the answer from the comments.
As it turned out after additional questions and answers, there was a previous installation of a Rancher cluster which left traces behind: a certificate and context in ~/.kube/config.
The solution in this case, for local development/testing, is to delete the ~/.kube folder with its configs entirely and initialize the cluster from scratch.
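A minimal sketch of that cleanup (back up the folder first, in case you still need other contexts from it):
cp -r ~/.kube ~/.kube.bak
rm -rf ~/.kube
Then re-enable Kubernetes in Docker Desktop so it writes a fresh config.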
If you are using a corporate laptop and everything you do goes through a proxy, you will get this message. When Docker Desktop tries to connect to the server defined in ~/.kube/config, it will try to go through the proxy, and you would need the cert issued by the company. Long story short, you are getting blocked by the company. To fix it, add a no-proxy entry for whatever value the server: field in ~/.kube/config points to (e.g. kubernetes.docker.internal). Meaning: if I am connecting to a Docker cluster which runs locally on my laptop, do not direct my traffic through the proxy.
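To see which server your current context points at (and therefore what to add to the no-proxy list), you can run:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'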
When running docker info after setting the no-proxy values, you should see something like this:
docker info | grep -i proxy
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal,localhost,127.0.0.1,.local,.us.example.com,.examplecorp.com,.examplevcn.com,kubernetes.docker.internal
hubproxy.docker.internal:5000
I installed Harbor on a host, using the plain HTTP protocol.
The IP is 192.168.33.10.
I can log in from the Harbor server itself:
sudo docker login 192.168.33.10
And I can access it from a browser:
http://192.168.33.10
But I can't log in from another client (a Mac with Docker installed). The error message is:
docker login 192.168.33.10
Username: user1
Password: (my_password)
Error response from daemon: Get https://192.168.33.10/v2/: dial tcp 192.168.33.10:443: getsockopt: connection refused
The Harbor documentation has this notice:
https://github.com/vmware/harbor/blob/master/docs/installation_guide.md
IMPORTANT: The default installation of Harbor uses HTTP - as such, you will need to add the option --insecure-registry to your client's Docker daemon and restart the Docker service.
Both the Harbor host and the client host have /etc/docker/daemon.json set:
{ "insecure-registries":["192.168.33.10"] }
and restarted Docker. However, it does not work.
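You can check whether the daemon actually picked up the setting with:
docker info | grep -A1 -i insecure
The registry should appear under Insecure Registries.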
If I don't set up Harbor with HTTPS for now, is there a way to access it from the client correctly?
Solution
It's unnecessary to set /etc/docker/daemon.json on the client. Docker Desktop for Mac has another way: add the registry to the insecure registries list in its preferences, then click Apply & Restart.
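Concretely (a sketch, assuming the Daemon / Docker Engine page in Docker Desktop's preferences), the JSON to add there is the same as on Linux:
{
  "insecure-registries": ["192.168.33.10"]
}
After clicking Apply & Restart, docker login 192.168.33.10 from the Mac should use plain HTTP.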
The full error message I'm getting is:
Attempting to renew cert from /etc/letsencrypt/renewal/somedomain.com.conf produced an unexpected error: Problem binding to port 443: Could not bind to IPv4 or IPv6.. Skipping.
This is running on an AWS Ubuntu 14.04 instance. All outgoing ports are open, and 443 is open for incoming traffic.
You just need to stop all running web servers, like Apache, Nginx, or OpenShift, before doing this.
Stop Nginx
sudo systemctl stop nginx
Stop Apache2
sudo systemctl stop apache2
You probably ran the script with the (preconfigured) --standalone option while your server was already running on port 443.
You can stop the server before the renewal and start it again afterwards.
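certbot can also do the stop/start for you with hooks; a minimal sketch, assuming a systemd-managed Nginx:
sudo certbot renew --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"
The hooks only run when a certificate is actually due for renewal, so this is safe to schedule in cron.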
The man page says:
--apache Use the Apache plugin for authentication & installation
--standalone Run a standalone webserver for authentication
--nginx Use the Nginx plugin for authentication & installation
--webroot Place files in a server's webroot folder for authentication
--manual Obtain certificates interactively, or using shell script hooks
If I run renew with --apache, I don't get any error.
As hinted in the other answers, you need to pass the option for your running webserver, for example:
Without webserver param:
sudo certbot renew
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges: tls-sni-01 challenge for example.com
Cleaning up challenges
Attempting to renew cert (example.com) from /etc/letsencrypt/renewal/example.com.conf produced an unexpected
error:
Problem binding to port 443: Could not bind to IPv4 or IPv6..
Skipping.
Then, again with the webserver param (success):
sudo certbot renew --nginx
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges: tls-sni-01 challenge for example.com
Waiting for verification...
Cleaning up challenges
new certificate deployed with reload of nginx server; fullchain is
/etc/letsencrypt/live/example.com/fullchain.pem
Congratulations, all renewals succeeded. The following certs have been
renewed: /etc/letsencrypt/live/example.com/fullchain.pem (success)
[This is specifically for Ubuntu]
Log in as the root user on your server.
Stop your server using the following command (for Nginx):
service nginx stop
Then renew your certificate
certbot renew
Start your server
service nginx start
[TIP] To check the expiry date of your renewed certificate, enter the command below
ssl-cert-check -c [Path_to_your_certificate]/fullchain.pem
For example
ssl-cert-check -c /etc/letsencrypt/live/[your_domain_name]/fullchain.pem
Or
ssl-cert-check -c /etc/letsencrypt/live/[your_domain_name]/cert.pem
If you don't have ssl-cert-check already installed on your server, install it using
apt install ssl-cert-check
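Alternatively, openssl (which is almost always present) can print the same expiry date:
openssl x509 -enddate -noout -in /etc/letsencrypt/live/[your_domain_name]/cert.pem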
Note: The certificate can be renewed only if it is not expired. If it has expired, you have to create a new one.
For NodeJS/PM2 users
I was using PM2 for my NodeJS service and when trying to renew the certificate I also got the "Problem binding to port 80: Could not bind to IPv4 or IPv6." error message.
As mentioned in the answers above for Apache/Nginx, stopping my service and then trying to renew solved the problem.
pm2 stop all
sudo certbot renew
pm2 start all
First you need to install the Nginx Let's Encrypt plugin (if you work with Nginx):
sudo apt install python-certbot-nginx
Then you can safely run:
sudo certbot renew --nginx
and it will work.
Note: certbot should already be installed.
For Nginx:
sudo certbot renew --nginx
This happened because you used --standalone. The purpose of that option is to launch a temporary webserver because you don't have one running.
Next time, use the --webroot method, and you'll be able to use your already-running nginx server.
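A minimal sketch of the webroot method, assuming your site is served from /var/www/html (adjust to your server block):
sudo certbot certonly --webroot -w /var/www/html -d example.com
certbot drops the challenge file under /.well-known/acme-challenge/ in that directory and nginx serves it, so nothing has to be stopped.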
Borrowing from @JKLIR: simply run
/etc/letsencrypt/letsencrypt-auto renew --apache >> /var/log/letsencrypt/renew.log
to renew the SSL certificate.
If you're trying to run the certbot command as a regular user, you may not have permission to bind to port 80 and other low ports. If this is the case, you can grant Python the capability to bind as follows:
First, see if you can find python 3+ (adjust as needed)
echo "$(readlink -f "$(which python3)")"
Allow python to open port 80 as a regular user (adjust as needed)
sudo setcap CAP_NET_BIND_SERVICE=+eip "$(readlink -f "$(which python3)")"
Re-run the failing certbot command.
Important: On Ubuntu 18.04, Python is called python3. It may be called a number of different things depending on the OS and how you obtained certbot. This command WILL VARY between OSs.
Warning: These lower ports are restricted for good reason. There are security considerations with the setcap command. You may read more about them here: https://superuser.com/a/892391
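If you want to undo this later, the capability can be removed again:
sudo setcap -r "$(readlink -f "$(which python3)")"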
I use Nginx and needed to stop the server before I could proceed. Then I ran the command:
$ sudo ./certbot-auto certonly --standalone -d chaklader.ddns.net
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for chaklader.ddns.net
Waiting for verification...
Cleaning up challenges
Subscribe to the EFF mailing list (email: xxx.chakfffder@gmail.com).
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/cdddddder.ddns.net/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/chaklader.ddns.net/privkey.pem
Your cert will expire on 2045-01-10. To obtain a new or tweaked
version of this certificate in the future, simply run certbot-auto
again. To non-interactively renew *all* of your certificates, run
"certbot-auto renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
I had a similar issue when I was running two websites (hosts) on a single instance. I stopped Nginx and then ran sudo certbot certonly --standalone --preferred-challenges http -d domain.com -d www.domain.com. After restarting Nginx, everything started working fine.
I'm learning Chef (12.10.24) and am trying to build a cookbook with recipes for provisioning machines that I'll do Ruby development on.
I'm trying to use knife bootstrap to set up my laptop as a node but am getting a connection error that I'm not sure how to get around. Here is the output:
➜ chef-repo$ knife bootstrap localhost -yN my-macbook-pro -p 2200 -x david -P [password]
Creating new client for my-macbook-pro
Creating new node for my-macbook-pro
Connecting to localhost
ERROR: Network Error: Connection refused - connect(2) for 127.0.0.1:2200
Check your knife configuration and network settings
Connecting to the Chef server is fine, but I can't connect to localhost. Any suggestions about what I might be doing wrong?
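One quick check (using the port from the command above) is to see whether anything is listening on it at all:
nc -zv localhost 2200
If this is refused as well, no SSH daemon is listening on that port.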
I neglected to mention that I am using OS X El Capitan. It turns out that the SSH daemon isn't on by default in OS X.
Turning it on in System Preferences > Sharing (check Remote Login) fixed the problem.
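The same setting can be toggled from a terminal (assuming an admin account):
sudo systemsetup -setremotelogin on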