argocd repo add not using proxy

I am running Argo CD in a private cluster in AKS and I am trying to connect to a git repo that is behind a proxy. When I try to add the repo using the --proxy flag, it doesn't seem to try to connect via the proxy. The domain doesn't even resolve, and when I swap in the IP address the connection just times out. I am not convinced the repo add command is using the --proxy flag at all.
I have validated that I can access the proxy with no issue, but for some reason argocd repo add isn't using it.
Has anyone out there run across a similar issue?
With the URL (not the actual one):
argocd repo add ssh://git@xyz.com:7999/clouds.git --proxy http://proxy:3128 --ssh-private-key-path ~/.ssh/id_ed25519
FATA[0000] rpc error: code = Unknown desc = error testing repository connectivity: dial tcp: lookup xyz.com on 10.0.0.10:53: no such host
With the IP address:
argocd repo add ssh://git@xxx.xxx.xxx.xxx:7999/clouds.git --ssh-private-key-path ~/.ssh/id_ed25519 --proxy http://proxy:3128
FATA[0060] rpc error: code = DeadlineExceeded desc = context deadline exceeded
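Since the FATA messages are rpc errors, the connectivity test runs server-side in the repo-server, not in the CLI. One way to check whether the --proxy value is being honored at all is to test the same repository over HTTPS, because an HTTP proxy handles HTTPS CONNECT traffic natively, whereas SSH on port 7999 only goes through the proxy if it allows arbitrary CONNECT tunnels. A sketch; the HTTPS clone URL and credentials are placeholders:
# Sketch: verify the proxy path with an HTTPS clone URL first (placeholders).
argocd repo add https://xyz.com/clouds.git \
  --proxy http://proxy:3128 \
  --username myuser \
  --password mytoken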

Related

kubectl giving error: Unable to connect to the server: x509: certificate signed by unknown authority

Docker Desktop on Mac is giving this error:
Unable to connect to the server: x509: certificate signed by unknown authority
The answers I found elsewhere didn't help much.
My system details:
Operating system: macOS Big Sur Version 11.6
Docker desktop version: v20.10.12
Kubernetes version: v1.22.5
When I do:
kubectl get pods
I get the below error:
Unable to connect to the server: x509: certificate signed by unknown authority
Posting the answer from comments
As it turned out from the follow-up questions and answers, there was a previous installation of a Rancher cluster that left traces behind: a certificate and context in ~/.kube/config.
The solution in this case, for local development/testing, is to delete the ~/.kube folder with its configs entirely and initialize the cluster from scratch.
If you are using a corporate laptop and everything you do goes through a proxy, you will get this message. When Docker Desktop tries to connect to the server defined in ~/.kube/config, it will try to go through the proxy, and you would need the certificate issued by the company. Long story short, you are being blocked by the company... To fix it, add the no-proxy properties, using whatever value is defined for server: in ~/.kube/config (e.g. kubernetes.docker.internal). In other words: if I am connecting to a Docker cluster that runs locally on my laptop, do not direct my traffic through the proxy.
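A minimal shell sketch of that check (the hostname is taken from the docker info output below and is otherwise an assumption; Docker Desktop's own bypass list is set in its Preferences):
# See which API server kubectl is pointed at, then exclude it from the proxy
# for the current shell before retrying.
grep 'server:' ~/.kube/config
export no_proxy=kubernetes.docker.internal,localhost,127.0.0.1
export NO_PROXY="$no_proxy"
kubectl get pods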
When you run docker info after setting no proxy, you should see something like this:
docker info | grep -i proxy
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal,localhost,127.0.0.1,.local,.us.example.com,.examplecorp.com,.examplevcn.com,kubernetes.docker.internal
hubproxy.docker.internal:5000

How to set a proxy on bosh-cli

I'm trying to upload a bosh release into the director. I use a Virtualbox environment and I'm behind a corporate proxy.
Even when I've tried to set the proxy with
export https_proxy=http://myproxy:3128
or with
export BOSH_ALL_PROXY=http://myproxy:3128
I never manage to download anything.
Does anyone know how to do this?
MBP-de-Olivier:bosh-deployment olivier$ bosh -e vbox upload-release https://bosh.io/d/github.com/cloudfoundry/cf-release?v=283
Using environment '192.168.50.6' as client 'admin'
Task 13
Task 13 | 15:28:45 | Downloading remote release: Downloading remote release (00:00:05)
L Error: Failed to open TCP connection to bosh.io:443 (Address family not supported by protocol - socket(2) for "bosh.io" port 443)
Task 13 | 15:28:50 | Error: Failed to open TCP connection to bosh.io:443 (Address family not supported by protocol - socket(2) for "bosh.io" port 443)
Have you tried downloading the release locally and uploading it to your BOSH director from your local host?
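For example, a rough sketch of that workaround (proxy address and filename are placeholders):
# Download the release tarball through the proxy, then upload the local file.
curl -x http://myproxy:3128 -L -o cf-release-283.tgz \
  "https://bosh.io/d/github.com/cloudfoundry/cf-release?v=283"
bosh -e vbox upload-release ./cf-release-283.tgz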
I think you should add the following lines:
export http_proxy=http://yourproxy:3128
export https_proxy=http://yourproxy:3128
Are you sure that 3128 is the correct port? This seems as if you are using cntlm (or another similar local proxy). If it is a local proxy: Is the service running? Can the service connect to the corporate proxy?
My guess is that the BOSH director does not know it must use a proxy. I'm under the impression you tried to configure the proxy at the bosh-cli level, but the download is performed by the director itself.
You could try to re-deploy the director with your proxy configuration. You can use this ops file in order to do so.
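A hedged sketch of what that redeploy could look like; the ops-file path and variable names here are assumptions, so check what your bosh-deployment checkout actually provides:
# Re-create the VirtualBox director with proxy settings applied via an ops file.
# Keep the flags from your original `bosh create-env` invocation and add:
bosh create-env bosh-deployment/bosh.yml \
  --state state.json --vars-store creds.yml \
  -o bosh-deployment/virtualbox/cpi.yml \
  -o bosh-deployment/misc/proxy.yml \
  -v http_proxy=http://myproxy:3128 \
  -v https_proxy=http://myproxy:3128 \
  -v no_proxy=localhost,127.0.0.1,192.168.50.6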

error during connect: ... http: server gave HTTP response to HTTPS client on any docker command with remote host

I'm trying to connect to a remote Docker host through an SSH tunnel. I have forwarded port 2375 and I'm trying to connect to it by specifying DOCKER_HOST.
$ DOCKER_API_VERSION=1.24 DOCKER_HOST=localhost:2375 docker ps
error during connect: Get https://localhost:2375/v1.24/containers/json: http: server gave HTTP response to HTTPS client
This has worked before, but I can't make it work again because my Docker client keeps giving me this error. I can't make it ignore the https/http mismatch. The connection is OK; I can curl the endpoints just fine. It's just that the Docker client is doing something that prevents it from connecting, and I don't know how to make it ignore HTTPS.
I have finally figured out why I was getting this error. I was positive DOCKER_TLS_VERIFY was not set, but it was. So if anyone gets this error, make sure the env variable is undefined or that its value is empty.
using
$ DOCKER_API_VERSION=1.24 DOCKER_HOST=localhost:2375 DOCKER_TLS_VERIFY= docker ps
did work as expected.
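A quick sketch of that check, clearing the variable in the current shell before retrying:
# Show any DOCKER_TLS* variables that are set, then clear the offending one.
env | grep -i docker_tls
unset DOCKER_TLS_VERIFY
DOCKER_API_VERSION=1.24 DOCKER_HOST=localhost:2375 docker ps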

etcd2 in proxy mode doesn't do anything useful

I have an etcd cluster using TLS for security. I want other machines to use etcd proxy, so the localhost clients don't need to use TLS. Proxy is configured like this:
[Service]
Environment="ETCD_PROXY=on"
Environment="ETCD_INITIAL_CLUSTER=etcd1=https://master1.example.com:2380,etcd2=https://master2.example.com:2380"
Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/ssl/ca.pem"
Environment="ETCD_PEER_CERT_FILE=/etc/kubernetes/ssl/worker.pem"
Environment="ETCD_PEER_KEY_FILE=/etc/kubernetes/ssl/worker-key.pem"
Environment="ETCD_TRUSTED_CA_FILE=/etc/kubernetes/ssl/ca.pem"
And it works, as far as the first connection goes. But the etcd client does an initial query to discover the full list of servers, and then it performs its real query against one of the servers in that list:
$ etcdctl --debug ls
start to sync cluster using endpoints(http://127.0.0.1:4001,http://127.0.0.1:2379)
cURL Command: curl -X GET http://127.0.0.1:4001/v2/members
got endpoints(https://1.1.1.1:2379,https://1.1.1.2:2379) after sync
Cluster-Endpoints: https://1.1.1.1:2379, https://1.1.1.2:2379
cURL Command: curl -X GET https://1.1.1.1:2379/v2/keys/?quorum=false&recursive=false&sorted=false
cURL Command: curl -X GET https://1.1.1.2:2379/v2/keys/?quorum=false&recursive=false&sorted=false
Error: client: etcd cluster is unavailable or misconfigured
error #0: x509: certificate signed by unknown authority
error #1: x509: certificate signed by unknown authority
If I change the etcd masters to --advertise-client-urls=http://localhost:2379, then the proxy will connect to itself and get into an infinite loop. And the proxy doesn't modify the traffic between the client and the master, so it doesn't rewrite the advertised client URLs.
I must not be understanding something, because the etcd proxy seems useless.
Turns out that most etcd clients (locksmith, flanneld, etc.) will work just fine with a proxy in this mode. It's only etcdctl that behaves differently. Because I was testing with etcdctl, I thought the proxy config wasn't working at all.
If etcdctl is run with --skip-sync, then it will communicate through the proxy rather than retrieving the list of public endpoints.
etcdctl cluster-health ignores --skip-sync and always touches the public etcd endpoints. It will never work with a proxy.
Use the option --endpoints "https://{YOUR_ETCD_ADVERTISE_CLIENT_URL}:2379".
Because you configured TLS for etcd, you should also add the --ca-file, --cert-file, and --key-file options.
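Putting those two answers together, a sketch of both approaches (the flag names come from the answers above; the endpoint and certificate paths reuse the values from the proxy unit and are otherwise placeholders):
# Option 1: go through the local proxy and skip the member-list sync.
etcdctl --skip-sync ls
# Option 2: talk to an advertised endpoint directly, presenting the TLS material.
etcdctl --endpoints "https://master1.example.com:2379" \
  --ca-file /etc/kubernetes/ssl/ca.pem \
  --cert-file /etc/kubernetes/ssl/worker.pem \
  --key-file /etc/kubernetes/ssl/worker-key.pem \
  ls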

How to push to heroku behind a proxy?

I am using git behind a proxy server at my university. While trying to execute
git push heroku master
I get an error
ssh: connect to host proxy.heroku.com port 22: Bad file number
fatal: The remote end hung up unexpectedly
I had a similar problem when pushing to git earlier, but that was solved using their smart HTTP. From what I've read so far, it seems to be a network problem. How do I fix this? Is there any way to push to Heroku using HTTP? (I'm guessing pushing through SSH is causing this problem and that port 22 is blocked.)
Corkscrew is a tool for tunneling SSH through HTTP proxies
Setting up Corkscrew with SSH/OpenSSH is very simple. Adding the following line to your ~/.ssh/config file will usually do the trick (replace proxy.example.com and 8080 with correct values):
ProxyCommand /usr/local/bin/corkscrew proxy.example.com 8080 %h %p
Follow http://www.agroman.net/corkscrew/README
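A fuller sketch of that setup, scoping the tunnel to Heroku only so other SSH traffic keeps going direct (proxy host, port, and key path are placeholders):
# Append a host-specific entry so only pushes to Heroku use corkscrew.
cat >> ~/.ssh/config <<'EOF'
Host heroku.com
    User git
    IdentityFile ~/.ssh/id_rsa
    ProxyCommand /usr/local/bin/corkscrew proxy.example.com 8080 %h %p
EOF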
Heroku only supports git pushes over SSH (port 22) - it's likely that your university is preventing outbound port 22 access which causes your push to fail.

Resources