I have an etcd cluster using TLS for security. I want other machines to use etcd proxy, so the localhost clients don't need to use TLS. Proxy is configured like this:
[Service]
Environment="ETCD_PROXY=on"
Environment="ETCD_INITIAL_CLUSTER=etcd1=https://master1.example.com:2380,etcd2=https://master2.example.com:2380"
Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/ssl/ca.pem"
Environment="ETCD_PEER_CERT_FILE=/etc/kubernetes/ssl/worker.pem"
Environment="ETCD_PEER_KEY_FILE=/etc/kubernetes/ssl/worker-key.pem"
Environment="ETCD_TRUSTED_CA_FILE=/etc/kubernetes/ssl/ca.pem"
And it works, as far as the first connection goes. But the etcd client does an initial query to discover the full list of servers, and then it performs its real query against one of the servers in that list:
$ etcdctl --debug ls
start to sync cluster using endpoints(http://127.0.0.1:4001,http://127.0.0.1:2379)
cURL Command: curl -X GET http://127.0.0.1:4001/v2/members
got endpoints(https://1.1.1.1:2379,https://1.1.1.2:2379) after sync
Cluster-Endpoints: https://1.1.1.1:2379, https://1.1.1.2:2379
cURL Command: curl -X GET https://1.1.1.1:2379/v2/keys/?quorum=false&recursive=false&sorted=false
cURL Command: curl -X GET https://1.1.1.2:2379/v2/keys/?quorum=false&recursive=false&sorted=false
Error: client: etcd cluster is unavailable or misconfigured
error #0: x509: certificate signed by unknown authority
error #1: x509: certificate signed by unknown authority
If I change the etcd masters to --advertise-client-urls=http://localhost:2379, then the proxy will connect to itself and get into an infinite loop. And the proxy doesn't modify the traffic between the client and the master, so it doesn't rewrite the advertised client URLs.
I must not be understanding something, because the etcd proxy seems useless.
Turns out that most etcd clients (locksmith, flanneld, etc.) will work just fine with a proxy in this mode. It's only etcdctl that behaves differently. Because I was testing with etcdctl, I thought the proxy config wasn't working at all.
If etcdctl is run with --skip-sync, then it will communicate through the proxy rather than retrieving the list of public endpoints.
etcdctl cluster-health ignores --skip-sync and always touches the public etcd endpoints. It will never work with a proxy.
With the option --endpoints "https://{YOUR_ETCD_ADVERTISE_CLIENT_URL}:2379".
Because you configured TLS for etcd, you should also add the options --ca-file, --cert-file, and --key-file.
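For example, a sketch that reuses the cert paths and hostname from the question above (whether the worker cert is accepted for client auth depends on how your CA was set up):
etcdctl --endpoints "https://master1.example.com:2379" \
  --ca-file /etc/kubernetes/ssl/ca.pem \
  --cert-file /etc/kubernetes/ssl/worker.pem \
  --key-file /etc/kubernetes/ssl/worker-key.pem \
  ls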
Related
I am looking for a simple solution to start a squid-like proxy server that supports username/password authentication.
It should be able to tunnel HTTPS requests using CONNECT.
A docker-based solution is
docker run --rm -it -p 3128:8080 mitmproxy/mitmproxy mitmdump --set proxyauth=user:pass
The --ignore-hosts option also enables TLS pass-through for things like mTLS and certificates not signed by a trusted root (e.g. via mkcert or self-signed).
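For example, a sketch combining proxy auth with pass-through for every host (the '.*' regex is an assumption; narrow it to the hosts you actually need):
docker run --rm -it -p 3128:8080 mitmproxy/mitmproxy \
  mitmdump --set proxyauth=user:pass --ignore-hosts '.*'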
Docker Desktop on Mac is getting this error:
Unable to connect to the server: x509: certificate signed by unknown authority
The following answers didn't help much:
My system details:
Operating system: macOS Big Sur Version 11.6
Docker desktop version: v20.10.12
Kubernetes version: v1.22.5
When I do:
kubectl get pods
I get the below error:
Unable to connect to the server: x509: certificate signed by unknown authority
Posting the answer from the comments.
As it turned out after additional questions and answers, there was a previous installation of a Rancher cluster that left traces behind: a certificate and context in ~/.kube/config.
The solution in this case, for local development/testing, is to delete the ~/.kube folder with its configs entirely and initialize the cluster from scratch.
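A minimal sketch of that cleanup (backing up instead of deleting outright is my own precaution, not part of the original answer):
mv ~/.kube ~/.kube.bak
# re-enable Kubernetes in Docker Desktop (or re-run your cluster init) so a fresh ~/.kube/config is generated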
If you are using a corporate laptop and everything you do goes through a proxy, you will get this message. When Docker Desktop tries to connect to the server defined in ~/.kube/config, the request is routed through the proxy, which requires the certificate issued by the company. Long story short, you are being blocked by the company. To fix it, add a no-proxy entry for whatever host appears in the server: field of ~/.kube/config, i.e. if I am connecting to a Docker cluster that runs locally on my laptop, do not direct that traffic through the proxy.
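To see which host that is, a quick check with standard kubectl (nothing specific to this setup):
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# for a local Docker Desktop cluster this typically prints https://kubernetes.docker.internal:6443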
After setting the no-proxy value, docker info should show something like this:
docker info | grep -i proxy
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal,localhost,127.0.0.1,.local,.us.example.com,.examplecorp.com,.examplevcn.com,kubernetes.docker.internal
hubproxy.docker.internal:5000
I'm new to Amazon Web Service (AWS).
I already created a PostgreSQL from AWS RDS:
Endpoint: database-1.XXX.rds.amazonaws.com
Port: 5432
Public accessibility: Yes
Availability zone: ap-northeast-1c
After that, I will push my application that uses the database to AWS (maybe deploy it to EKS).
However, I want to try testing the database server from my local computer first.
I haven't tried testing from my laptop PC at home yet, but I think it will connect fine because that machine does not use an HTTP proxy to reach the network.
The problem is that I want to test from my company PC, which needs an HTTP proxy configured to reach the internet. The PC spec:
Windows 10
Installed PostgreSQL 10
Firstly, I tried using psql command-line:
psql -h database-1.XXXX.rds.amazonaws.com -U postgre
> Unknown host
set http_proxy=http://user:password@my_company_proxy:3128
set https_proxy=http://user:password@my_company_proxy:3128
psql -h database-1.XXXX.rds.amazonaws.com -U postgre
> Unknown host
set http_proxy=http://my_second_company_proxy:3128
set https_proxy=http://my_second_company_proxy:3128
psql -h database-1.XXXX.rds.amazonaws.com -U postgre
> Unknown host
Then, I tried using the pgAdmin tool.
According to a post I found online, the "SSH Tunnel" settings can be used to enter a proxy.
However, an error message is shown.
So, can anyone suggest whether we can connect to a public PostgreSQL server through an HTTP proxy?
I think the problem is that Postgres uses a plain TCP protocol, while you are trying to use an HTTP proxy. You are also trying to create an SSH tunnel against your HTTP proxy server, which won't work.
So I'd suggest the following solutions:
Use a TCP proxy instead of an HTTP proxy
Create an EC2 instance (or any host) that is reachable over SSH from your company network and has access to the public internet, so that you can create an SSH tunnel through it to achieve your goal (a sketch follows after the note below).
NOTE: Make sure your PostgreSQL instance is accessible from the public internet (usually a bad idea, but out of scope for this question); security group configs sometimes prevent connections from the public internet.
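A sketch of that tunnel, assuming a hypothetical bastion host you can reach over SSH from the office network:
ssh -N -L 5432:database-1.XXXX.rds.amazonaws.com:5432 ec2-user@bastion.example.com
# then point psql at the forwarded local port:
psql -h localhost -p 5432 -U postgre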
Just add the required ports (5432, 3128, ...) to the security group attached to your RDS instance and allow your IP. Don't forget the "/32".
Let me add that "unknown host" is usually an indication that you're not resolving the DNS hostname. Also, your HTTP proxy should not interfere with connections to databases, since they aren't on port 80 or 443. A couple of things you can try (assuming you're on Windows); substitute your actual URL:
nslookup database-1.XXXX.rds.amazonaws.com
telnet database-1.XXXX.rds.amazonaws.com 5432
You should also check the security group attached to your RDS instance and make sure you've opened up TCP/5432 to the IP address you're originating from.
Lastly, check that your VPC has DNS resolution and DNS hostnames enabled. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating
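For example, with the AWS CLI (the VPC ID is a placeholder):
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames "{\"Value\":true}"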
Hi, I would like to expose my Elasticsearch cluster in Kubernetes, created using ECK (https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html), so it can be accessed externally.
I have a requirement to set up Functionbeat to ship AWS Lambda CloudWatch logs to Elasticsearch.
Please see Step 2: Connect to the Elastic Stack https://www.elastic.co/guide/en/beats/functionbeat/current/functionbeat-installation-configuration.html
Attempt:
I have an Elastic Load Balancer with HAProxy running on it, which I use to expose other k8s services externally, such as frontends. I've attempted to modify this to also expose Elasticsearch.
haproxy
frontend elasticsearch
bind *:9200
acl host_data_elasticsearch hdr(host) -i elasticsearch.acme.com
use_backend elasticsearchApp if host_data_elasticsearch
backend elasticsearchApp
server data-es data-es-es-http:9200 check rise 1 ssl verify none
I'm attempting to see if I can connect using the following curl command:
curl -u "elastic:$ELASTIC_PASSWORD" -k "https://elasticsearch.acme.com:9200"
However i get the following error:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
In the browser, if I navigate to the URL, I get:
This site can’t provide a secure connection
elasticsearch.acme.com sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
Posting the answer as community wiki, based on @Joao Morais' comment:
You added ssl to the server line, which instructs HAProxy to perform SSL offload, but you didn't add the ssl settings to the frontend. It seems you should either remove the ssl + verify from the server line, add ssl to the frontend, or send a plain HTTP request.
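For the "add ssl to the front" option, a sketch of what that could look like (the certificate path is an assumption; HAProxy expects a combined cert+key PEM there):
frontend elasticsearch
bind *:9200 ssl crt /etc/haproxy/certs/elasticsearch.acme.com.pem
acl host_data_elasticsearch hdr(host) -i elasticsearch.acme.com
use_backend elasticsearchApp if host_data_elasticsearch
backend elasticsearchApp
server data-es data-es-es-http:9200 check rise 1 ssl verify none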
Additional information:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number indicates that the endpoint you are reaching is not speaking TLS.
To access it, replace https: with http: in your curl command so it looks like this:
curl -u "elastic:$ELASTIC_PASSWORD" -k "http://elasticsearch.acme.com:9200"
I am trying to play around with Kubernetes, specifically the REST API. The steps to connect to the cluster API are listed here. However, I'm stuck on the first step, i.e. running kubectl proxy.
I try running this:
kubectl --context='vagrant' proxy --port=8080 &
which returns error: couldn't read version from server: Get https://172.17.4.99:443/api: dial tcp 172.17.4.99:443: i/o timeout
What does this mean? How do I overcome it and connect to the API?
Check that your docker, proxy, kube-apiserver, and kube-controller-manager services are running without error. Check their status using systemctl status your-service-name. If a service is loaded but not running, restart it using systemctl restart your-service-name.
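For example (the unit names are placeholders; substitute whatever your installer actually created):
systemctl status docker kube-apiserver kube-controller-manager
systemctl restart kube-apiserver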