I am evaluating Consul service discovery with Traefik as a load balancer and reverse proxy. I have Consul running on my local machine with service1 registered, and I started Traefik on the same machine. Now I want Traefik to use the Consul catalog as its backend. For this, I put the following in Traefik's .toml:
[consulCatalog]
endpoint = "localhost:8500"
watch = true
prefix = ""
domain = "consul.localhost"
With this, I can access service1 through Traefik at:
http://service1.consul.localhost:9090/
My question: what URL should I use if Traefik is running on some other host? Should I replace localhost in the URL above with that host's actual IP? That does not seem to work.
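Traefik's consulCatalog frontend matches on the Host header (service.domain), so replacing the hostname with a raw IP won't match any routing rule; the name just has to resolve to the Traefik host. A minimal sketch, assuming the remote Traefik host has IP 192.0.2.10 (a placeholder):

```
# /etc/hosts (or a DNS record) on the client machine
192.0.2.10  service1.consul.localhost
```

After that, http://service1.consul.localhost:9090/ should reach the remote Traefik. You can also test without touching DNS by sending the Host header explicitly: curl -H "Host: service1.consul.localhost" http://192.0.2.10:9090/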
I created two sample applications (tcp-server and tcp-client) to check TCP connectivity in an Istio environment. I used the guide below to write the server and client in Go:
https://www.linode.com/docs/guides/developing-udp-and-tcp-clients-and-servers-in-go/
I deployed the applications in a Kubernetes cluster and tested without Istio; everything works fine.
But after installing Istio (demo profile; I followed this guide to install it: https://istio.io/latest/docs/setup/getting-started/)
and redeploying the apps so the Envoy sidecar gets injected, the client no longer connects to the server.
However, using the command below, the connection to the server succeeds:
sh -c "echo world | nc 10.244.1.29 1234"
What am I doing wrong?
Posting the solution I found.
Issue: I was trying to connect to the server using the node's IP address and NodePort, which somehow does not work in an Istio environment.
Solution: From the client, instead of the server node's IP address and NodePort, use the server app's Service name and container port.
Extra info: To use a client from outside the cluster, create a Gateway and VirtualService for the server, and in your external client use the IP address and NodePort of the istio-ingress pod as the server destination.
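The client-side change can be sketched in Go. This is a minimal, self-contained echo round trip run over the loopback address so it works anywhere; the service name tcp-server and port 1234 are taken from this thread, and the comments mark where the in-cluster dial target would differ:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// roundTrip stands in for the tcp-client: it dials the server,
// sends one line, and returns the server's reply.
func roundTrip() string {
	// Local stand-in for the tcp-server container listening on its
	// container port (1234, as in the question).
	ln, err := net.Listen("tcp", "127.0.0.1:1234")
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		defer conn.Close()
		line, _ := bufio.NewReader(conn).ReadString('\n')
		fmt.Fprintf(conn, "echo: %s", line) // echo the line back
	}()

	// Inside the cluster the client would dial the Service DNS name,
	// e.g. net.Dial("tcp", "tcp-server:1234"), NOT a node IP + NodePort.
	conn, err := net.Dial("tcp", "127.0.0.1:1234")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Fprintln(conn, "world")
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	return reply
}

func main() {
	fmt.Print(roundTrip()) // prints "echo: world"
}
```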
I am running the Airflow (version 1.10.10) webserver on EC2 behind an AWS ELB.
Here is the ELB listener configuration:
Load Balancer Protocol: SSL
Load Balancer Port: 443
Instance Protocol: TCP
Instance Port: 8080
Cipher: (omitted here)
SSL Certificate: (a cert here)
In front of the ELB, I configured Route 53 and set an FQDN for the webserver, say abc.fqdn.
All page loads work, like:
https://abc.fqdn/admin/ or
https://abc.fqdn/admin/airflow/tree?dag_id=tutorial
All web form submissions also work, like:
Trigger DAG
However, after a form submission the page is redirected to plain http, which does not load because of the ELB listener. I have to manually change the scheme back to https, e.g. https://abc.fqdn/admin/airflow/tree?dag_id=tutorial.
Here is what I did:
I read this article: https://github.com/apache/incubator-superset/issues/978
Then, on the webserver EC2 instance, I found the file /usr/local/lib/python3.7/site-packages/airflow/www/gunicorn_config.py
and this example config: https://gist.github.com/kodekracker/6bc6a3a35dcfbc36e2b7
I added the following settings, and my config file now looks like this:
import setproctitle

from airflow import settings

secure_scheme_headers = {
    'X-FORWARDED-PROTOCOL': 'ssl',
    'X-FORWARDED-PROTO': 'https',
    'X-FORWARDED-SSL': 'on'
}

forwarded_allow_ips = "*"
proxy_protocol = True
proxy_allow_from = "*"

def post_worker_init(dummy_worker):
    setproctitle.setproctitle(
        settings.GUNICORN_WORKER_READY_PREFIX + setproctitle.getproctitle()
    )
However, the new settings above do not seem to work.
Did I do anything wrong? How can I make my web node redirect to https after form submission?
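One thing worth checking in this setup: Airflow 1.10.x also has a built-in ProxyFix option in airflow.cfg that makes the webserver trust the X-Forwarded-Proto header from a proxy when building redirect URLs. A sketch, assuming abc.fqdn from the question and that this option is present in your 1.10.10 config:

```
[webserver]
base_url = https://abc.fqdn
enable_proxy_fix = True
```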
For https I used an ALB instead. I set up the Airflow webserver with a certificate (self-signed, generated for the domain the ALB will use) and served on port 8443 (choose anything you like). Then I set the ALB to route https to the target group containing the webserver ASG instances on port 8443, and told the ALB to use the properly signed certificate already in your AWS account (probably), not the self-signed one on the instance.
Oh, and change the base URL to the https scheme.
I had trouble with the ELB because I was similarly directing 443 (with the cert in the AWS account) to 8080, but 8080 was unencrypted.
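The base URL and port changes described above live in airflow.cfg; a sketch, where the domain and certificate paths are placeholders:

```
[webserver]
base_url = https://abc.fqdn
web_server_port = 8443
web_server_ssl_cert = /path/to/selfsigned.crt
web_server_ssl_key = /path/to/selfsigned.key
```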
I'm using Traefik v2 as a gateway. I have a frontend container running at https://some.site.com, served by Traefik.
Now I have a backend with multiple services, all listening on port 80. I want to serve them under paths such as https://some.site.com/api/service1, https://some.site.com/api/service2, and so on.
I have tried traefik.http.routers.service1.rule=(Host(`some.site.com`) && PathPrefix(`/api/service1`)) but it did not work, and traefik.http.middlewares.add-api.addprefix.prefix=/api/service1 did not work either.
How can I implement this?
Can you post your services' docker-compose configuration?
If you use middlewares, you also need to attach the middleware to the router, like:
traefik.http.routers.service1.middlewares=add-api
traefik.http.middlewares.add-api.addprefix.prefix=/api/service1
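A minimal docker-compose sketch of the router-plus-middleware pattern above (the image name is a placeholder; note that if the backend itself does not serve under /api/service1, a stripprefix middleware is the usual choice instead of addprefix, so the service receives paths without that prefix):

```yaml
services:
  service1:
    image: my-service1   # placeholder image
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.service1.rule=Host(`some.site.com`) && PathPrefix(`/api/service1`)"
      - "traefik.http.routers.service1.middlewares=svc1-strip"
      - "traefik.http.middlewares.svc1-strip.stripprefix.prefixes=/api/service1"
      - "traefik.http.services.service1.loadbalancer.server.port=80"
```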
I have so far configured servers inside Kubernetes containers that used HTTP or terminated HTTPS at the ingress controller. Is it possible to terminate HTTPS (or, more generally, TLS) traffic from outside the cluster directly in the container, and what would the configuration look like in that case?
This is for an on-premises Kubernetes cluster set up with kubeadm (with Flannel as the CNI plugin). Suppose the Kubernetes Service is configured with externalIPs 1.2.3.4 (where my-service.my-domain resolves to 1.2.3.4) for access from outside the cluster at https://my-service.my-domain. How could the web service running inside the container bind to the address 1.2.3.4, and how could the client verify a server certificate for 1.2.3.4 when the container's IP address is (AFAIK) some local IP address instead? I currently don't see how this could be accomplished.
UPDATE: My current understanding is that with an Ingress, HTTPS traffic is terminated at the ingress controller (i.e. at the "edge" of the cluster), and further communication inside the cluster towards the backing container is unencrypted. What I want is encrypted communication all the way to the container (both outside and inside the cluster).
I guess Istio's Envoy proxies are what you need; their main purpose is to authenticate, authorize, and encrypt service-to-service communication.
So you need a mesh with mTLS authentication, also known as service-to-service authentication.
Visually, Service A is your ingress service and Service B is the service for the HTTP container.
So you terminate external TLS traffic on the ingress controller, and it travels further inside the cluster with Istio mTLS encryption.
It's not exactly what you asked for -
terminate HTTPS traffic directly on Kubernetes container
but it fulfills the requirement -
What I want is encrypted communication all the way to the container
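A minimal sketch of enforcing the mTLS part described above with an Istio PeerAuthentication policy (assumes Istio is already installed; applying it in the istio-system namespace makes it mesh-wide):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when applied here
spec:
  mtls:
    mode: STRICT            # sidecars accept only mTLS traffic
```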
I created a custom HTTPS LoadBalancer (details) and I need my Kubernetes workload to be exposed through this LoadBalancer. For now, a request to this endpoint returns a 502 error.
When I choose the Expose option on the Workload console page, only TCP and UDP service types are available, and a TCP LoadBalancer is created automatically.
How do I expose a Kubernetes workload with an existing LoadBalancer? Or maybe I don't even need to, and requests fail because my instances are "unhealthy"? (healthcheck)
You need to create a Kubernetes Ingress.
First, expose the deployment from Kubernetes; for https choose port 443, and the service type can be either LoadBalancer (external IP) or ClusterIP. (You can also test this by accessing the IP or by port forwarding.)
Then create the Ingress.
Inside the yaml file, when choosing the backend, set the port and serviceName that were configured when exposing the deployment.
For example:
- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
On GCP, when the Ingress is created, a load balancer is created for it; the backend services and instance groups are built automatically too.
Then, if you want to use the already created load balancer, you just need to select the backend services from the LB that was created by the Ingress and add them there.
Also, the load balancer will only work if the health checks pass. You need a route that returns a 200 response over HTTPS for that.
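The expose-then-Ingress steps above can be sketched as a pair of manifests (all names and the target port are placeholders; this uses the pre-1.19 Ingress schema to match the serviceName/servicePort fields in the example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: your-service-name
spec:
  type: ClusterIP          # or LoadBalancer for an external IP
  selector:
    app: your-app          # placeholder pod label
  ports:
    - port: 443
      targetPort: 8443     # placeholder container port
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: your-ingress
spec:
  rules:
    - http:
        paths:
          - path: /some-route
            backend:
              serviceName: your-service-name
              servicePort: 443
```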