How to use an Envoy load balancer to route multiple domains on the same public IP (proxy)

Here is my architecture:
I want to bind multiple domains to the same IP address.
For example, when I enter foo.com in my browser I should see webapp1,
and when I type bar.com in my browser I should get webapp2.
For that I have two web apps:
webapp1 on 1.1.1.1:5000
webapp2 on 1.1.1.1:6000
Here is my Envoy version:
envoy version: d362e791eb9e4efa8d87f6d878740e72dc8330ac/1.18.2/clean-getenvoy-76c310e-envoy/RELEASE/BoringSSL
And here is my envoy.yaml config:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: foo.com
              domains:
              - "foo.com"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: service_foo
            - name: bar.com
              domains:
              - "bar.com"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: service_bar
          http_filters:
          - name: envoy.router
            typed_config: {}
  clusters:
  - name: service_foo
    connect_timeout: 1.00s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: service_foo
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 1.1.1.1
                port_value: 5000
                ipv4_compat: true
  - name: service_bar
    connect_timeout: 1.00s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: service_bar
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 1.1.1.1
                port_value: 6000
                ipv4_compat: true
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
When I enter foo.com in my browser it works, but bar.com does not.
What is the issue? Please help me.
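One way to test the virtual-host routing without going through DNS (the <public-ip> placeholder below is illustrative and stands for whatever address Envoy is listening on) is to send explicit Host headers:
curl -H "Host: foo.com" http://<public-ip>/
curl -H "Host: bar.com" http://<public-ip>/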

I did the same test with a slight difference in the YAML config file.
I think each service endpoint must be the internal (private) IP address.
Here is an example: I have two web apps running on Docker, start on port 3000 and blog on port 8080. The Docker images are here: https://hub.docker.com/r/ang67/blog and https://hub.docker.com/r/ang67/getting-started
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: start.com
              domains:
              - "start.com"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: service_start
            - name: blog.com
              domains:
              - "blog.com"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: service_blog
          http_filters:
          - name: envoy.filters.http.router
            typed_config: {}
  clusters:
  - name: service_start
    connect_timeout: 1.00s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: service_start
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 172.17.0.1
                port_value: 3000
                ipv4_compat: true
  - name: service_blog
    connect_timeout: 1.00s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: service_blog
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 172.17.0.1
                port_value: 8080
                ipv4_compat: true
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
Run Envoy in a container:
docker run --rm -it \
-v $(pwd)/envoy-custom.yaml:/envoy-custom.yaml \
-p 9901:9901 \
-p 80:80 \
envoyproxy/envoy-dev:2e6db8378477a4a63740746c5bfeb264cd76bc34 \
-c /envoy-custom.yaml
Run:
curl -H "Host: start.com" http://localhost
curl -H "Host: blog.com" http://localhost
Or add mappings for start.com and blog.com to your /etc/hosts in order to open them in a browser.
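For reference, such a mapping might look like the following (a sketch, assuming the browser runs on the same machine that publishes Envoy's port 80; otherwise replace 127.0.0.1 with that host's IP):
# /etc/hosts (illustrative; point the names at wherever Envoy is reachable)
127.0.0.1   start.com
127.0.0.1   blog.com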

Related

Can I reverse proxy both gRPC and grpc-web with Envoy?

I have a gRPC server, a web application, and a mobile application. For the web application I used Envoy to reverse proxy from grpc-web to the gRPC server through my domain. But only grpc-web can connect to my server through Envoy; my application that uses plain gRPC cannot connect. I want to reach my gRPC server through my domain with both grpc-web and gRPC. Can anyone help me explain and solve this issue? Thanks, everyone.
This is my Envoy setup:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 9090 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: echo_service
                  timeout: 0s
                  max_stream_duration:
                    grpc_timeout_header_max: 0s
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: custom-header-1,grpc-status,grpc-message
          http_filters:
          - name: envoy.filters.http.grpc_web
          - name: envoy.filters.http.cors
          - name: envoy.filters.http.router
  clusters:
  - name: echo_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    load_assignment:
      cluster_name: cluster_0
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: server
                port_value: 8080
I'm not sure what the exact question is. Currently you need to use Envoy to translate grpc-web to gRPC.
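One pattern sometimes used to serve native gRPC clients alongside grpc-web is a second listener that skips the grpc_web filter and forwards straight to the same cluster. This is only a sketch under assumptions not in the original post: the listener name, port 9091, and plaintext HTTP/2 from the mobile client are all illustrative.
# Hypothetical extra listener (append under static_resources.listeners); port 9091 is illustrative.
- name: listener_grpc
  address:
    socket_address: { address: 0.0.0.0, port_value: 9091 }
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        codec_type: AUTO
        stat_prefix: ingress_grpc
        route_config:
          name: grpc_route
          virtual_hosts:
          - name: grpc_service
            domains: ["*"]
            routes:
            - match: { prefix: "/" }
              route: { cluster: echo_service }
        http_filters:
        - name: envoy.filters.http.router
Whether this is sufficient also depends on how the mobile client connects (TLS/ALPN vs. cleartext), which the question does not specify.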

How to configure Envoy proxy for 2 gRPC services?

I am using an Envoy proxy for grpc-web and everything was working fine with one service, but now that I am registering other services I have run into problems. I thought a routed configuration would work, but when I try to hit the endpoint I get "DNS resolution failed" in BloomRPC.
Should I just move to a sidecar Envoy mesh configuration? I've been avoiding this because it adds complexity to development, but in my research to fix this problem it came up a lot.
I'm running it with docker-compose on macOS Catalina.
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9991 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 9911
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/service1"
                route:
                  cluster: service1
              - match:
                  prefix: "/service2"
                route:
                  cluster: service2
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET,PUT,DELETE,POST,OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout,access-token
                max_age: "1728000"
                expose_headers: custom-header-1,grpc-status,grpc-message,access-token
                allow_credentials: true
          http_filters:
          - name: envoy.filters.http.grpc_http1_bridge
            typed_config: { }
          - name: envoy.filters.http.grpc_web
            typed_config: { }
          - name: envoy.filters.http.cors
            typed_config: { }
          - name: envoy.filters.http.router
            typed_config: { }
  clusters:
  - name: service1
    connect_timeout: 0.25s
    type: strict_dns
    dns_lookup_family: V4_ONLY
    http2_protocol_options: { }
    lb_policy: round_robin
    load_assignment:
      cluster_name: service1
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: host.docker.internal #0.0.0.0
                port_value: 50101
  - name: service2
    connect_timeout: 0.25s
    type: strict_dns
    dns_lookup_family: V4_ONLY
    http2_protocol_options: { }
    lb_policy: round_robin
    load_assignment:
      cluster_name: service2
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: host.docker.internal #0.0.0.0
                port_value: 50102
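For context on the prefix matches above: gRPC clients send requests to paths of the form /<package.Service>/<Method>, so route prefixes are usually written against the fully qualified service names rather than arbitrary strings like /service1. A sketch, with pkg1.Service1 and pkg2.Service2 as hypothetical proto service names:
routes:
- match:
    prefix: "/pkg1.Service1/"   # hypothetical fully qualified gRPC service name
  route:
    cluster: service1
- match:
    prefix: "/pkg2.Service2/"   # hypothetical fully qualified gRPC service name
  route:
    cluster: service2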

How to successfully route Envoy to my second service?

I am trying to deploy 2 services using the Envoy front proxy configuration from the Envoy GitHub page.
My first service is the main site, which should work under "/", and the second service is the back-office administration, which should work under "/admin". The problem starts when I declare the prefix of my first service as "/": after that, Envoy doesn't route traffic to my admin service at all.
My front-envoy.yaml is:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: service1
              - match:
                  prefix: "/admin"
                route:
                  cluster: service2
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: service1
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: service1
        port_value: 80
  - name: service2
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: service2
        port_value: 80
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
Please advise.
The problem is that you have "/" as the first prefix matcher. Routes are evaluated in order, and a "/" prefix matches every request, including "/admin" requests. Change the order of your matches so that "/admin" comes first and "/" second; it should then work fine.
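Applied to the routes block from the question, the reordering looks like this:
routes:
- match:
    prefix: "/admin"
  route:
    cluster: service2
- match:
    prefix: "/"
  route:
    cluster: service1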

Envoy INVALID_ARGUMENT:static_resources.clusters[0].hosts[0]: invalid name url: Cannot find field

I'm using the Istio pilot-agent proxy in an OpenShift cluster.
I get an error: INVALID_ARGUMENT:static_resources.clusters[0].hosts[0]: invalid name url: Cannot find field...
Config:
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: egress_http
          use_remote_address: true
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local-services
              domains: ["*"]
              routes:
              - match: { prefix: "/service-a" }
                route: { cluster: service-a }
          http_filters:
          - name: envoy.router
  clusters:
  - name: service-a
    connect_timeout: 0.25s
    # dns_lookup_family: V4_ONLY
    lb_policy: round_robin
    type: strict_dns
    hosts:
    - url: tcp://service-a.apps-stage.vm.mos.cloud.sbrf.ru:80
From what I can tell with Envoy, the error "Cannot find field" means that you requested a field name (in this case, url) in a data structure, but Envoy doesn't support that field name in that data structure.
The "hosts" block, in your example, would look like:
hosts:
- socket_address:
    address: "service-a.apps-stage.vm.mos.cloud.sbrf.ru"
    port_value: 80
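Note that in newer Envoy releases the cluster-level hosts field is itself deprecated in favour of load_assignment; an equivalent form of the same endpoint (a sketch that keeps the original address and port) would be:
load_assignment:
  cluster_name: service-a
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: service-a.apps-stage.vm.mos.cloud.sbrf.ru
            port_value: 80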

Configuration of Istio-Envoy with Consul

I'm trying to build a service mesh with Istio. Currently I have a Docker Compose setup with two REST services and one Envoy sidecar for each. If you send an HTTP request to serviceA, it is forwarded to serviceB, which returns the result. It works fine. Because the control plane is not implemented yet, the "mesh" is realized through the Envoy configuration. The config files look as follows:
ServiceA:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/abc"
                route:
                  prefix_rewrite: "/v1/abc"
                  host_rewrite: serviceB
                  cluster: service_abc
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: service_abc
    connect_timeout: 0.25s
    type: logical_dns
    dns_lookup_family: V4_ONLY
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: serviceB
        port_value: 80
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8081
Service B:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: local_service
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: local_service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: 127.0.0.1
        port_value: 5001
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8081
Now I want to introduce Istio as the control plane and I'm trying to replace part of the configuration using Istio Pilot. Is it possible to override the route rules from the Envoy config files by implementing Istio route rules? For example:
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ruleSerA
spec:
  destination:
    name: serviceA
  precedence: 2
  route:
  - labels:
      version: v1
---
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ruleSerB
spec:
  destination:
    name: serviceB
  precedence: 2
  match:
    request:
      headers:
        uri:
          prefix: /abc
  rewrite:
    uri: /v1/abc
  route:
  - labels:
      version: v1
What else do I need to configure to connect the data plane with the control plane? So far the Istio route rules have not affected the services at all.
Best regards, Martin
