Istio egressgateway doesn't collect metrics with label `reporter=destination`

When an app sends a request to an external ServiceEntry via the egress gateway, the traffic should flow like
app -> egressgateway -> service entry
However, Kiali does not show the traffic from the app to the egress gateway.
I found that this is because the Istio gateway proxy only collects metrics with the label reporter=source, whereas a sidecar proxy also collects metrics with reporter=destination in addition to reporter=source.
Does anyone know whether it is possible to make the gateway proxy collect metrics with reporter=destination too?
Many thanks!
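For anyone trying to reproduce this, the asymmetry can be checked directly in Prometheus. A sketch of the two queries (the metric and label names are Istio's standard telemetry; the gateway workload name is the default one and may differ in your install):

```promql
# Reported by the calling app's sidecar -- this series exists:
istio_requests_total{reporter="source", destination_workload="istio-egressgateway"}

# Reported by the gateway as the receiving side -- this series is missing,
# which is why Kiali cannot draw the app -> egressgateway edge:
istio_requests_total{reporter="destination", destination_workload="istio-egressgateway"}
```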

Related

How to make subrequests on a single route nginx

I have an Nginx reverse proxy listening on port 80, and many services running on other ports or subpaths in our scenario. Outside requests are proxied to the right service / API / server based on the requested route, like this (diagram not included).
Some requests just check the environment's health and responsiveness, and currently I have to send a separate health-check request to each of the inner services from outside.
This Nginx proxy runs on a Windows VM, and the inner services can be on the same VM, in a container, on other subnets, and so on.
I'm trying to configure a single route on the Nginx proxy that responds with one body containing the health responses of all inner services. The desired scenario would look like this (diagram not included).
I've read that the njs module could do the trick, but it seems to run only on Linux and macOS.
Do you have any ideas on how I can achieve this?
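For reference, a minimal sketch of what the njs-based aggregation could look like, assuming njs is available on the platform. The service paths and ports (`/svc-a/`, `/svc-b/`, 5001, 5002) are hypothetical:

```nginx
# nginx.conf (sketch; requires the ngx_http_js_module)
load_module modules/ngx_http_js_module.so;

http {
    js_import health from health.js;

    server {
        listen 80;

        # single aggregate health route
        location = /health {
            js_content health.aggregate;
        }

        # internal locations, one per inner service (hypothetical ports)
        location /svc-a/ { internal; proxy_pass http://127.0.0.1:5001/; }
        location /svc-b/ { internal; proxy_pass http://127.0.0.1:5002/; }
    }
}
```

```js
// health.js -- issues subrequests to each inner service and merges the results
async function aggregate(r) {
    const replies = await Promise.all([
        r.subrequest('/svc-a/health'),
        r.subrequest('/svc-b/health'),
    ]);
    const body = {
        'svc-a': { status: replies[0].status },
        'svc-b': { status: replies[1].status },
    };
    r.return(200, JSON.stringify(body));
}
export default { aggregate };
```

Without njs (or an equivalent like the Lua module in OpenResty), stock Nginx configuration alone cannot merge multiple upstream responses into one body.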

Azure Traffic Manager - Disabled Endpoint still accessible?

I have configured Azure Traffic Manager with two endpoints and could reach them through the Traffic Manager URL. To validate the scenario where endpoints are disabled, I disabled both endpoints.
To my surprise, the Traffic Manager URL was still accessible for about two minutes. Is this expected?
It's expected.
When you enable or disable the endpoint status, it controls the availability of the endpoint in the Traffic Manager profile. The underlying service, which might still be healthy, is unaffected. When an endpoint status is disabled, Traffic Manager does not check its health, and the endpoint is not included in a DNS response. Because Traffic Manager works at the DNS level, clients that have already resolved an endpoint's address keep connecting to it directly until their cached DNS record expires (the profile's DNS TTL, 60 seconds by default), which accounts for the roughly two minutes you observed. Read https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-monitoring#endpoint-and-profile-status
Also note:
"Disabling an endpoint has nothing to do with its deployment state in Azure. A healthy endpoint remains up and able to receive traffic even when disabled in Traffic Manager. Additionally, disabling an endpoint in one profile does not affect its status in another profile."
Read https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-manage-endpoints

Intercept and forward DynamoDB traffic using aws-sdk-go

I have a use case with services that need to interact with DynamoDB (the programming environment is Go). Assume these services don't have AWS credentials, and I have a custom AuthN/AuthZ mechanism to validate the services internally and set credentials. I want to write an AuthN proxy service that intercepts requests to DynamoDB, checks which type of operation it is (Get/Put/Delete), validates it, attaches DynamoDB credentials to the request, queries DynamoDB, and sends the response back to the client. I tried using a proxy as described in the DynamoDB documentation, but that uses HTTP CONNECT tunnelling, so I couldn't intercept the traffic in between because it is HTTPS traffic to DynamoDB. Can someone tell me how I can achieve this using the AWS Go SDK?
Thanks in advance.

Consul & Envoy Integration

Background
I come from an HAProxy background, and recently there has been a lot of hype around the "Service Mesh" architecture. Long story short, I began to learn Envoy and Consul.
My understanding is that Envoy is proxy software deployed as a sidecar to abstract inbound/outbound networking, with xDS as the data-plane API and source of truth (clusters, routes, filters, etc.). Consul provides service discovery, segmentation, etc. It also abstracts the network and has a data plane, but Consul can't do the complex load balancing and filter-based routing that Envoy does.
As standalone tools I can understand how they work and how to set them up, since the documentation is relatively good. But it quickly becomes a headache when I want to integrate Envoy and Consul, since the documentation for both lacks specifics on integration, use cases, and best practices.
Schematic
Consider the following simple infrastructure design (diagram not included):
Legends:
CS: Consul Server
CA: Consul Agent
MA: Microservice A
MB: Microservice B
MC: Microservice C
EF: Envoy Front Facing / Edge Proxy
Questions
Following are my questions:
1. In the case of multi-instance microservices, standalone Consul will randomize / round-robin between instances. With the Envoy & Consul integration, how does Consul handle a multi-instance microservice? Which software does the load balancing?
2. Consul has the Consul server to store its data; however, Envoy does not seem to have an "Envoy server" to store its data. Where is its data stored, and how is it distributed across multiple instances?
3. What about an Envoy cluster (a logical group of Envoy front-facing proxies, NOT a cluster of services)? How is the leader elected?
4. As I mentioned above, when run separately, Consul and Envoy each have their own sidecar/agent on every machine. I read that when integrated, Consul injects the Envoy sidecar, but I found no further information on how this works.
5. If Envoy uses the Consul server as its xDS source, what if, for example, I want to add an advanced filter so that requests for a certain URL segment are forwarded to a certain instance?
6. If Envoy uses the Consul server as its xDS source, what if I have another machine and services that (for some reason) are not managed by the Consul server? How do I configure Envoy to add filters, clusters, etc. for that machine and those services?
Thank you! I'm excited, and I hope this thread can be helpful to others too.
Apologies for the late reply. I figure it's better late than never. :-)
If you are only using Consul for service discovery, and directly querying it via DNS then Consul will randomize the IP addresses returned to the client. If you're querying the HTTP interface, it is up to the client to implement a load balancing strategy based on the hosts returned in the response. When you're using Consul service mesh, the load balancing function will be entirely handled by Envoy.
Consul is an xDS server. The data is stored within Consul and distributed to the agents within the cluster. See the Connect Architecture docs for more information.
Envoy clusters are similar to backend server pools. Proxies contain Clusters for each upstream service. Within each cluster, there are Endpoints which represent the individual proxy instances for the upstream services.
Consul can inject the Envoy sidecar when it is deployed on Kubernetes. It does this through a Kubernetes mutating admission webhook. See Connect Sidecar on Kubernetes: Installation and Configuration for more information.
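For example, on Kubernetes the injection is requested per pod via an annotation. A minimal sketch (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: microservice-a                         # illustrative name
  annotations:
    consul.hashicorp.com/connect-inject: "true"  # triggers the mutating webhook
spec:
  containers:
    - name: app
      image: example/microservice-a:latest     # illustrative image
```

The webhook then mutates the pod spec to add the Envoy sidecar container alongside the application container.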
Consul supports advanced layer 7 routing features. You can configure a service-router to route requests to different destinations by URL paths, headers, query params, etc.
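As an illustration of such a route, a service-router config entry might look like the following sketch (the service names and path are made up):

```hcl
Kind = "service-router"
Name = "web"
Routes = [
  {
    Match {
      HTTP {
        PathPrefix = "/admin"
      }
    }
    Destination {
      Service = "admin-backend"
    }
  },
]
```

Such an entry can be applied with `consul config write`, and Consul translates it into the corresponding Envoy route configuration over xDS.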
Consul has an upcoming feature in version 1.8 called Terminating Gateways which may enable this use case. See the GitHub issue "Connect: Terminating (External Service) Gateways" (hashicorp/consul#6357) for more information.

How to setup a websocket in Google Kubernetes Engine

How do I enable a port on Google Kubernetes Engine to accept websocket connections? Is there a way to do this other than using an ingress controller?
Web sockets are supported by Google's global load balancer, so you can use a k8s Service of type LoadBalancer to expose such a service beyond your cluster.
Do be aware that load balancers created and managed outside Kubernetes in this way have a default connection timeout of 30 seconds, which interferes with websocket operation: long-lived connections will be closed frequently, making websockets nearly unusable.
Until this issue is resolved, you will either need to raise this timeout manually or (recommended) use an in-cluster ingress controller (e.g. nginx), which gives you more control.
As per this article in the GCP documentation, there are four ways to expose a Service to external applications: with a ClusterIP, a NodePort, a (TCP/UDP) LoadBalancer, or an ExternalName.
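A minimal Service of type LoadBalancer for a websocket backend might look like this sketch (the name, selector label, and port numbers are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ws-service            # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: ws-server            # hypothetical label on the backend pods
  ports:
    - port: 80
      targetPort: 8080        # hypothetical container port
```

Applying this causes GKE to provision an external load balancer whose IP appears in the Service's status, and websocket clients connect to that IP like any other TCP traffic.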
