Istio traffic management between multiple clusters - go

I have several Kubernetes clusters. Due to company security requirements, only service A in cluster A should be allowed to access service B in cluster B. Can such a case be handled with Istio?
Although it is possible to control traffic using header information in Istio's VirtualService, HTTP headers can be manipulated at any time, which does not satisfy the security requirement.

Istio supports multicluster federation with either a shared control plane or multiple control planes; you can check out the links below. Communication across networks is secured with mutual TLS (mTLS), so you can be assured it cannot be tampered with.
Shared control plane
https://istio.io/docs/setup/kubernetes/install/multicluster/shared-gateways/
Multiple control planes
https://istio.io/docs/setup/kubernetes/install/multicluster/gateways/
These setups are pretty new and under heavy development, so you can try them or simply use HTTPS communication when connecting across the network.

Related

Consul & Envoy Integration

Background
I come from an HAProxy background, and recently there has been a lot of hype around the "service mesh" architecture. Long story short, I began to learn Envoy and Consul.
My understanding so far is that Envoy is proxy software, used as a sidecar to abstract inbound/outbound networking, with "xDS" as the data-plane source of truth (clusters, routes, filters, etc.). Consul handles service discovery, segmentation, etc. It also abstracts the network and has a data plane, but Consul can't do complex load balancing or filter routing the way Envoy does.
As standalone tools, I can understand how they work and how to set them up, since the documentation is relatively good. But it quickly becomes a headache when I want to integrate Envoy and Consul, since the documentation for both lacks specifics on integration, use cases, and best practices.
Schematic
Consider the following simple infrastructure design:
Legends:
CS: Consul Server
CA: Consul Agent
MA: Microservice A
MB: Microservice B
MC: Microservice C
EF: Envoy Front Facing / Edge Proxy
Questions
Following are my questions:
In the case of multi-instance microservices, standalone Consul will randomize the results round-robin. With the Envoy & Consul integration, how does Consul handle multi-instance microservices? Which software does the load balancing?
Consul has the Consul Server to store its data; however, Envoy does not seem to have an "Envoy Server" to store its data, so where is its data stored and how is it distributed across multiple instances?
What about an Envoy cluster (a logical group of Envoy front-facing proxies, NOT a cluster of services)? How is the leader elected?
As I mentioned above, when used separately, Consul and Envoy each have their own sidecar/agent on each machine. I read that when integrated, Consul injects an Envoy sidecar, but there is no further information on how this works.
If Envoy uses the Consul Server as its "xDS", what if, for example, I want to add an advanced filter so that requests for a certain URL segment must be forwarded to a certain instance?
If Envoy uses the Consul Server as its "xDS", what if I have another machine and services that (for some reason) are not managed by the Consul Server? How do I configure Envoy to add filters, clusters, etc. for that machine and those services?
Thank you! I'm excited, and I hope this thread can be helpful to others too.
Apologies for the late reply. I figure it's better late than never. :-)
If you are only using Consul for service discovery and querying it directly via DNS, then Consul will randomize the IP addresses returned to the client. If you're querying the HTTP interface, it is up to the client to implement a load-balancing strategy based on the hosts returned in the response. When you're using Consul service mesh, load balancing is handled entirely by Envoy.
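For a concrete picture of the client-side case, here is a minimal Go sketch using the official Consul API client (github.com/hashicorp/consul/api): it queries the HTTP interface for healthy instances of a service and picks one at random. The service name "web" is just a placeholder.

```go
package main

import (
	"fmt"
	"log"
	"math/rand"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (default: 127.0.0.1:8500).
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask Consul's HTTP interface for healthy instances of the "web" service.
	entries, _, err := client.Health().Service("web", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		log.Fatal("no healthy instances of web")
	}

	// Client-side load balancing: Consul returns all healthy instances,
	// so the client has to pick one itself (random choice here).
	e := entries[rand.Intn(len(entries))]
	addr := e.Service.Address
	if addr == "" {
		addr = e.Node.Address // fall back to the node address
	}
	fmt.Printf("sending request to %s:%d\n", addr, e.Service.Port)
}
```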
Consul is an xDS server. The data is stored within Consul and distributed to the agents within the cluster. See the Connect Architecture docs for more information.
Envoy clusters are similar to backend server pools. Proxies contain Clusters for each upstream service. Within each cluster, there are Endpoints which represent the individual proxy instances for the upstream services.
Consul can inject the Envoy sidecar when it is deployed on Kubernetes. It does this through a Kubernetes mutating admission webhook. See Connect Sidecar on Kubernetes: Installation and Configuration for more information.
Consul supports advanced layer 7 routing features. You can configure a service-router to route requests to different destinations by URL paths, headers, query params, etc.
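As a rough illustration of what a service-router could look like (assuming a Consul version with config entries and Connect enabled), the sketch below writes one through Consul's /v1/config HTTP endpoint using only Go's standard library. The service names ("web", "web-admin") and the path prefix are made up; check the config-entry schema against the docs for your Consul version.

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// A service-router config entry: requests to the "web" service whose
	// path starts with /admin are routed to the "web-admin" service instead.
	// (Service names and path are illustrative only.)
	entry := []byte(`{
	  "Kind": "service-router",
	  "Name": "web",
	  "Routes": [
	    {
	      "Match": { "HTTP": { "PathPrefix": "/admin" } },
	      "Destination": { "Service": "web-admin" }
	    }
	  ]
	}`)

	// Write the config entry to the local Consul agent.
	req, err := http.NewRequest(http.MethodPut,
		"http://127.0.0.1:8500/v1/config", bytes.NewReader(entry))
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("config entry applied, status:", resp.Status)
}
```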
Consul has an upcoming feature in version 1.8 called Terminating Gateways which may enable this use case. See the GitHub issue "Connect: Terminating (External Service) Gateways" (hashicorp/consul#6357) for more information.

Communicating with a stateless Web API service from a different application in Azure Service Fabric

I have two different Service Fabric applications. Both are stateless Web API models. From service 1 inside application 1, I need to invoke service 2, which is part of application 2. I am deploying both applications to the same cluster. Can someone advise on the best practice here? What would be the best way to communicate? Please provide a sample as well.
Fabric Transport (aka Service Remoting) is the SDK's built-in communication model. Compared to communication over HTTP or WCF, it does a little more, especially on the client side of the communication.
When it comes to communicating with Service Fabric services (or really, any distributed system's services), your communication should take into account that the connection could fail to be established on an initial try, or be interrupted mid-communication, and you really shouldn't build your solution to expect it to always work flawlessly. The reason lies in the nature of Service Fabric: it can at any time decide to move primaries from one node to another, the nodes themselves can go down, and the services can crash. Nothing strange about that; the great thing with Service Fabric is that it does a lot of the heavy lifting for you when it comes to maintaining your services and nodes over time.
So, in terms of communication, this means that a client needs to be able to do three things (for it to truly work in a distributed environment):
resolve the address of the service (figure out which node it is on, which port it is listening on, which partition id and replica to target, and so on)
connect to the service, package and send requests, and then receive and unpack responses
retry the resolve and connect if the communication fails
Fabric Transport does all this when you are using the Service Remoting clients (like ServiceProxy) and service side listeners.
That's the good part with Fabric Transport: you get all that out of the box, and most of the time you don't have to change the default setup either. The bad part is that it only works for communication inside the cluster, i.e. you cannot communicate from the outside with a service running in the cluster using Fabric Transport. For that you need HTTP or WCF.
HTTP(S) and WCF (over HTTP(S)) communication allow you to build your own clients and handle the communication yourself. There are a number of samples of how you can do the resolve, connect and retry for HTTP clients, this one for instance.
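To make the resolve/connect/retry idea concrete for the HTTP route, here is a rough Go sketch that calls a service in the other application through the Service Fabric reverse proxy (which, when enabled on the cluster, listens on port 19081 by default and resolves the service endpoint for you), retrying a few times on transient failures. The application name, service name and API path are placeholders.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// callService2 calls a stateless service in another application through the
// Service Fabric reverse proxy. The reverse proxy handles address resolution,
// so the client only needs to retry transient failures.
func callService2() (string, error) {
	// Placeholder application/service names and API path.
	url := "http://localhost:19081/Application2/Service2/api/values"

	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			lastErr = err
		} else {
			body, readErr := io.ReadAll(resp.Body)
			resp.Body.Close()
			if readErr != nil {
				lastErr = readErr
			} else if resp.StatusCode == http.StatusOK {
				return string(body), nil
			} else {
				lastErr = fmt.Errorf("unexpected status %s", resp.Status)
			}
		}
		// The service may be moving to another node or restarting;
		// back off briefly before retrying.
		time.Sleep(time.Duration(attempt+1) * 500 * time.Millisecond)
	}
	return "", lastErr
}

func main() {
	body, err := callService2()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(body)
}
```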
According to Microsoft, there are three built-in communication options. It's up to you to decide which one works best for you. I'm personally using Service Remoting, which is easy to set up quickly. It also allows you to do exception handling in your client service.

Algorithm/Method to save costs of running filtering proxy on AWS EC2

I am trying to set up a Squid proxy combined with the DansGuardian content filtering engine on EC2. I will be filtering traffic from mobile (iOS/Android) clients via this filtering proxy, but that could mean a lot of traffic flowing through my system, since I will have to route all of the traffic through the DNS, which in turn could mean a lot of Amazon EC2 cost! Is there a known method/standard by which I can direct only known blacklisted traffic via this proxy in a cost-effective manner? Things I have explored include creating blacklists on the device and filtering right there, but that would mean I have to keep going back and changing them (adding or removing sites), and this is not really feasible anyway.

Connecting to a Grid Cluster With GridGain

I know that out of the box GridGain connects to the other clients through multicast, but is there a way to configure GridGain to accept connections from outside the local network? Also, is there a way to enable encryption for the communication as well?
The Discovery SPI and Communication SPI allow you to plug in alternative discovery and communication mechanisms.
For more detail, refer to the comprehensive API documentation (GridGain 3).
This is necessary on Amazon EC2, which doesn't support multicast. Here's an article discussing this setup.
Multicast only works well within a certain network segment (and in some cases it isn't even allowed, for security reasons). So if you want to connect nodes to your grid that are outside your local network, you have to resort to other transports such as JMS or mail (if performance is an issue, you might get away with unicast/static IPs and JGroups).
I think that encryption is possible with both the JMS and mail transports, depending on your message broker and mail setup.

Detecting dead applications while server is alive in NLB

Windows NLB works great and removes a computer from the cluster when that computer is dead.
But what happens if the application dies while the server still works fine? How have you solved this issue?
Thanks
By not using NLB.
Hardware load balancers often have configurable "probe" functions to determine whether a server is responding to requests. This can be done by accessing the real application port/URL, or some specific "healthcheck" URL that returns successfully only if the application is healthy.
Other options on these devices look at the queue length or the time taken to respond to requests.
Cisco put it like this:
The Cisco CSM continually monitors server and application availability using a variety of probes, in-band health monitoring, return code checking, and the Dynamic Feedback Protocol (DFP). When a real server or gateway failure occurs, the Cisco CSM redirects traffic to a different location. Servers are added and removed without disrupting service; systems easily are scaled up or down.
(from here: http://www.cisco.com/en/US/products/hw/modules/ps2706/products_data_sheet09186a00800887f3.html#wp1002630)
Presumably with Windows NLB there is some way to programmatically set the weight of nodes? Each node should self-monitor and, if there is some problem (e.g. it is low on disk space), set its weight to zero so it receives no further traffic.
However, this needs to be carefully engineered, and it needs further human monitoring to ensure that you don't end up with a situation where one fault causes the entire cluster to announce itself down.
You can't really hope to deal with a "Byzantine generals" situation in network load balancing; an appropriately broken node may think it's fine and appear fine while being completely unable to do any actual work. The trick is to try to minimise the possibility of these situations happening in production.
There are multiple levels of health check for a network application.
1. is the server machine up?
2. is the application (service) running?
3. is the service accepting network connections?
4. does the service respond appropriately to an "are you OK?" request?
5. does the service perform real work? (this will also check the back-end systems behind the service you are probing)
My experience with NLB may be incomplete, but I'll describe what I know. NLB can do 1 and 2. With custom coding you can add the other levels with varying difficulty. With some network architectures this can be very difficult.
Most hardware load balancers from vendors like Cisco or F5 can be easily configured to do 3 or 4. Level 5 testing still requires custom coding.
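As a sketch of what a level 4/5 probe endpoint could look like (in Go, with a hypothetical downstream dependency standing in for whatever back-end systems sit behind the service):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"
)

// backendURL is a hypothetical downstream dependency; probing it makes this
// a level-5 check ("does the service perform real work"), not just level 4.
const backendURL = "http://backend.internal/health"

func healthHandler(w http.ResponseWriter, r *http.Request) {
	ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
	defer cancel()

	req, _ := http.NewRequestWithContext(ctx, http.MethodGet, backendURL, nil)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// Back end unreachable: report unhealthy so the load balancer
		// takes this node out of rotation.
		http.Error(w, "backend unavailable", http.StatusServiceUnavailable)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		http.Error(w, "backend unhealthy", http.StatusServiceUnavailable)
		return
	}
	fmt.Fprintln(w, "ok")
}

func main() {
	http.HandleFunc("/healthcheck", healthHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```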
We start in the situation where all nodes are part of the cluster but inactive.
We run a custom service monitor which makes a request to the service locally via the external interface. If the response is successful, we start the node (allow it to start handling NLB traffic). If the response fails, we stop the node from receiving traffic.
All the intermediate steps described by Darron are irrelevant. Whether it worked or not is the only thing we care about. If the machine is inaccessible, the rest of the NLB cluster will treat it as failed.
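A rough sketch of such a monitor, assuming the node can control its own NLB membership by shelling out to the nlb.exe command-line tool (start/drainstop); the probe URL and the exact commands are assumptions to verify against your environment:

```go
package main

import (
	"log"
	"net/http"
	"os/exec"
	"time"
)

const healthURL = "http://localhost:80/healthcheck" // placeholder probe URL

// setNodeActive enables or drains this host's NLB membership by shelling out
// to the NLB command-line tool (assumed here to be nlb.exe on the node).
func setNodeActive(active bool) {
	cmd := "drainstop"
	if active {
		cmd = "start"
	}
	if err := exec.Command("nlb.exe", cmd).Run(); err != nil {
		log.Printf("nlb.exe %s failed: %v", cmd, err)
	}
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	for {
		// Probe the local service through its external interface.
		resp, err := client.Get(healthURL)
		healthy := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}

		// Did it work or not: that is all we act on.
		setNodeActive(healthy)
		time.Sleep(30 * time.Second)
	}
}
```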

Resources