Difference between zones and locations endpoints in GKE clusters API - google-api

When I began working with the GKE API, I was surprised to find two very similar endpoints:
https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters
https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters
A quick look reveals that some methods and fields are rearranged and some are deprecated, but there is no documentation that explains the difference between the two.
I initially thought that the former was for creating zonal clusters and the latter for regional ones, yet when creating a cluster via the Google Cloud Console there's an "Equivalent REST" option, which shows that both cases use the same projects.zones.clusters endpoint.
So, what's the difference between these endpoints, and in which scenarios would one use projects.locations.clusters?
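For reference, the URL forms of the two GET methods, as shown on the linked reference pages, are:

```
# Zone-scoped method: the zone is a dedicated path parameter
GET https://container.googleapis.com/v1/projects/{projectId}/zones/{zone}/clusters/{clusterId}

# Location-scoped method: {location} may be either a zone or a region
GET https://container.googleapis.com/v1/projects/{projectId}/locations/{location}/clusters/{clusterId}
```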

Related

How to provide mutual TLS (mTLS) with Spring application in Kubernetes?

I have an interesting problem, maybe you could help me out.
There are two Spring applications, app1 and app2, and plenty of REST calls happen between the two services. I need to implement a security solution where they can communicate with each other over REST, protected by mutual TLS (mTLS, where each app has its own certificate for the other).
Implementing it the standard way isn't that hard; Spring has solutions for it (with keystores etc.), but the twist is that I have to build it in a Kubernetes environment.
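By the standard way I mean roughly this kind of setup, sketched here with Apache HttpClient; the file names and passwords are placeholders:

```java
import java.io.File;
import javax.net.ssl.SSLContext;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContexts;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class MtlsClientConfig {

    // Builds a RestTemplate that presents our client certificate and only
    // trusts the partner's certificate. Paths and passwords are placeholders.
    public RestTemplate mtlsRestTemplate() throws Exception {
        SSLContext sslContext = SSLContexts.custom()
                // our own certificate + private key, presented to the server
                .loadKeyMaterial(new File("app1-keystore.p12"),
                        "changeit".toCharArray(), "changeit".toCharArray())
                // the partner certificate chain we trust
                .loadTrustMaterial(new File("app2-truststore.p12"),
                        "changeit".toCharArray())
                .build();

        CloseableHttpClient httpClient = HttpClients.custom()
                .setSSLContext(sslContext)
                .build();

        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
    }
}
```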
The two apps are not in the same cluster: app1 is in our cluster, but app2 is deployed in one of our partner's systems.
I am pretty new to k8s and not sure what the best method is to achieve this. Should I store the certs or the keystore(s) as Secrets? Should I use and configure the NGINX ingress somehow, or maybe Istio would be useful? I really want to find the optimal solution, but I don't know the right way.
I would really like to configure it outside my app and let k8s take care of it, but I am not sure if that is the right thing to do.
Any help would be really appreciated: some guidance toward the right path, or some kind of real-life examples.
Thank you for your help!
Mikolaj has probably covered everything, but still, let me add my two cents.
I don't have much experience working with Istio; however, I would also suggest checking out the Linkerd service mesh.
Even if you are on multiple clouds (GKE & EKS, say), it will still work; see the multicluster guide for details and installation instructions.
Linkerd uses a trust anchor shared between the clusters so traffic can flow encrypted without being exposed to the public internet.
You have to generate the certificate that forms a common base of trust between the clusters; each proxy gets a copy of it and uses it for validation.
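For illustration, the certificate setup roughly follows that multicluster guide; the step CLI commands below are the ones the guide uses (adjust names and validity to your environment):

```sh
# Shared trust anchor used by both clusters
step certificate create root.linkerd.cluster.local ca.crt ca.key \
  --profile root-ca --no-password --insecure

# Per-cluster issuer certificate, signed by the shared trust anchor
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca ca.crt --ca-key ca.key

# Install Linkerd in each cluster with the same trust anchor
linkerd install \
  --identity-trust-anchors-file ca.crt \
  --identity-issuer-certificate-file issuer.crt \
  --identity-issuer-key-file issuer.key \
  | kubectl apply -f -
```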
The answer to your problem is more complex, as there is no one-size-fits-all solution that turns out to be best. It all depends on what exactly you want to do and what tools you have for it. suren put it very well in a comment:
if you are still in the PoC stage, then note that there are a couple of ways of achieving what you want. Istio would be a valid way, for example. You could have the other service in a ServiceEntry, enable mTLS, and there you go. You don't even have to manage secrets for this specific scenario, as it is automatic. But there are other ways. Even with Istio there are other ways. If you are on any cloud provider, you might have some managed services as well
This is a very good comment, and I would also recommend an Istio-based solution to you. First of all, check the official mTLS documentation for Istio. You will also find specific usage examples and sample configuration files there.
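As a taste of what those sample configuration files look like, a mesh-wide strict mTLS policy is only a few lines (resource shape as in the Istio security docs):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying to the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT            # sidecars reject plaintext traffic
```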
You also mentioned in the question that your application will run across two clusters. Take a look at this tutorial, which shows exactly how to solve this situation:
Istio injects an Envoy sidecar into every pod and makes sure all the traffic goes through the Envoy proxy. Envoy proxies compose the data plane. The control plane manages the Envoy sidecars. In previous versions of Istio, the control plane used to have other components, such as Pilot, Citadel, and Galley. These components got consolidated into a single binary called "istiod". The control plane also deals with the configurations, certificates, secrets, and health checking.
For more information, look also at this related problem on Stack Overflow and another tutorial.
Note that in addition to Istio itself, you can also use ready-made cloud solutions, for example on GKE: Configuring TLS and mTLS on the Istio ingress.
Another way might be to use Anthos Service Mesh; see Anthos Service Mesh by example: mTLS.

Mapping microservices on frontend

This is probably a bit of an opinion-based question, but I will try to keep it technical so it stays relevant.
Consider having several microservices: a, b, c.
To make this available on frontend, these could be made available as:
https://host/services/a
https://host/services/b
https://host/services/c
However, the fact that the endpoints are split between different services is largely irrelevant to the frontend, and as long as we can guarantee the endpoints don't clash, it would be great to have them available directly:
a/endpoint1 -> https://host/services/endpoint1
a/endpoint2 -> https://host/services/endpoint2
b/endpoint3 -> https://host/services/endpoint3
c/endpoint4 -> https://host/services/endpoint4
To implement such a mapping, one needs to list all the endpoints, or at least write some matching patterns, within the proxy service. This is very nice for the frontend team to work with; however, it is unfortunately very easy to break.
What are the best practices for mapping the URLs of microservices? The only thing that comes to my mind is some export of OpenAPI, which the FE could use to find the right path. However, every service generates its own OpenAPI JSON, so we are basically back at the original problem.
Are you sure the frontend team needs ALL the exposed endpoints? Usually, frontends talk to an API Gateway or, as the cool kids call them these days, a "Backend for Frontend".
In a nutshell, it's a special service that takes care of exposing only the functionalities/endpoints needed by the frontend. It will forward calls to the relevant services or, if necessary, call multiple services and aggregate the results.
In most cases these API Gateways don't have a DB, as they retrieve all their data from other services. They might, however, use a caching layer to speed things up.
You can even have multiple API Gateways, one per frontend (e.g. desktop, mobile).
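For a concrete sketch, this is roughly what such an explicit mapping could look like if the gateway were Spring Cloud Gateway (just one possible choice; the route IDs, hosts, and paths below are hypothetical):

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    // Each service's endpoints are flattened under /services/*; every new
    // endpoint needs a matching route here, which is exactly the fragility
    // the question describes.
    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
            .route("service-a", r -> r
                .path("/services/endpoint1", "/services/endpoint2")
                .filters(f -> f.rewritePath("/services/(?<ep>.*)", "/${ep}"))
                .uri("http://service-a"))
            .route("service-b", r -> r
                .path("/services/endpoint3")
                .filters(f -> f.rewritePath("/services/(?<ep>.*)", "/${ep}"))
                .uri("http://service-b"))
            .build();
    }
}
```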

How good is Krakend compared to Kong?

I am stuck choosing one API gateway from the three mentioned below:
KrakenD (https://www.krakend.io/)
Kong (https://konghq.com/kong/)
Spring Cloud Gateway (https://cloud.spring.io/spring-cloud-gateway/reference/html/)
My requirements are:
Good performance, and it must have the majority of API gateway features.
Support for aggregating data from two different microservices' APIs.
All three of them look good from the feature list and performance-wise.
I am thinking of relaxing the second requirement, as I am not sure whether that is good practice or not.
"API Gateway" is a concept used in all kinds of products; I really think the industry should start sub-categorizing them, as most of these products are completely different from each other.
I'll try to summarize here the main highlights according to your requirements.
Both Kong and KrakenD offer the "majority" of API gateway functionality. Although the word is fuzzy, at least both of them cover things like routing, rate limiting, authorization, and such.
Kong
Kong is basically an Nginx proxy that adds a lot of functionality on top of it using Lua.
When using Kong, your endpoints have a 1:1 relationship with your backends: you declare an endpoint in Kong that exposes data from one backend, and Kong does the magic in the middle (authorization, rate limiting, etc.). This magic is the essence of Kong and is based on Lua plugins (which, unfortunately, are not written in C as Nginx is).
If you want to aggregate data from several backends into one single endpoint, Kong does not fit your scenario.
Finally, Kong is stateful (it's impressive how they try to sell it the other way around, but that is out of the scope of this question). The configuration lives inside a database, and changes to the configuration are made through an API that ends up modifying its internal Postgres (or equivalent).
Performance is also inevitably tied to the existence of this database (and Lua), and going multi-region can be a real pain.
Kong functionality can be extended with Lua code.
In summary:
Proxy with cross-cutting concerns
Nodes require coordination and synchronization
Mutable configuration
The database is the source of truth
More pieces, more complexity
Multi-region lag
Requires powerful hardware to run
Customizations in Lua
KrakenD
KrakenD is a service written from the ground up using Go, taking advantage of the language features for concurrency, speed, and small footprint. In terms of performance, this is the winning racehorse.
KrakenD's natural positioning is as a gateway with aggregation. It's meant to connect lots of backend services to a single endpoint, and it's mostly adopted by companies to feed mobile applications, web apps, and other clients. It implements the Backend for Frontend pattern, allowing you to define, exactly and with a declarative configuration, the API that you want to expose to the clients. You can choose which fields are taken from responses, aggregate them, validate them, transform them, etc.
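As a sketch of that declarative style, a single KrakenD endpoint aggregating two backends looks roughly like this (hosts and paths are hypothetical; the config shape follows the KrakenD documentation):

```json
{
  "version": 3,
  "endpoints": [
    {
      "endpoint": "/user/{id}/overview",
      "method": "GET",
      "backend": [
        { "host": ["http://users-svc"],  "url_pattern": "/users/{id}" },
        { "host": ["http://orders-svc"], "url_pattern": "/users/{id}/orders" }
      ]
    }
  ]
}
```

The gateway calls both backends concurrently and merges the two responses into a single JSON object for the client.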
KrakenD is stateless: you version your API the same way you do the rest of your code, using git, and you deploy it the same way you deploy your application (e.g., a CI/CD pipeline that pushes a new container with the new configuration and rolls it out). As everything is in the config, there is no need for a central database, nor do the nodes need to communicate with each other.
As for customizations, with KrakenD you can create middlewares, plugins, or just scripts in several languages: Go, Lua, Common Expression Language (CEL, sort of JS-like), and the Martian DSL.
In summary:
On-the-fly API creation using upstream services, with cross-cutting concerns (API gateway).
Not a proxy, although it can be used as one.
No node coordination
No synchronization needed
Zero complexity (docker container with a configuration file)
No challenges for Multi-region
Declarative configuration
Immutable infrastructure
Runs on micro and small machines in production without issues.
Customizations in Go, Lua, CEL, and Martian DSL
Spring Cloud Gateway
Spring Cloud Gateway (as well as Zuul) is used mostly by Java developers who want to stay in the JVM space. I am less familiar with this one, but its design is also aimed at proxying existing services, adding the cross-cutting concerns of an API gateway.
I see it more as a framework that you use to deliver your API. With this product you need to code the transformations yourself in Java, although the included gateway functionalities are declarative as well.
--
I hope this sheds some light.
My only, and major, blocker with Kong is that you can only extend it with Lua, and only a small percentage of developers in the world are familiar with Lua. That's why I chose KrakenD.

Splitting monolith into microservices

I have an existing web service that supports ordering, with multiple operations (approximately 20). This single web service supports the ordering function and interacts with multiple other services to provide the ordering capability.
Since there is a lot of business functionality within this app and it is supported by a 10-member team, I believe it is a monolith (though I assume there is no hard-and-fast rule for defining what a monolith is).
We are planning to deploy the application in a Cloud Foundry environment, and we are planning to split the app into 2-3 microservices, primarily to enable them to scale independently.
The first few APIs, which enable searching for a product, typically receive far more hits, whereas the API that supports actual order submission receives less than 5% of the hits. So the product-search API should have a significantly larger number of instances than the order-submission API.
Though I am not sure we could split it based on sub-domains (which I have read should be the basis), we are thinking of splitting based on the call sequence, as explained earlier.
I have also read that microservices should be choreographed and not orchestrated. However, in order to ensure our existing consumers are not impacted, I believe we should expose an API layer that orchestrates the calls to these microservices. Is providing an API gateway the normal approach to ensure consumers do not end up calling multiple microservices, while also providing a layer of abstraction?
This seems to be orchestration more than choreography. Though I am not hung up on the theoretical aspects, I would like to understand the different solutions pursued for this problem statement in the enterprise world.
The Benefits of Microservices
Deploy & Scale Independently
Easier to 'Reason About'
Separation of Concerns
Single Responsibility
(Micro)Service-Oriented Architecture
I would suggest splitting your services based on domain. This is a logical and efficient approach which makes it an easy starting point. Your monolithic package structure may already be organized in this manner, which simplifies the refactoring even more.
API Gateway
The typical Spring Cloud approach for this is to use a Zuul proxy on the edge of your network, which receives the requests from your clients (web, mobile, etc.) and routes them to the microservices located behind your firewall. The client only interfaces with a single domain, and the proxy handles CORS out of the box.
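A minimal Zuul edge service is little more than an annotation plus route properties; here is a sketch (the service IDs and paths are hypothetical):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// Routes are declared in application.properties, e.g.:
//   zuul.routes.products.path=/products/**
//   zuul.routes.products.serviceId=product-service
//   zuul.routes.orders.path=/orders/**
//   zuul.routes.orders.serviceId=order-service
@EnableZuulProxy
@SpringBootApplication
public class EdgeGatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(EdgeGatewayApplication.class, args);
    }
}
```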
Resources:
API Gateway Pattern
Routing and Filtering

Why are custom fields not supported in services in Consul?

I've found several circumstances where storing additional metadata about a specific service would be convenient (for example, a database name or a load-balancer weighting); however, custom fields don't seem to be supported in the services API (only the basic id, name, address, and port).
I'm curious about the design decision: is there a best practice this evangelizes, or is this perhaps a future enhancement that could be made?
I understand that one could use the KV store for extra info, but it seems more convenient to bundle like information together and not make multiple Consul lookups.
Metadata should go into the KV store. There are use cases like the ones you describe; however, Consul is designed for the 95% most common use cases (the actual words of Armon Dadgar, a Consul principal engineer). Arbitrary metadata lives just fine in the KV store.
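For illustration, stashing and reading such metadata is a couple of calls against Consul's standard KV HTTP API (the key layout here is just a convention made up for the example):

```sh
# Store a load-balancer weight for service "db1" under a conventional prefix
curl -X PUT -d '15' http://localhost:8500/v1/kv/service-meta/db1/lb-weight

# Read it back; ?raw returns the bare value instead of the JSON envelope
curl http://localhost:8500/v1/kv/service-meta/db1/lb-weight?raw
```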
