How to do canary releases and dynamic routing with Netflix Zuul? - spring-boot

We are facing the problem of needing dynamic routing and canary releases. So, for example, we deploy the microservice microservice-1. Then, when someone finishes a big feature, we want to deploy it as microservice-1.1.
Question
Is it possible to dynamically reroute requests using information, for example, from headers, and route to the microservice version microservice-1.1 instead of microservice-1?
For example, someone who needs this feature would modify or add a specific header, and all of their requests would then go to the new microservice-1.1. If that header is missing, the current microservice-1 version should be used.
For service discovery I am using Eureka. Right now I am looking at Linkerd, but there is no support for Eureka, and I am working on that at the moment. Of course, if it is possible to do this using Zuul, that would be great. Please advise where to look.
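To illustrate the idea, something like the following Zuul pre-filter is roughly what I have in mind (just a sketch I have not verified; the header name and service ids are placeholders), assuming Spring Cloud Netflix Zuul with Eureka/Ribbon doing the actual instance lookup:

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.HttpServletRequest;

// Registered as a @Bean/@Component in the gateway application so Zuul picks it up.
public class CanaryRoutingFilter extends ZuulFilter {

    @Override
    public String filterType() { return "pre"; }

    @Override
    public int filterOrder() { return 6; } // run after PreDecorationFilter (order 5)

    @Override
    public boolean shouldFilter() {
        HttpServletRequest request = RequestContext.getCurrentContext().getRequest();
        // Only kick in when the caller explicitly asks for the new version.
        return "1.1".equals(request.getHeader("X-Service-Version")); // placeholder header
    }

    @Override
    public Object run() {
        // Override the Eureka service id chosen by the standard route mapping,
        // so Ribbon resolves instances of microservice-1.1 instead of microservice-1.
        RequestContext.getCurrentContext().set("serviceId", "microservice-1.1");
        return null;
    }
}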

Not really sure about Netflix Zuul, but we liked the approach presented by Istio (backed by Google, etc.), which works really well with containers (Kubernetes) and gives you support for canary releases: https://istio.io/blog/2017/0.1-canary/

Related

Spring Boot: block requests for maintenance and return a 503 error on each request

How can I return a 503 server error response code when my API is under maintenance?
Microservices usually sit behind an API gateway, where you implement solutions such as parsing a header token and inserting new headers, rate limiting, and checks like whether a user is an administrator or not.
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern
You can use https://spring.io/projects/spring-cloud-gateway, which is used to build an API gateway on top of the Spring stack.
I believe you can use it to temporarily shut down certain routes and also configure different errors.
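As a rough, unverified sketch of that idea: a global filter in Spring Cloud Gateway could short-circuit every request with a 503 while a maintenance flag is on (here the flag is hard-coded; in practice it would come from configuration or an admin endpoint):

import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class MaintenanceFilter implements GlobalFilter, Ordered {

    // Placeholder: toggle this from configuration or a management endpoint.
    private final boolean maintenanceMode = true;

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        if (maintenanceMode) {
            // Short-circuit: respond with 503 without calling any downstream route.
            exchange.getResponse().setStatusCode(HttpStatus.SERVICE_UNAVAILABLE);
            return exchange.getResponse().setComplete();
        }
        return chain.filter(exchange);
    }

    @Override
    public int getOrder() { return Ordered.HIGHEST_PRECEDENCE; }
}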
Assuming your Spring Boot web service is down completely for maintenance, it's best not to do it at the Spring Boot level. Perhaps it's better to come up with a solution by which you can swap out the server for something else that returns the 503.
Here's a very simple example:
Your API's domain is api.myservice.com.
You then switch where the domain is pointing so that it hits a solution that statically responds with the 503.
You do the maintenance on the server/database/etc.
Once the maintenance is done and your Spring Boot service is up and running, you then switch your domain to point back to the server.
Note: domain records have a Time To Live, so the above example is just something to give you an idea; you'll have to take the timing into consideration. An actual solution is hard to recommend when we don't have your environment details or context.
The point I'm trying to make is that Spring Boot is usually not the place to do this.

How to provide mutual TLS (mTLS) with Spring application in Kubernetes?

I have an interesting problem, maybe you could help me out.
There are two given Spring applications, called app1 and app2. Plenty of REST calls happen between both services. I need to implement a security solution where they can communicate with each other over REST, protected by mutual TLS (mTLS, where each app has its own cert for the other).
Implementing it the standard way is not that hard; Spring has solutions for it (with keystores etc.), but the twist is that I have to create it in a Kubernetes environment.
The two apps are not in the same cluster: app1 is in our cluster, but app2 is deployed in one of our partner's systems.
I am pretty new to k8s and not sure what the best method to achieve this is. Should I store the certs or the keystore(s) as Secrets? Use and configure an nginx ingress somehow, or maybe Istio would be useful? I really want to find the optimal solution, but I don't know the right way.
I would really like it if I could configure this outside my app and let k8s take care of it, but I am not sure if that is the right thing to do.
Any help would be really appreciated: some guidance to find the right path, or some kind of real-life examples.
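For reference, the in-application "standard way" I mentioned would look roughly like this sketch (the file paths and passwords are placeholders, e.g. files mounted from a Kubernetes Secret), using Apache HttpClient and a RestTemplate:

import java.io.File;
import javax.net.ssl.SSLContext;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContextBuilder;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class MtlsRestTemplateFactory {

    public static RestTemplate mtlsRestTemplate() throws Exception {
        // Keystore: app1's client certificate presented to app2.
        // Truststore: the CA that signed app2's certificate.
        SSLContext sslContext = SSLContextBuilder.create()
                .loadKeyMaterial(new File("/etc/tls/keystore.jks"),     // placeholder path
                        "changeit".toCharArray(), "changeit".toCharArray())
                .loadTrustMaterial(new File("/etc/tls/truststore.jks"), // placeholder path
                        "changeit".toCharArray())
                .build();

        CloseableHttpClient httpClient = HttpClients.custom()
                .setSSLContext(sslContext)
                .build();

        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
    }
}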
Thank you for your help!
Mikolaj has probably covered everything, but still let me add my two cents.
I don't have much experience working with Istio; however, I would also suggest checking out the Linkerd service mesh.
Step 1.
Even if you are on multiple clouds (GKE & EKS, for example), it will still work.
See the Linkerd multicluster guide for the details and installation steps.
Linkerd uses a trust anchor shared between the clusters so traffic can flow encrypted and is not exposed to the public internet.
You have to generate the certificate which will form the common base of trust between the clusters.
Each proxy will get a copy of the certificate and use it for validation.
The answer to your problem will be more complex, as there is no one-size-fits-all solution that turns out to be the best. It all depends on what exactly you want to do and what tools you have for it. suren put it very well in the comment:
if you are still in the PoC stage, then note that there are a couple of ways of achieving what you want. Istio would be a valid way, for example. You could have the other service in a ServiceEntry, enable mTLS and there you go. You don't even have to manage secrets for this specific scenario, as it is automatic. But there are other ways. Even with Istio there are other ways. If you are on any cloud provider, you might have some managed services as well
This is a very good comment, and I would also recommend an Istio-based solution. First of all, check the official mTLS documentation for Istio. You will also find specific usage examples and sample configuration files there.
You also mentioned in the question that your application will run between two clusters. Take a look at this tutorial, which shows exactly how to solve this situation:
Istio injects an envoy sidecar to every pod and makes sure all the traffic goes through the envoy proxy. Envoy proxies compose the data plane. The control plane manages the Envoy sidecars. In previous versions of Istio, the control plane used to have other components, such as Pilot, Citadel, and Galley. These components got consolidated into a single binary called “istiod”. The control plane also deals with the configurations, certificates, secrets, and health checking.
For more information, look also at this related problem on Stack Overflow and another tutorial.
Note that in addition to Istio itself, you can also use ready-made cloud solutions, for example those available on GKE, i.e. Configuring TLS and mTLS on the Istio ingress.
Another way might be to use the Anthos Service Mesh tool; see, for example: mTLS.

Spring Boot user job/process monitoring

Using Spring Boot 2.0.3.
I have a set of Spring servers, and I need to find out whether a server is currently processing a request sent to it. Only one of these requests is processed at a time. Depending on its options, the request can cause a good number of code paths to be used; to support the different variations of the starting call there are about 30 different services and some other classes.
I need to be able to send some request to these servers and ask the question: are you working on one of these requests? The response can be a simple yes or no.
In trying to come up with an approach, it seems like Spring Boot Actuator might be the way to go. However, at least some of the material I have looked at treats it at more of a sysadmin level.
My question is how to approach this issue. Is the Actuator the best bet to achieve what I am looking for, and if not, what should I do? If possible, I would like to avoid placing code in each service/class to see what is going on.
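In case it helps to clarify what I mean, something like this custom Actuator endpoint is the kind of thing I am imagining (only a sketch; the names are made up and the busy flag would have to be set around the long-running request):

import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

// Exposed at /actuator/workstatus once included via
// management.endpoints.web.exposure.include=workstatus
@Component
@Endpoint(id = "workstatus")
public class WorkStatusEndpoint {

    // Flipped to true when the single in-flight request starts, back to false when it finishes.
    private final AtomicBoolean busy = new AtomicBoolean(false);

    public void markBusy(boolean value) {
        busy.set(value);
    }

    @ReadOperation
    public String status() {
        return busy.get() ? "yes" : "no";
    }
}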
thanks

Using spring-boot-admin for a non spring-boot project

tl;dr
Requesting suggestions, guidelines, or examples for possible ways to extend spring-boot-admin to use methods other than HTTP requests for health monitoring of non-Spring projects like MariaDB.
Full version
There is a requirement to set up a monitoring application using spring-boot-admin. Several of the clients are spring-boot applications and are easily implemented. There are, however, a couple of non spring-boot projects, like the database server MariaDB.
The question is therefore: is it possible to extend SBA to monitor the database status by methods other than HTTP requests? One possible approach, for example, might be to check whether it is possible to connect to the application-specific TCP port to verify that the DB server is still running. However, other possibilities could be explored too.
One post I found similar to my question was this:
https://github.com/codecentric/spring-boot-admin/issues/504. The key difference here, though, is that the provided answer still suggests an HTTP approach. The reference guide also does not suggest an alternative.
Should such a possibility exist, a brief outline of the approach or an example implementation would be most welcome.
SBA currently only supports checking health via HTTP. But your DB should be implicitly monitored if you have a corresponding health indicator on your business application.
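For illustration (just a sketch; the host and port are assumptions), such an indicator on the business application could be as simple as a TCP connect check, which then shows up in the application health that SBA already polls:

import java.net.InetSocketAddress;
import java.net.Socket;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class MariaDbTcpHealthIndicator implements HealthIndicator {

    private static final String HOST = "mariadb.internal"; // assumed hostname
    private static final int PORT = 3306;                  // default MariaDB port

    @Override
    public Health health() {
        // A plain TCP connect is enough to tell whether the DB port is reachable.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(HOST, PORT), 2000);
            return Health.up().withDetail("port", PORT).build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}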
It should also be possible to extend StatusUpdater#queryStatus() to do a TCP connect when it encounters a health URL beginning with tcp:// instead of http://...
And in case you accomplish that, a PR is appreciated :)

A/B testing. Routing Clients in a gateway API

I am working on a new project that will be based on microservices. It's an internal app with only about 10 microservices. We will be using a gateway API for authentication and possibly some microservice aggregation. (Probably Netflix Zuul with Spring Boot.)
What I'm not clear on is how we do the routing for A/B testing and canary testing. Let's assume I have 100 clients and we want to A/B test a new version of a microservice. The client app needs no changes; it's just internal changes to the function that the microservice provides.
I understand we would stand up a new microservice which is (say) v2. What I'm puzzled about is how I direct (say) clients 1-10 to the new version. We need to be able to configure this centrally and not change anything on the client.
We know their mac addresses (as well as other identifying attributes) and can insert any kind of header we want to identify their messages.
So how would I direct these to v2 of the API for the A/B test or Canary deployment?
Describing a high-level, generic approach, you could do something like this:
Your clients need to have some parameters which allow you to uniquely identify them. It looks like you already have this.
Implement an additional API service (let's call it the Experiment API). This service should have at least one endpoint that receives the client's identifying attributes and says whether that client is involved in the A/B test or not (see the sketch after this list).
On each incoming request, the Gateway API needs to use that Experiment API endpoint to decide which microservice version (v1 or v2) to use for the redirect/call.
To avoid calling the Experiment API on every request, you may introduce a caching layer in the Gateway API. As another option, you may use a custom cookie (recording whether the client is under the "experiment"), call the Experiment API only if that cookie is not present, and return the cookie to the client with the response.
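For illustration, the Experiment API mentioned above could be as small as a single endpoint like this sketch (the path, parameter name, and hard-coded client list are all made-up placeholders):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ExperimentController {

    // Placeholder allocation: in practice this would live in a DB or config store
    // so it can be changed centrally without touching the clients.
    private final Set<String> clientsInExperiment =
            new HashSet<>(Arrays.asList("client-01", "client-02", "client-03"));

    // The gateway calls this with the client's identifying attribute (e.g. MAC address)
    // and routes to v2 when the response is true.
    @GetMapping("/experiments/canary-v2")
    public boolean isInExperiment(@RequestParam("clientId") String clientId) {
        return clientsInExperiment.contains(clientId);
    }
}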
I have published a prototype on GitHub that shows how you could achieve the routing using a Zuul gateway. The prototype just shows how you can route traffic based on a cookie to different instances of the same application, but you can do the routing based on any other criteria.
You should also have a look at Spring Cloud Gateway as an alternative to Zuul. It seems very promising.
https://github.com/adiesner/spring-boot-sample-ci-gateway
A simpler setup would be to just add nginx in front of your service and use the split_clients method:
http {
    # ...
    # application version 1a
    upstream version_1a {
        server 10.0.0.100:3001;
        server 10.0.0.101:3001;
    }

    # application version 1b
    upstream version_1b {
        server 10.0.0.104:6002;
        server 10.0.0.105:6002;
    }

    split_clients "${arg_token}" $appversion {
        95%  version_1a;
        *    version_1b;
    }

    server {
        # ...
        listen 80;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://$appversion;
        }
    }
}
https://www.nginx.com/blog/performing-a-b-testing-nginx-plus/
To expound a bit on @Set's answer: you'll need to introduce some instrumentation code into your gateway API to make the decision about which downstream endpoint to call. If, and only if, the gateway API is the only component of your distributed backend concerned with this, the above solution is over-engineered: you can get by with just a library. But it's likely that you will soon discover that one or more of your other services needs to know about the experiment, in which case you DO need a standalone service.
Generally speaking though, building a robust experimentation framework is a hard task. You will quickly run into unexpected problems, e.g. experience stability (how to guarantee the same experience to returning visitors) or how to change the allocation proportion (or perhaps completely turn off the new code) without needing to restart the host application. You ought to investigate the open-source frameworks out there, or even commercial server-side instrumentation. (We have one at Variant.)
