I want to use Kong as an API gateway to allow external applications to interact, through Dapr, with my application running in the cluster. I can't find any examples.
There is no easy way to do this directly, but there is a blog post that walks through setting it up with an ingress controller: https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/
The gist of it is that you set up your ingress controller pods as Dapr services and rewrite/redirect the calls to the Dapr sidecar. Be aware of namespaces (the blog glosses over this and installs the ingress in the default namespace, which is not common practice) and fully qualify the service name.
Finally, I recommend you apply a rewrite to the invocation of the downstream service: use a regex to capture the path segments and append the captured segment to the end of the service invocation URL, http://localhost:3500/v1.0/invoke/YOURSERVICE.ITSNAMESPACE/method/$2 (where $2 is the segment captured from the original path in the ingress).
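For a concrete picture, here is a hedged sketch of such an Ingress for the NGINX ingress controller. The app ID (myservice), its namespace (mynamespace), and the name and port of the sidecar service that Dapr creates (nginx-ingress-dapr, port 80) are assumptions for illustration; verify the actual names with kubectl get svc after daprizing the controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dapr-invoke
  namespace: ingress-nginx                # wherever the daprized controller runs
  annotations:
    # $2 below is the second capture group of the path regex; it is appended
    # to the Dapr service-invocation URL exactly as described above
    nginx.ingress.kubernetes.io/rewrite-target: /v1.0/invoke/myservice.mynamespace/method/$2
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /myservice(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx-ingress-dapr  # assumed name of the Dapr sidecar service
                port:
                  number: 80              # assumed; check the generated service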
NOTE: I am having issues getting these types of calls to go through the HTTP pipeline components I have downstream, but if you don't need those, it's a great option.
When building microservices, it's quite common and good practice to have a top layer, called an API gateway, that sits in front of a group of microservices to facilitate requests and the delivery of data and services.
When using micro frontends (MFEs) to compose a UI app from multiple sub-apps, I can't really see the benefit of still using an API gateway for the communication between each MFE and its corresponding microservice.
In many architecture diagrams, an API gateway is still used in front of the microservices.
But the fact that each microfrontend is its own app also probably means it will communicate with its own API. In that case I would use a thin API layer over each microservice and have each MFE communicate directly with that layer, because an MFE probably won't ever need to read data from another MFE's service (that would go against the spirit of MFEs, or am I wrong?).
What kind of approach would you use in your projects?
This is probably a somewhat opinion-based question, but I will try to stay technical so it remains relevant.
Consider having several microservices: a, b, c.
To make this available on frontend, these could be made available as:
https://host/services/a
https://host/services/b
https://host/services/c
However, the fact that the endpoints are split between different services is kind of irrelevant to the frontend, and if we can guarantee that the endpoints don't clash, it would be great to have them available directly:
a/endpoint1 -> https://host/services/endpoint1
a/endpoint2 -> https://host/services/endpoint2
b/endpoint3 -> https://host/services/endpoint3
c/endpoint4 -> https://host/services/endpoint4
To implement such a mapping, one needs to list all the endpoints, or at least write some matching patterns, within the proxy service. This is very nice for the frontend team to work with; however, it is unfortunately very easy to break.
What are the best practices for mapping the URLs of microservices? The only thing that comes to mind is exporting OpenAPI definitions, which the frontend could use to resolve the right paths. However, every service generates its own OpenAPI JSON, so we are basically back to the original problem.
Are you sure the frontend team needs ALL the exposed endpoints? Usually, frontends talk to an API gateway or, as the cool kids call them these days, a "Backend for Frontend".
In a nutshell, it's a special service that takes care of exposing only the functionality/endpoints needed by the frontend. It forwards calls to the relevant services or, if necessary, calls multiple services and aggregates the results.
In most cases these API gateways don't have a DB of their own, since they retrieve all their data from other services. They might, however, use a caching layer to speed things up.
You can even have multiple API gateways, one per frontend (e.g. desktop, mobile).
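For illustration, here is a minimal sketch of such a gateway using Spring Cloud Gateway (just one option; the question doesn't mandate a stack). Service names, ports, and paths are placeholders, and it assumes each microservice serves its endpoints at its root:

spring:
  cloud:
    gateway:
      routes:
        # expose only the endpoints the frontend actually needs
        - id: endpoint1
          uri: http://service-a:8080    # placeholder address of microservice a
          predicates:
            - Path=/services/endpoint1
          filters:
            # strip the public prefix so service-a receives /endpoint1
            - RewritePath=/services/(?<segment>.*), /$\{segment}
        - id: endpoint3
          uri: http://service-b:8080    # placeholder address of microservice b
          predicates:
            - Path=/services/endpoint3
          filters:
            - RewritePath=/services/(?<segment>.*), /$\{segment}

Anything that needs to aggregate results from several services would be a small custom handler in the same gateway rather than a plain route.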
I am learning AWS Lambda and have a basic architecture question about managing HTTPS calls from multiple Lambda functions to a single external service.
The external service will only process 3 requests per second from any one IP address. Since I have multiple asynchronous Lambdas, I cannot be sure I will stay below this threshold. I also don't know which IPs my Lambdas use, or even whether they are the same or not.
How should this be managed?
I was thinking of using an SQS FIFO queue, but I would need to set up a bidirectional system to get the call responses back to the appropriate Lambda. I think there must be a simple solution to this; I'm just not familiar enough yet.
What would you experts suggest?
If I am understanding your question correctly, then: you can create an API endpoint by building an API Gateway with Lambda integrations (preferably the Lambda proxy integration) and then use the throttling options to control the throughput. This can be done in different ways, e.g. at the account level or the method level; see the AWS docs.
You can perform load testing with Gatling or any other tool and then generate a report showing, for example, that even if your site receives, say, 6 TPS, throttling at the method level keeps the external service at only, say, 3 TPS.
How you throttle will depend upon your architecture; I have used method-level throttling to protect an external service at 8 TPS.
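As a concrete illustration, here is a hedged SAM template sketch that caps such a proxy API at 3 TPS at the stage level. The resource names (ProxyApi, ProxyFunction), runtime, and paths are placeholders, and note that API Gateway rejects throttled requests with HTTP 429 rather than queuing them, so callers need to retry:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ProxyApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      MethodSettings:
        - ResourcePath: "/*"          # apply to every resource
          HttpMethod: "*"             # and every method
          ThrottlingRateLimit: 3      # steady-state requests per second
          ThrottlingBurstLimit: 3     # no extra burst headroom
  ProxyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler            # placeholder handler that calls the external service
      Runtime: python3.12
      CodeUri: ./src
      Events:
        Call:
          Type: Api
          Properties:
            RestApiId: !Ref ProxyApi
            Path: /call
            Method: post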
We are splitting a monolithic application into microservices. This will be a gradual process: initially we will start with 2 microservices, and later we will split them into more, and so on.
The monolith exposes a REST API that provides methods for managing tens of different entities (e.g. users, user_types, roles, role_types, etc.). There is only one consumer of this REST API: a JavaScript frontend app.
We are currently investigating two possibilities for configuring the API gateway (Zuul):
URLs will contain the microservice name, e.g. /api/dictionary will serve /api/dictionary/user_types and /api/dictionary/role_types, while /api/data will serve /api/data/users and /api/data/roles. This means the URLs will change over time as we create more microservices, and every time we do, the consumer (the frontend) will have to be changed.
URLs will be based on the entity names, e.g. /api/users, /api/user_types, /api/roles and /api/role_types. The disadvantage is that the Zuul configuration will have to contain an explicit entry for every single entity managed by the system.
Which of the above approaches is correct?
What Manmay says is correct: you should go with the first approach for long-term gain.
If you still want an alternative, you can combine both approaches by configuring your API gateway so that it routes your requests as follows (see the configuration sketch after this list):
/api/users -> /api/data/users
/api/user_types -> /api/dictionary/user_types
/api/roles -> /api/data/roles
/api/role_types -> /api/dictionary/role_types
With this approach, you won't have to compromise on either concern, whether gateway maintenance or client-side changes.
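In Zuul (Spring Cloud Netflix) that combined mapping could look roughly like the application.yml below. The service IDs are placeholders, and it assumes each microservice can serve the entity path as received; Zuul's declarative routes can only strip or keep prefixes, so actually inserting the /data or /dictionary segment downstream would need a small custom pre filter:

zuul:
  routes:
    users:
      path: /api/users/**
      serviceId: data-service
      stripPrefix: false          # forward /api/users/... unchanged
    roles:
      path: /api/roles/**
      serviceId: data-service
      stripPrefix: false
    user-types:
      path: /api/user_types/**
      serviceId: dictionary-service
      stripPrefix: false
    role-types:
      path: /api/role_types/**
      serviceId: dictionary-service
      stripPrefix: false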
I am working on a new project that will be based on microservices. It's an internal app with only about 10 microservices. We will be using an API gateway for authentication and possibly some microservice aggregation (probably Netflix Zuul with Spring Boot).
What I'm not clear on is how we do the routing for A/B testing and canary testing. Let's assume I have 100 clients and we want to A/B test a new version of a microservice. The client app needs no changes; it's just internal changes to the function the microservice provides.
I understand we would stand up a new version of the microservice, say v2. What I'm puzzled about is how to direct, say, clients 1-10 to the new version. We need to be able to configure this centrally and not change anything on the client.
We know their MAC addresses (as well as other identifying attributes) and can insert any kind of header we want to identify their messages.
So how would I direct these clients to v2 of the API for the A/B test or canary deployment?
To describe a high-level, generic approach, you could do something like this (a configuration sketch follows the steps):
Your clients need to have some parameters that allow you to uniquely identify them. It looks like you already have this.
Implement an additional API service (let's call it the Experiment API). This service should have at least one endpoint that receives the client-identifying attributes and says whether the client is involved in the A/B test or not.
On each incoming request, the Gateway API needs to call that Experiment API endpoint to decide which microservice version (v1 or v2) to route the request to.
To avoid calling the Experiment API each time, you can introduce a caching layer in the Gateway API. As another option, you can use a custom cookie (recording whether the client is under "experiment"), call the Experiment API only when that cookie is absent, and return the cookie to the client with the response.
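Since you can already tag the chosen clients (e.g. with a header inserted at the edge), the gateway-side decision can be as simple as a header-matching route. A hypothetical Spring Cloud Gateway sketch, with the header name, hosts, and paths invented for illustration:

spring:
  cloud:
    gateway:
      routes:
        - id: service-v2
          uri: http://service-v2:8080
          predicates:
            - Path=/api/service/**
            - Header=X-Experiment, v2   # only requests tagged for the experiment
        - id: service-v1                # default route for everyone else
          uri: http://service-v1:8080
          predicates:
            - Path=/api/service/**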
I have published a prototype on GitHub that shows how you could achieve the routing using a Zuul gateway. The prototype just shows how to route traffic based on a cookie to different instances of the same application, but you can do the routing based on any other criteria.
You should also have a look at Spring Cloud Gateway as an alternative to Zuul. It seems very promising.
https://github.com/adiesner/spring-boot-sample-ci-gateway
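Spring Cloud Gateway also ships a Weight route predicate that handles percentage-based canary splits without custom code; a minimal sketch (hosts and percentages are just examples):

spring:
  cloud:
    gateway:
      routes:
        - id: service-v1
          uri: http://service-v1:8080
          predicates:
            - Path=/api/service/**
            - Weight=canary, 90         # roughly 90% of clients stay on v1
        - id: service-v2
          uri: http://service-v2:8080
          predicates:
            - Path=/api/service/**
            - Weight=canary, 10         # roughly 10% are routed to v2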
A simpler setup would be to just add NGINX in front of your services and use the split_clients directive.
http {
    # ...

    # application version 1a
    upstream version_1a {
        server 10.0.0.100:3001;
        server 10.0.0.101:3001;
    }

    # application version 1b
    upstream version_1b {
        server 10.0.0.104:6002;
        server 10.0.0.105:6002;
    }

    # hash the "token" query argument: 95% of values route to 1a, the rest to 1b
    split_clients "${arg_token}" $appversion {
        95%     version_1a;
        *       version_1b;
    }

    server {
        # ...
        listen 80;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://$appversion;
        }
    }
}
https://www.nginx.com/blog/performing-a-b-testing-nginx-plus/
To expound a bit on Set's answer: you'll need to introduce some instrumentation code into your gateway API to make the decision about which downstream endpoint to call. If, and only if, the gateway API is the only component of your distributed backend concerned with the experiment, the above solution is over-engineered: you can get by with just a library. But it's likely that you will soon discover that one or more of your other services also needs to know about the experiment, in which case you DO need a standalone service.
Generally speaking, though, building a robust experimentation framework is a hard task. You will quickly run into unexpected problems, e.g. experience stability (how do you guarantee the same experience to returning visitors?) or how to change the allocation proportions (or completely turn off the new code) without restarting the host application. You ought to investigate the open-source frameworks out there, or even commercial server-side instrumentation. (We have one at Variant.)