Multiple POST backends in KrakenD API Gateway - api-gateway

Can I have a KrakenD API Gateway POST endpoint which connects to multiple POST backends?

No, you can't. Multiple backends are only allowed for "safe" methods. Check this comment for more details: https://github.com/luraproject/lura/issues/194#issuecomment-460470842
However, you can create a custom client plugin and manage the transaction yourself.
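For a rough idea of what such a plugin has to do, here is a minimal Go sketch of the fan-out logic (not the actual KrakenD plugin interface, and with made-up backend URLs): forward the same POST body to each backend and decide yourself what "success" and rollback mean.

```go
// Minimal sketch (not the real KrakenD plugin interface) of the fan-out logic
// a custom client plugin would implement: replay one incoming POST against
// several backends and fail the whole call if any backend fails.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

// In a real KrakenD client plugin these would come from the endpoint's
// extra_config; here they are hard-coded placeholders.
var backends = []string{
	"http://users-service:8080/users",
	"http://audit-service:8080/events",
}

func fanOutPOST(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "cannot read body", http.StatusBadRequest)
		return
	}

	for _, url := range backends {
		req, err := http.NewRequestWithContext(r.Context(), http.MethodPost, url, bytes.NewReader(body))
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		req.Header.Set("Content-Type", r.Header.Get("Content-Type"))

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			http.Error(w, fmt.Sprintf("backend %s unreachable", url), http.StatusBadGateway)
			return
		}
		resp.Body.Close()
		if resp.StatusCode >= 300 {
			// No real rollback here: a production version would need
			// compensating requests to undo writes that already succeeded.
			http.Error(w, fmt.Sprintf("backend %s returned %d", url, resp.StatusCode), http.StatusBadGateway)
			return
		}
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/fanout", fanOutPOST)
	log.Fatal(http.ListenAndServe(":9999", nil))
}
```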

Related

Websocket API authenticated with mTLS in AWS Api Gateway

Does anyone know if it is possible to have a Websocket API Gateway support mTLS authentication? According to the documentation it is not supported (https://aws.amazon.com/blogs/compute/evaluating-access-control-methods-to-secure-amazon-api-gateway-apis/). Does anyone know if there is a feature request for this, or if there is a workaround? We've debated using a Lambda authorizer on the $connect route of the websocket API to authenticate client certificates manually, but it seems like an artificial implementation of mTLS since we're performing the client authentication after the TLS handshake. Not sure if this is our best option or if there is something better. Thanks.
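For what it's worth, the $connect workaround described above could look roughly like the Go Lambda authorizer below. It is only a sketch under assumptions: the client has to pass its PEM certificate in a header (the "x-client-cert" name is made up), and checking a certificate this way does not prove possession of the private key, so it is still not true mTLS.

```go
// Hedged sketch of a REQUEST-type Lambda authorizer for the $connect route:
// the client sends its PEM cert in a header (hypothetical "x-client-cert"),
// and we verify it against our own CA bundle shipped with the function.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(req events.APIGatewayCustomAuthorizerRequestTypeRequest) (events.APIGatewayCustomAuthorizerResponse, error) {
	deny := errors.New("Unauthorized")

	block, _ := pem.Decode([]byte(req.Headers["x-client-cert"])) // hypothetical header
	if block == nil {
		return events.APIGatewayCustomAuthorizerResponse{}, deny
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return events.APIGatewayCustomAuthorizerResponse{}, deny
	}

	// Verify against our own CA bundle packaged alongside the function.
	caPEM, _ := os.ReadFile("ca.pem")
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	if _, err := cert.Verify(x509.VerifyOptions{
		Roots:     pool,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}); err != nil {
		return events.APIGatewayCustomAuthorizerResponse{}, deny
	}

	// Allow the $connect call for this principal.
	return events.APIGatewayCustomAuthorizerResponse{
		PrincipalID: cert.Subject.CommonName,
		PolicyDocument: events.APIGatewayCustomAuthorizerPolicy{
			Version: "2012-10-17",
			Statement: []events.IAMPolicyStatement{{
				Action:   []string{"execute-api:Invoke"},
				Effect:   "Allow",
				Resource: []string{req.MethodArn},
			}},
		},
	}, nil
}

func main() { lambda.Start(handler) }
```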

how to integrate grpc service with graphql?

I have a gRPC service that contains several APIs (getName, getInfo, etc.) and a gRPC endpoint, something like this:
configuration-dev-grpc.kmc-default.us-west-2**.com:443
I created a GraphQL project; how can I connect GraphQL with the gRPC service through that endpoint, or do I need to do it another way?
gRPC and GraphQL are often considered alternatives, but if we treat gRPC as just procedure calls, there's no reason why a GraphQL server could not be implemented against a gRPC client to serve GraphQL clients.
At least one group has a solution:
https://github.com/ysugimoto/grpc-graphql-gateway
If you control the gRPC server, it would possibly be preferable to implement the GraphQL server alongside it, i.e. directly against whatever API it provides. Doing this would avoid the networking between gRPC client and server and the Protobuf (un)marshaling.
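As a rough illustration of the first approach (a GraphQL server implemented against a gRPC client), a sketch in Go might look like the following. The generated pb package and its ConfigurationClient/GetName RPC are hypothetical stand-ins for whatever your .proto actually defines, and graphql-go is just one possible server library.

```go
// Sketch: a GraphQL resolver that simply delegates to a gRPC client.
// The "pb" package, ConfigurationClient, and GetName are hypothetical.
package main

import (
	"crypto/tls"
	"log"
	"net/http"

	"github.com/graphql-go/graphql"
	"github.com/graphql-go/handler"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"

	pb "example.com/yourproject/gen/configuration" // hypothetical generated code
)

func main() {
	// TLS transport, since the endpoint from the question is on :443.
	conn, err := grpc.Dial(
		"configuration-dev-grpc.kmc-default.us-west-2**.com:443", // endpoint as given (redacted) in the question
		grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
	)
	if err != nil {
		log.Fatal(err)
	}
	client := pb.NewConfigurationClient(conn)

	queryType := graphql.NewObject(graphql.ObjectConfig{
		Name: "Query",
		Fields: graphql.Fields{
			"name": &graphql.Field{
				Type: graphql.String,
				Resolve: func(p graphql.ResolveParams) (interface{}, error) {
					// Each resolver calls the corresponding gRPC API.
					resp, err := client.GetName(p.Context, &pb.GetNameRequest{})
					if err != nil {
						return nil, err
					}
					return resp.GetName(), nil
				},
			},
		},
	})

	schema, err := graphql.NewSchema(graphql.SchemaConfig{Query: queryType})
	if err != nil {
		log.Fatal(err)
	}

	http.Handle("/graphql", handler.New(&handler.Config{Schema: &schema, Pretty: true}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```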

How does the client part communicate with a service with RabbitMQ?

I am currently learning the microservice architecture, using RabbitMQ to communicate between services. I understand how to manage communication between the different microservices, but I can't quite figure out how the client is supposed to communicate with them.
For example, if I create a user via my web app, do I have to send the request directly to the exchange, which will route it to the account service (and how would I even send it to my exchange?), or do I need some sort of API gateway that receives all my requests and then forwards them to the exchange?
Thanks in advance,
Yes, you need a gateway of some sort. More info here: https://microservices.io/
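To make that concrete, here is a minimal Go sketch of such a gateway: the web app POSTs plain HTTP to the gateway, and the gateway publishes the command to the exchange the account service consumes from. The exchange name, routing key, and connection URL are made up for the example.

```go
// Sketch of an HTTP gateway in front of RabbitMQ: clients never talk AMQP;
// they call HTTP, and the gateway publishes to the exchange. Names are
// placeholders ("users" exchange, "user.create" routing key).
package main

import (
	"context"
	"io"
	"log"
	"net/http"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	// The account service binds its queue to this exchange with the same routing key.
	if err := ch.ExchangeDeclare("users", "topic", true, false, false, false, nil); err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/users", func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		body, _ := io.ReadAll(r.Body)

		ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
		defer cancel()
		err := ch.PublishWithContext(ctx, "users", "user.create", false, false, amqp.Publishing{
			ContentType: "application/json",
			Body:        body,
		})
		if err != nil {
			http.Error(w, "queueing failed", http.StatusBadGateway)
			return
		}
		// The user is created asynchronously by the account service.
		w.WriteHeader(http.StatusAccepted)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```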

Add zuul rate limit per user where there is no centralized authorization module

I have a microservice architecture that works with a Spring Zuul gateway, as in the image below.
My authentication service returns an x-auth-token, which is generated by the Spring authentication resolver, and my token repository is Redis. So users have to use this service to authenticate before using the other services.
All my other services connect to the same Redis instance, so when they receive an x-auth-token they can get the user session details. I normally do the authorization by using the @PreAuthorize annotation and specifying the roles that can access a controller or method.
Everything was working fine so far. Then I was asked to add rate-limit functionality to this architecture. For example, a single user should not be able to make more than 1 POST request to a specific API in the books service. Also, if there were two books-service instances, I would want them both to be counted as a single service when it comes to rate limiting.
I found tons of documents that referred me to a project called spring-cloud-zuul-ratelimit. Looking at its documentation, I realized it supports Redis as storage (good for me, because I already have Redis there) and it also supports handling rate limits per user.
The problem is that my Zuul gateway knows nothing about the users! It has no access to the Redis storage. If I give it access to Redis, that problem might be solved, but another one would arise: I would need to authorize the user twice, which takes more time and more Redis traffic: once at the gateway and once at each service (to check the roles and session details).
I'm looking for a solution that comes closest to this list of needs:
Does not change my authentication method (I can't just switch to JWT or OAuth)
Does not duplicate authorization or Redis queries
Balancing the requests between my services should not affect the rate limit. If each instance of service X is requested once by a single user, then that user has sent two requests.
Hopefully there is good Spring support for the answer.
I would prefer to be able to change the limits dynamically.
The Zuul rate limiter plugin basically tracks a counter of user requests per time interval, based on a specific key (which could be the user's IP, some ID, the request path, or a custom combination produced by a custom key generator). You can add it to the existing Zuul gateway application.
Let's say the ratelimiter-gateway uses "[clientIP][userID][method][path]" as the request counter key stored in Redis, e.g. "10.8.14.58:some@mail.com:POST:/books".
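To make that counting mechanism concrete, here is a minimal Go sketch of the kind of fixed-window counter such a key implies. This is not the spring-cloud-zuul-ratelimit code itself, and the limit and window values are made up.

```go
// Sketch of the counter mechanism described above: one Redis counter per
// client/user/method/path key, incremented on every request and expired
// after the time window. Limit and window are arbitrary example values.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

const (
	limit  = 1               // e.g. at most 1 POST per window
	window = 1 * time.Minute // interval the counter lives for
)

func allow(ctx context.Context, rdb *redis.Client, clientIP, userID, method, path string) (bool, error) {
	key := fmt.Sprintf("ratelimit:%s:%s:%s:%s", clientIP, userID, method, path)

	count, err := rdb.Incr(ctx, key).Result()
	if err != nil {
		return false, err
	}
	if count == 1 {
		// First request in this window: start the expiry clock.
		rdb.Expire(ctx, key, window)
	}
	return count <= limit, nil
}

func main() {
	// A shared Redis means every gateway/service instance sees the same
	// counter, so load balancing does not reset the limit.
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	ctx := context.Background()

	ok, err := allow(ctx, rdb, "10.8.14.58", "some@mail.com", "POST", "/books")
	fmt.Println(ok, err)
}
```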
Here are some options I can think of:
If the client sends some ID, you can use it directly as the rate limiter combination key.
If the user only sends a JWT token, you can verify its claims to get the user ID (assuming it's embedded in the token), using the same secret key that the authn service uses to generate the JWT, supplied to the Zuul gateway as app properties (via OS env, a credentials vault, etc.). Or you can just use the token itself as the user ID.
Move the authorization logic to the Spring Zuul + ratelimiter service. It would validate incoming requests to the author & books services, get the user ID from the token, and then pass it as another header, e.g. "x-app-user-id", to the upstream services using a Spring Boot filter. This way, the upstream services won't do any authn logic; they just read the user ID from the header. Communication between the author & books services might use the same header. This, of course, assumes the upstream servers won't be accessed directly from the outside network.
It might also be a good idea to use a separate Redis instance as the rate limit key storage.
As for dynamic config, based on its documentation, you can adjust the rate limit config via properties. I don't know whether it can be adjusted dynamically at runtime, via Spring Cloud Config or other remote config implementations, without the gateway app needing to be restarted.

Intercept and forward DynamoDB traffic using aws-sdk-go

I have a use case where I have services that require interaction with DynamoDB (the programming environment is Go). But assume these services don't have AWS credentials, and I have a custom AuthN/AuthZ mechanism to validate the services internally and set credentials. So I want to write an AuthN proxy service which intercepts requests to DynamoDB, checks what type of operation it is (Get/Set/Delete), validates it, attaches DDB credentials to the request, queries DynamoDB, and sends the response back to the clients. I tried using a proxy as mentioned in the DDB documentation, but that is HTTP CONNECT tunnelling, and I couldn't intercept the traffic in between since it is HTTPS traffic to DynamoDB. Can someone tell me how I can achieve this using the AWS Go SDK library?
Thanks in advance.
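One way around the CONNECT-tunnel problem is to stop intercepting TLS at all: point the SDK at your proxy as its DynamoDB endpoint, and have the proxy terminate the request, run your custom AuthN/AuthZ, then SigV4-sign a fresh request with the real credentials and forward it to DynamoDB. Below is a hedged sketch of such a signing proxy using aws-sdk-go's v4 signer; the internal auth header, region, and env var names are assumptions.

```go
// Sketch of a DynamoDB signing proxy: services send DynamoDB JSON requests
// here with dummy credentials, the proxy applies its own AuthN/AuthZ,
// re-signs the request with SigV4 using credentials only the proxy holds,
// and forwards it to the real endpoint. Header/env/region names are made up.
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials"
	v4 "github.com/aws/aws-sdk-go/aws/signer/v4"
)

const ddbEndpoint = "https://dynamodb.us-west-2.amazonaws.com/"

func main() {
	signer := v4.NewSigner(credentials.NewStaticCredentials(
		os.Getenv("DDB_ACCESS_KEY"), os.Getenv("DDB_SECRET_KEY"), ""))

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// 1. Custom AuthN/AuthZ of the calling service (placeholder check).
		if r.Header.Get("x-internal-token") == "" {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}

		// 2. The DynamoDB operation travels in the X-Amz-Target header,
		//    e.g. "DynamoDB_20120810.GetItem": inspect it to allow/deny.
		op := r.Header.Get("X-Amz-Target")
		if strings.Contains(op, "Delete") {
			http.Error(w, "deletes not allowed", http.StatusForbidden)
			return
		}

		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}

		// 3. Build a fresh request to the real endpoint and SigV4-sign it
		//    with the credentials only this proxy holds.
		out, err := http.NewRequest(http.MethodPost, ddbEndpoint, bytes.NewReader(body))
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		out.Header.Set("Content-Type", r.Header.Get("Content-Type"))
		out.Header.Set("X-Amz-Target", op)
		if _, err := signer.Sign(out, bytes.NewReader(body), "dynamodb", "us-west-2", time.Now()); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}

		// 4. Forward and relay the response back to the caller.
		resp, err := http.DefaultClient.Do(out)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		w.Header().Set("Content-Type", resp.Header.Get("Content-Type"))
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body)
	})

	log.Fatal(http.ListenAndServe(":8443", nil))
}
```

On the service side you would then configure the client with something like aws.Config{Endpoint: aws.String("https://ddb-proxy.internal"), Credentials: credentials.NewStaticCredentials("dummy", "dummy", "")} (the proxy hostname is a placeholder), so the SDK talks to the proxy directly instead of tunnelling through it to DynamoDB.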
