Using serverless for synchronous RPC microservices - aws-lambda

Let's say I have an API Gateway for third parties to create orders in my system. As part of order creation I need to validate that the request model I have been provided is correct - not just statically but by checking that foreign keys are valid - that the product IDs in the order are valid and that the account ID is valid. If not, I want to return a 400 to let the caller know they have passed an erroneous request.
What I would expect to do is to create an orders::createOrder lambda function, which would make parallel calls to products::listProducts, accounts::listAccountsForCustomer and other microservices to retrieve the information needed for validation, before I am happy to create the order in the system. This validation needs to happen synchronously as it’s a request/response from a third party to create the order.
I would usually want the logical domains - customers, products, orders, accounts - to be separate microservices, and I usually have some logic in an API Gateway layer for orchestration / mapping to the microservices below. I've been reading that calling Lambda from Lambda is a bad idea.
How do I correctly model this on serverless?

For your case it'll be best to keep all this logic within one Lambda. Splitting it into multiple smaller functions will add latency, so you get a worse user experience, and you'll multiply your cost since you have multiple functions running. You could also try Step Functions if you really want such a split, but they're also pretty expensive, so I don't recommend them for such a simple case.
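As a rough illustration of the single-Lambda approach, here is a minimal sketch of a createOrder handler that runs its foreign-key checks in parallel inside one function. The lookup helpers (listValidProductIds, accountExists, persistOrder) are hypothetical stand-ins for however the data is actually fetched and stored:

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Hypothetical lookups: in a single-Lambda design these hit the data
// stores (or in-process modules) directly instead of other Lambdas.
declare function listValidProductIds(): Promise<Set<string>>;
declare function accountExists(accountId: string): Promise<boolean>;
declare function persistOrder(order: unknown): Promise<string>;

export const handler = async (
  event: APIGatewayProxyEvent,
): Promise<APIGatewayProxyResult> => {
  const order = JSON.parse(event.body ?? "{}");

  // Run the foreign-key checks in parallel; the request stays synchronous.
  const [productIds, accountOk] = await Promise.all([
    listValidProductIds(),
    accountExists(order.accountId),
  ]);

  const badLines = (order.lines ?? []).filter(
    (line: { productId: string }) => !productIds.has(line.productId),
  );

  if (!accountOk || badLines.length > 0) {
    // Tell the third party exactly which references were invalid.
    return {
      statusCode: 400,
      body: JSON.stringify({ message: "invalid references", badLines }),
    };
  }

  const orderId = await persistOrder(order);
  return { statusCode: 201, body: JSON.stringify({ orderId }) };
};
```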

Related

How to use Event-Driven architecture to remove "api-based lambda calling another lambda" anti-pattern?

Suppose I have an API POST /order which invokes a PlaceOrder lambda and expects a response from it. The PlaceOrder lambda does some work, invokes another lambda, ProcessPayment, and expects a response. ProcessPayment in turn invokes a CreateInvoice lambda, expecting a response. The whole architecture is like a RequestResponse cycle. I would like to achieve this without a lambda invoking another lambda, as that is considered an anti-pattern. My question is: what is the best design pattern to achieve this behavior within 29 seconds with an event-driven architecture?
What AWS suggests: As per this official documentation, they suggest using SQS. But regarding SQS, I have some thoughts.
My thoughts:
With event sources such as SQS, SNS, etc., I can orchestrate these lambdas, but in that case the nature would not be synchronous and thus I would not get a response from the API.
My other solution:
Using Step Functions: I can orchestrate this workflow with a step function, and I think it is the more elegant solution in this synchronous calling case. But I would like to achieve this via event sources.
How can I design this scenario with best practices using event-based architecture?
In an Event-Driven Architecture, the communication between producers and consumers is asynchronous by design; that's how the architecture scales.
You can get nearly synchronous communication between two services in an EDA by creating dedicated queues / channels for them to communicate over, and making sure they're scaled up to a level where the latency is acceptable (close to synchronous values).
This adds some complexity: the services which need responses have to wait in a hot loop to get them as soon as possible, and if messages are lost you need retry policies, etc.
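To make that concrete, here is a hedged sketch of request/reply over SQS with a correlation ID, using the AWS SDK for JavaScript v3. The queue URLs are illustrative, and a real implementation would also need to handle unmatched replies becoming visible again (e.g. with a dedicated reply queue per consumer), dead-letter queues, and retries:

```typescript
import {
  SQSClient,
  SendMessageCommand,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";
import { randomUUID } from "node:crypto";

const sqs = new SQSClient({});

// Illustrative queue URLs; a real setup would likely use a dedicated
// reply queue per consumer so unrelated replies don't interleave.
const REQUEST_QUEUE_URL = process.env.REQUEST_QUEUE_URL!;
const REPLY_QUEUE_URL = process.env.REPLY_QUEUE_URL!;

// Send a request message and hot-poll the reply queue until a message
// with the matching correlation id arrives or the deadline passes.
export async function requestReply(payload: unknown, timeoutMs = 25_000): Promise<unknown> {
  const correlationId = randomUUID();

  await sqs.send(new SendMessageCommand({
    QueueUrl: REQUEST_QUEUE_URL,
    MessageBody: JSON.stringify({ correlationId, payload }),
  }));

  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { Messages } = await sqs.send(new ReceiveMessageCommand({
      QueueUrl: REPLY_QUEUE_URL,
      WaitTimeSeconds: 5, // long polling keeps latency low without spamming
      MaxNumberOfMessages: 10,
    }));
    for (const message of Messages ?? []) {
      const body = JSON.parse(message.Body ?? "{}");
      if (body.correlationId !== correlationId) {
        continue; // not ours; it becomes visible again after the visibility timeout
      }
      await sqs.send(new DeleteMessageCommand({
        QueueUrl: REPLY_QUEUE_URL,
        ReceiptHandle: message.ReceiptHandle!,
      }));
      return body.payload;
    }
  }
  throw new Error("No reply before the deadline; apply a retry policy here");
}
```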
I think you need to focus more on the mechanics of your program and a bit less on design patterns. You need to use the design patterns that fit your use-case; the other way around will not work. In the end, you build a program to fulfill a certain task or set of tasks, so that should be your end goal.
You’re stating that you have a process order Lambda, a create invoice Lambda and a process payment Lambda. I’d say the most interesting question is what you need to get done before you return a response to the user. Maybe you can process the order, respond to the user that it is done, and handle the invoicing and payments at a later moment. Typically that would mean you put a message in an SQS queue or on an SNS topic.
It could be that you need your payment to be processed before you respond to the user, because they need to be informed about the status of the payment. You could then combine both actions in a single Lambda, because there is no way to split the two tasks from one another. Keep in mind that often another option exists where you process the order first, put a message in a queue for the payment processing (as it typically is a process that involves a third party) and let the front end poll for an update on the payment status. This way you can return a response quickly and still give an update on the payment as soon as possible.
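A minimal sketch of that enqueue-then-poll flow, with illustrative queue, route, and helper names (lookupPaymentStatus stands in for a read against whatever store the payment worker updates):

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

// Hypothetical lookup against whatever store the payment worker writes to.
declare function lookupPaymentStatus(orderId: string): Promise<string>;

// POST /order: persist the order, hand payment off to a queue, and
// respond immediately with a status the front end can poll.
export async function placeOrder(order: { id: string; amount: number }) {
  // ... persist the order here ...
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: process.env.PAYMENT_QUEUE_URL!,
      MessageBody: JSON.stringify({ orderId: order.id, amount: order.amount }),
    }),
  );
  return { orderId: order.id, paymentStatus: "PENDING" };
}

// GET /order/{id}/payment: polled by the front end until the payment
// worker (consuming the queue) records a final status.
export async function getPaymentStatus(orderId: string) {
  return { orderId, paymentStatus: await lookupPaymentStatus(orderId) };
}
```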
The create invoice process is typically something you would never want to synchronously invoke during order confirmation. What if your invoicing application (internal or external) is down? Theoretically you could still process orders, as long as you create the invoice at some later moment in time. If you couple everything together, you make order confirmation dependent on your invoice creation process, which I would regard as an unnecessary dependency.
I would really advise against Step Functions for this use-case. They can be useful for long-running processes that need to keep state and ‘wake up’ at specific moments, but for this specific flow I would say they do not help and are unnecessarily complex. If you have three things you need to do that you cannot separate from one another, just run them in the same Lambda.

Attribute Based Access Control (ABAC) in a microservices architecture for lists of resources

I am investigating options to build a system to provide "Entity Access Control" across a microservices based architecture to restrict access to certain data based on the requesting user. A full Role Based Access Control (RBAC) system has already been implemented to restrict certain actions (based on API endpoints), however nothing has been implemented to restrict those actions against one data entity over another. Hence a desire for an Attribute Based Access Control (ABAC) system.
Given the requirement for the system to be fit for purpose, and my own priority of following the best practice of keeping security logic in a single location, I decided to create an externalised "Entity Access Control" (EAC) API.
The end result of my design was something similar to the following image I have seen floating around (I think from axiomatics.com).
The problem is that the whole thing falls over the moment you start talking about an API that responds with a list of results.
E.g. a /api/customers endpoint on a Customers API that takes in parameters such as a query filter, sort, order, and limit/offset values to facilitate pagination, and returns a list of customers to a front end. How do you then also provide ABAC on each of these entities in a microservices landscape?
Terrible solutions to the above problem tested so far:
1. Get the first page of results, send all of those to the EAC API, get the responses, drop the ones that are rejected, get more customers from the DB, check those... and repeat until either you get a full page of results or run out of customers in the DB. Testing showed that for 14,000 records (which is absolutely within reason in my situation) it would take 30 seconds to get an API response for someone who had zero permission to view any customers.
2. On every request to the all-customers endpoint, send a request to the EAC API for every customer available to the original requesting user. Testing showed that for 14,000 records the response payload would be over half a megabyte for someone who had permission to view all customers. I could split it into multiple requests, but then you are just trading payload size against request spam and the performance penalty doesn't go anywhere.
3. Give up on the ability to view multiple records in a list. This totally breaks the API's usefulness for customer needs.
4. Store all the data and logic required to perform the ABAC controls in each API. This is fraught with danger and basically guaranteed to fail in a way that is beyond my risk appetite, considering the domain I am working within.
Note: I tested with 14,000 records just because it's the benchmark of our current state of data. It is entirely feasible that a single API could serve 100,000 or 1m records, so anything that involves iterating over the whole data set or transferring the whole data set over the wire is entirely unsustainable.
So, here lies the question: how do you implement an externalised ABAC system in a microservices architecture (as per the diagram) whilst also being able to service requests that respond with multiple entities, with a query filter, sort, order, and limit/offset values to facilitate pagination?
After dozens of hours of research, it was decided that this is an entirely unsolvable problem and is simply a side effect of microservices (and more importantly, segregated entity storage).
If you want the benefits of a maintainable (as in, a single piece of externalised infrastructure) entity-level attribute access control system, a monolithic approach to entity storage is required. You cannot simultaneously reap the benefits of microservices.

Am I misusing GraphQL if I must decompose REST data, then re-aggregate it?

We are considering using GraphQL on top of a REST service (using the FHIR standard for medical records).
I understand that the pattern with GraphQL is to aggregate the results of multiple, independent resolvers into the final result. But a FHIR-compliant REST server offers batch endpoints that already aggregate data. Sometimes we'll need à la carte data - a patient's age or address only, for example. But quite often, we'll need most or all of the data available about a particular patient.
So although we can get that kind of plenary data from a single REST call that knits together multiple associations, it seems we will need to fetch it piecewise to do things the GraphQL way.
An optimization could be to eager-load and memoize all the associated data anytime any resolver asks for any data. In some cases this would be appropriate, while in other cases it would be serious overkill. But discerning when it would be overkill seems impossible, given that resolvers should be independent. Also, it seems bloody-minded to undo and then redo something that the REST service is already perfectly capable of doing efficiently.
So:
1. Is GraphQL the wrong tool when it sits on top of a REST API that can efficiently aggregate data?
2. If GraphQL is the right tool in this situation, is eager-loading and memoization of associated data appropriate?
3. If eager-loading and memoization is not the right solution, is there an alternative way to take advantage of the REST service's ability to aggregate data?
My question is different from this question and this question because neither touches on how to take advantage of another service's ability to aggregate data.
An alternative approach would be to parse the request inside the resolver for a particular query. The fourth parameter passed to a resolver is an object containing extensive information about the request, including the selection set. You could then await the batched request to your API endpoint based on the requested fields, return the result of the REST call, and let your lower-level resolvers handle parsing it into the shape the data was requested in.
Parsing the info object can be a PITA, although there are libraries out there for that, at least in the Node ecosystem (graphql-parse-resolve-info, for example).
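As a rough sketch of the idea, assuming a Node/TypeScript server built on graphql-js and a FHIR endpoint that honours the standard _elements parameter for limiting returned fields (names and URLs are illustrative, and fragments/aliases are ignored for brevity):

```typescript
import { FieldNode, GraphQLResolveInfo } from "graphql";

// Hypothetical FHIR client; the base URL is an assumption. FHIR's
// _elements parameter asks the server to return only the listed elements.
async function fetchPatient(id: string, elements: string[]): Promise<unknown> {
  const url = `https://fhir.example.com/Patient/${id}?_elements=${elements.join(",")}`;
  const response = await fetch(url);
  return response.json();
}

export const resolvers = {
  Query: {
    // The fourth argument (`info`) describes the incoming request,
    // including the selection set of the executing query.
    async patient(_src: unknown, args: { id: string }, _ctx: unknown, info: GraphQLResolveInfo) {
      // Collect the top-level field names the client actually asked for.
      const selections = info.fieldNodes[0].selectionSet?.selections ?? [];
      const fields = selections
        .filter((sel): sel is FieldNode => sel.kind === "Field")
        .map((sel) => sel.name.value);
      // One batched REST call for exactly those fields; the lower-level
      // resolvers then pick their data out of the returned object.
      return fetchPatient(args.id, fields);
    },
  },
};
```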

CQRS DDD: How to validate products existence before adding them to order?

CQRS states: command should not query read side.
OK. Let's take the following example:
The user needs to create orders with order lines; each order line contains product_id, price, and quantity.
The client sends requests to the server with the order information and the list of order lines.
The server (command handler) should not trust the client and needs to validate whether the provided products (product_ids) exist (otherwise, there will be a lot of garbage).
Since the command handler is not allowed to query the read side, it should somehow validate this information on the write side.
What we have on the write side: repositories. In terms of DDD, repositories operate only with aggregate roots; a repository can only GET BY ID and SAVE.
In this case, the only option is to load all product aggregates, one by one (the repository has only a GET BY ID method).
Note: Event sourcing is used for persistence, so it would be problematic and inefficient to load multiple aggregates at once to avoid multiple requests to the repository.
What is the best solution for this case?
P.S.: One solution is to redesign the UI (more of a task-based UI), e.g. the user first creates the order (with general info), then adds products one by one (each addition a separate HTTP request), but I still need to support bulk operations (an API for third-party applications, for example).
The short answer: pass a domain service (see Evans, chapter 5) to the aggregate along with the other command arguments.
CQRS states: command should not query read side.
That's not an absolute -- there are trade-offs involved when you include a query in your command handler; that doesn't mean that you cannot do it.
In domain-driven-design, we have the concept of a domain service, which is a stateless mechanism by which the aggregate can learn information from data outside of its own consistency boundary.
So you can define a service that validates whether or not a product exists, and pass that service to the aggregate as an argument when you add the item. The work of computing whether the product exists would be abstracted behind the service interface.
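A minimal sketch of that shape, with illustrative names (ProductCatalog stands in for whatever mechanism actually answers the existence question):

```typescript
// Illustrative names throughout; ProductCatalog is the domain service.
interface ProductCatalog {
  // Stateless query the aggregate can use to learn about data outside
  // its own consistency boundary.
  exists(productId: string): Promise<boolean>;
}

interface OrderLine {
  productId: string;
  price: number;
  quantity: number;
}

class Order {
  private lines: OrderLine[] = [];

  // The service arrives with the command arguments; the aggregate never
  // holds a reference to it between commands.
  async addItem(line: OrderLine, catalog: ProductCatalog): Promise<void> {
    if (!(await catalog.exists(line.productId))) {
      throw new Error(`Unknown product: ${line.productId}`);
    }
    this.lines.push(line);
  }
}
```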
But what you need to keep in mind is this: products, presumably, are defined outside of the order aggregate. That means that they can be changing concurrently with your check to verify the product_id. From the point of view of correctness, there's no real difference between checking the validity of the product_id in the aggregate, or in the application's command handler, or in the client code. In all three places, the product state that you are validating against can be stale.
Udi Dahan shared an interesting observation years ago:
A microsecond difference in timing shouldn’t make a difference to core business behaviors.
If the client validated the data one hundred milliseconds ago when composing the command, and the data was valid then, what should the behavior of the aggregate be?
Think about a command to add a product that is composed concurrently with an order of that same product - should the correctness of the system, from a business perspective, depend on the order in which those two commands happen to arrive?
Another thing to keep in mind is that, by introducing this check into your aggregate, you are coupling the ability to change the aggregate to the availability of the domain service. What is supposed to happen if the domain service can't reach the data it needs (because the read model is down, or whatever)? Does it block? Throw an exception? Make a guess? Does this choice ripple back into the design of the aggregate? And so on.

Validate Command in CQRS that related to other domain

I am learning to develop microservices using DDD, CQRS, and ES. It is an HTTP RESTful service. The microservices are for an online shop. There are several domains, like products, orders, suppliers, and customers. The domains are built as separate services. How do I do the validation if the command payload relates to other domains?
For example, here is the addOrderItemCommand payload in the order service (command side):
{
  "customerId": "CUST111",
  "productId": "SKU222",
  "orderId": "SO333"
}
How to validate the command above? How to know that the customer really exists in the database (query-side customer service) and is still active? How to know that the product exists in the database and that the status of the product is published? How to know whether the customer is eligible to get the promo price for the related product?
Is it OK to call an API directly (point-to-point / ajax / request promise) to validate this payload in the order command-side service? But I think the performance will get worse if the API is called directly just for validation. For context, we have already developed an event processor outside the command service that listens for events and applies them to the materialized view.
Thank you.
As there is more than one bounded context that needs to be queried for the validation to pass, you need to consider eventual consistency. That being said, there is always a chance that the process as a whole can be in an invalid state for a "small" amount of time. For example, the user could be deactivated after the command is accepted and before the order is shipped. An online shop is a complex system and exceptions could appear in any of its subsystems. However, being implemented as an event-driven system helps; every time the ordering process enters an invalid state you can take compensatory actions/commands. For example, if the user is deactivated in the meantime, you can cancel all their standing orders, release the reserved products, notify the potential customers who have those products in their wishlist that they are no longer available, and so on.
There are many kinds of validation in DDD, but I follow the general rule that validation should be done as early as possible, without compromising data consistency. So, in order to be early you can query the read model to reject commands that couldn't possibly be valid, and in order for the system to be consistent you need to make another check just before the order is shipped.
Now let's talk about your specific questions:
How to know that the customer really exists in the database (query-side customer service) and is still active?
You can query the read model to verify that the user exists and is still active. You should do this, as a command that comes from an invalid user is a strong indication of some kind of attack, and you don't want those kinds of commands passing through your system. However, even if a command passes this check, it does not necessarily mean that the order will be shipped, as other exceptions could be raised in between.
How to know that the product exists in the database and that the status of the product is published?
Again, you can query the read model in order to notify the user that the product is not available at the moment. Or, depending on your business, you could allow the command to pass if you know that those products will be available in less than 24 hours, based on some previous statistics (for example, you know that TV sets arrive daily in your stock). Or you could let the customer choose whether to wait or not. In this case, if the products are not in stock at the final phase of the ordering (the shipping), you notify the customer that the products are not in stock anymore.
How to know whether the customer is eligible to get the promo price for the related product?
You will probably have to query another bounded context, like a Promotions BC, to check this. It depends on how promotions are validated/used.
Is it OK to call an API directly (point-to-point / ajax / request promise) to validate this payload in the order command-side service? But I think the performance will get worse if the API is called directly just for validation.
This depends on how resilient you want your system to be and how fast you want to reject invalid commands.
Synchronous calls are simpler to implement but they lead to a less resilient system (you should be aware of cascading failures and use techniques like circuit breakers to stop them).
Asynchronous (i.e. using events) calls are harder to implement but make your system more resilient. In order to have async calls, the ordering system can subscribe to the other systems' events and maintain a private state that can be queried for validation purposes as the commands arrive. In this way, the ordering system continues to work even if the link to the inventory or customer management systems is down.
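Here is a minimal sketch of that private-state idea, with illustrative names; a real implementation would persist the state rather than hold it in memory:

```typescript
// The ordering service subscribes to customer events and maintains a
// local lookup table it can consult synchronously when commands arrive.
type CustomerEvent =
  | { type: "CustomerActivated"; customerId: string }
  | { type: "CustomerDeactivated"; customerId: string };

const activeCustomers = new Set<string>();

// Wired to the event subscription (SNS, Kafka, an event store, ...).
export function onCustomerEvent(event: CustomerEvent): void {
  if (event.type === "CustomerActivated") {
    activeCustomers.add(event.customerId);
  } else {
    activeCustomers.delete(event.customerId);
  }
}

// Command validation needs no remote call to the customer service, so it
// keeps working even when the link to that service is down.
export function validateAddOrderItem(cmd: { customerId: string }): void {
  if (!activeCustomers.has(cmd.customerId)) {
    throw new Error(`Unknown or inactive customer: ${cmd.customerId}`);
  }
}
```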
In any case, it really depends on your business, and none of us can tell you exactly what to do.
As always, everything depends on the specifics of the domain, but as a general principle cross-domain validation should be done via the read model.
In this case, I would maintain a read model within each microservice for use in validation. Of course, that brings with it the question of eventual consistency.
How you handle that should come from your understanding of the domain. Consider factors such as the length of the eventual-consistency window compared to the frequency of updates, and weigh the cost of getting it wrong for the business against the cost of development to minimise the problem. In many cases, just recording the fact that there has been a problem is more than adequate for the business.
I have a blog post dedicated to validation which you can find here: How To Validate Commands in a CQRS Application
