Authorization of commands in Axon - event-sourcing

Up until now I have been handling authorization in the CommandHandlers.
For example, I have an aggregate "Team" containing a list of managers (the AggregateIdentifier of a User). All command handlers in the Team aggregate then verify that the user executing the command is a manager of the team.
The userId is injected as metadata in a CommandHandlerInterceptor based on the SecurityContext.
My main concern is that, when I use sagas, it becomes additional overhead to maintain the user context across the commands issued against different aggregates. Aside from that, the manager association can expire while the saga is running, causing subsequent commands to fail and leaving an incomplete state that also needs to be handled with some rollback functionality.
Is it better to do the authorization in my controller layer to avoid the additional overhead or should I see it more as good practice to let my CommandHandlers decide whether the command is valid for the aggregate?

Authorization to perform certain operations/commands is something which I'd argue isn't domain-specific logic. Instead, it is more a form of cross-cutting concern which you need throughout your application. Thus, the @CommandHandler annotated method is, in my view, not the ideal place for it. However, placing it close by makes a lot of sense.
You have pointed out you are already using a CommandHandlerInterceptor to populate the Spring SecurityContext, thus I am assuming you are using a CommandDispatchInterceptor to populate the command's MetaData with that information when you send a command out. This is a great use of the interceptor logic indeed, so I'd keep that in place. This, however, sets the information; it doesn't validate it.
To that end, you could build your own Handler Enhancer, which validates security metadata on a command. You could even build a dedicated annotation you'd add next to the @CommandHandler annotation, which describes the required roles. That way, the method still portrays which roles you need for the given command, but the actual validation is done for you in this Handler Enhancer.
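As a rough sketch of that combination (assuming Axon 4 and Spring Security; the @RequiresRole annotation and the "userId" metadata key are made up for illustration, and the actual role check against @RequiresRole would live in a custom HandlerEnhancerDefinition):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;

import org.axonframework.commandhandling.CommandMessage;
import org.axonframework.messaging.MessageDispatchInterceptor;
import org.springframework.security.core.context.SecurityContextHolder;

// Hypothetical annotation placed next to @CommandHandler to declare the required roles.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RequiresRole {
    String[] value();
}

// Dispatch interceptor copying the authenticated user into the command's MetaData.
class UserMetaDataDispatchInterceptor implements MessageDispatchInterceptor<CommandMessage<?>> {

    @Override
    public BiFunction<Integer, CommandMessage<?>, CommandMessage<?>> handle(
            List<? extends CommandMessage<?>> messages) {
        // Simplification: assumes an authenticated user is always present.
        String userId = SecurityContextHolder.getContext().getAuthentication().getName();
        return (index, command) -> command.andMetaData(Map.of("userId", userId));
    }
}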
Now, let's circle back to your question:
Is it better to do the authorization in my controller layer to avoid the additional overhead or should I see it more as good practice to let my CommandHandlers decide whether the command is valid for the aggregate?
I think it's fine to do it in the aggregate, potentially making it cleaner through use of a Handler Enhancer. When it comes to your concern in the Saga, well, I think you should see that separately. The Saga handles events, facts that something has happened. Ignoring that fact because somebody who initiated the operations which led to it doesn't have the rights doesn't change the point that it still has happened. Additionally, you are indeed given no guarantee about the timing of the Saga at all. Maybe your Saga deals with historical events, meaning it is completely out of scope.
If possible within your system, I would regard any command the Saga wants to publish as being sent by a "system user". The Saga is not something your users (which have specific roles) will directly influence; it is all indirect. The Saga is internal to your system, hence it is the system describing the intent to perform an operation.
That's my two cents on the situation, hope this helps you out @Vincent!

How to handle dependent behavior in a domain class?

Let's say I've got a domain class which has functions that are to be called in a sequence. Each function does its job, but if the previous step in the sequence is not done yet, it throws an error. The other way is that each function first completes the step required for it to run, and then executes its own logic. I feel that this way is not a good practice, since I am adding multiple responsibilities, and the caller won't know which operations can happen when he invokes a method.
My question is: how to handle dependent scenarios in DDD. Is it the responsibility of the caller to invoke the methods in the right sequence? Or do we make the methods handle the dependent operations before their own logic?
Is it the responsibility of the caller to invoke the methods in the right sequence?
It's OK if those methods have a business meaning. For example the client may book a flight, and then book a hotel room. Both of those are things the client understands, and it is the client's logic to call them in this sequence. On the other hand, inserting the reservation into the database, then committing (or whatever) is technical. The client should not have to deal with that at all. The same goes for "initializing" an object, then calling other methods, then calling "close".
Requiring a sequence of technical calls is a form of temporal coupling, it is considered a bad practice, and is not directly related to DDD.
The solution is to model the problem better. There is probably a higher level use-case the caller wants achieved with this call sequence. So instead of publishing the individual "steps" required, just support the higher use-case as a whole.
In general you should always design with the goal to get any sequence of valid calls to actually mean something (as far as the language allows).
Update: A possible model for the mentioned "File" domain:
public interface LocalFile {
    RemoteFile upload();
}

public interface RemoteFile {
    RemoteFile convert(...);
    LocalFile download();
}
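To illustrate how the types remove the temporal coupling, a caller can only ever chain the operations in a valid order (a sketch assuming some implementation behind these interfaces; convert's parameters are elided here just as in the interfaces above):

// Only a RemoteFile offers convert() and download(), so nothing can be
// converted or downloaded before it has been uploaded.
LocalFile processed(LocalFile source) {
    RemoteFile uploaded = source.upload();
    RemoteFile converted = uploaded.convert(/* conversion arguments elided */);
    return converted.download();
}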
From my point of view, what you are describing is the orchestration of domain model operations. That's the job of the application layer, the layer above the domain model. You should have an application service that calls the domain model methods in the right sequence, and it should also take into account whether some step has left any task undone and, in such a case, tell the next step to perform it.
TLDR; Scroll to the bottom for the answer, but the backstory will give some good context.
If the caller into your domain must know the order in which to call things, then you have missed an opportunity to encapsulate business logic in your domain, which is a symptom of an anemic domain.
@RobertBräutigam made a very good point:
Requiring a sequence of technical calls is a form of temporal coupling, it is considered a bad practice, and is not directly related to DDD.
This is true, but it is worse when you do it with your domain model, because non-domain concerns get intermixed with domain concerns. Intent becomes lost in a sea of non-business logic. If you can, look for a higher-order aggregate that encapsulates the ordering. To borrow Robert's example, rather than booking a flight and then a hotel room, and forcing that order on the client, you could have a Vacation aggregate take both and validate it.
I know that sounds wrong in your case, and I suspect you're right. There's a clear dependency that can't happen all at once, so that can't be the end of the story. When you have a clear dependency with intermediate transactions that must occur before the "final" state, we have... orchestration (think sagas, distributed transactions, domain events and all that goodness).
What you describe with file operations spans across transactions. The manipulation (state change) of the domain is transactional at each point in a distributed transaction, but is not transactional overall. So when @choquero70 says
you are describing is the orchestration of domain model operations. That's the job of the application layer, the layer upon domain model.
that's also correct. Orchestration is key. Each step must manipulate the state of the domain once, and once only, and leave it in a valid state, but it is OK for there to be multiple steps.
Each of those individual points along the timeline are valid moments in the state of your domain.
So, back to your model. If you expose a single interface with multiple possible calls to all steps, then you leave yourself open to things being called out of order. Make this impossible, or at least improbable. Orchestration is not just about what to do, but also about what to prevent from happening. Create smaller interfaces/classes to avoid increasing the "surface area" of what could be misused accidentally.
In this way, you are guiding the caller on what to do next by feeding them valid intermediate states. But, and this is the important part, the burden of knowing what to call in what order is not on the caller. Sure, the caller could know what to do, but why force it?
Your basic algorithm is the same: upload, transform, download.
Is it the responsibility of the caller to invoke the methods in the right sequence?
Not exactly. It is the responsibility of the caller to choose from the legitimate choices given the state of your domain. It's "your" responsibility to present these choices via business methods on your correctly modeled moment/interval aggregate, suitable for the caller to use.
Or do we make the methods handle the dependent operations before it's own logic?
If you've setup orchestration correctly, this won't be necessary. But it does make sense to validate anyway.
On a side note, each step of the orchestration you do should be very linear in nature. I tell my developers to be suspicious of an orchestration step that has an if statement in it. If there's an if, that branch is likely better off as part of another orchestration step or encapsulated in business logic.

Compensating Events on CQRS/ES Architecture

So, I'm working on a CQRS/ES project in which we are having some doubts about how to handle trivial problems that would be easy to handle in other architectures.
My scenario is the following:
I have a customer CRUD REST API and each customer has a unique document (number), so when registering a new customer I have to verify that there is no other customer with that document to avoid duplication. But in a CQRS/ES architecture, where we have eventual consistency, I found out that this kind of validation can be very hard to address.
It is important to notice that my problem is not across microservices, but between the command application and the query application of the same microservice.
Also we are using eventstore.
My current solution:
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%. That's when my second validation kicks in, when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on eventstore.
Although this works, there are two things that bother me here. The first is that my command application relies on the query application, so if my query application is down, my command is affected (today I just return false in my validation if the query side is down, but still...). The second is: should a query/read model really be able to emit events? And if so, what is the correct way of doing it? Should the command side have some kind of API for that? Or should the query side emit the event directly to eventstore using some common shared library? And if I have more than one view/read model, which one should I choose to handle this?
I really hope someone could shine a light on these questions and help me with these matters.
For reference, you may want to be reviewing what Greg Young has written about Set Validation.
I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right?
That's exactly right - your read model is a stale copy, and may not have all of the information collected by the write model.
That's when my second validation kicks in, when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on eventstore.
This spelling doesn't quite match the usual designs. The more common implementation is that, if we detect a problem when reading data, we send a command message to the write model, telling it to straighten things out.
This is commonly referred to as a process manager, but you can think of it as the automation of a human supervisor of the system. Conceptually, a process manager is an event sourced collection of messages to be sent to the command model.
You might also want to consider whether you are modeling your domain correctly. If documents are supposed to be unique, then maybe the command model should be using the document number as a key in the book of record, rather than using the customer. Or perhaps the document id should be a function of the customer data, rather than being an arbitrary input.
as far as I know, eventstore doesn't have transactions across different streams
Right - one of the things you really need to be thinking about in general is where your stream boundaries lie. If set validation has significant business value, then you really need to be thinking about getting the entire set into a single stream (or by finding a way to constrain uniqueness without using a set).
How should I send a command message to the write model? via API? via a message broker like Kafka?
That's plumbing; it doesn't really matter how you do it, so long as you are sure that the command runs within its own transaction/unit of work.
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%.
No, you cannot safely rely on the query side, which is eventually consistent, to prevent the system from stepping into an invalid state.
You have two options:
You permit the system to enter a temporary, pending state and then, eventually, bring it into a valid permanent state; for this you could let the command pass, yield a CustomerRegistered event and, using a Saga/Process manager, verify against a uniquely-indexed-by-document collection and issue a compensating command (not an event!), i.e. UnregisterCustomer.
Instead of sending the command directly, you create and start a Saga/Process that preallocates the document in a uniquely-indexed-by-document collection and, if successful, then sends the RegisterCustomer command. You can model the Saga as an entity.
So, in both solutions you use a Saga/Process manager. In order for the system to be resilient, you should make sure that the RegisterCustomer command is idempotent (so you can resend it if the Saga fails or is restarted).
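A framework-neutral sketch of the first option (all type names here are hypothetical stand-ins; the collaborators are whatever your system uses for dispatching commands and for the uniquely indexed collection):

// Minimal collaborator types, purely for illustration.
record CustomerRegistered(String customerId, String documentNumber) {}
record UnregisterCustomer(String customerId, String reason) {}

interface DocumentIndex {
    // Returns true only if the document number was not already claimed by another customer.
    boolean tryClaim(String documentNumber, String customerId);
}

interface CommandBus {
    void dispatch(Object command);
}

// Saga/Process manager reacting to CustomerRegistered: it tries to claim the document
// number in the uniquely indexed collection and compensates with a command if that fails.
class CustomerRegistrationProcess {

    private final DocumentIndex documentIndex;
    private final CommandBus commandBus;

    CustomerRegistrationProcess(DocumentIndex documentIndex, CommandBus commandBus) {
        this.documentIndex = documentIndex;
        this.commandBus = commandBus;
    }

    void on(CustomerRegistered event) {
        if (!documentIndex.tryClaim(event.documentNumber(), event.customerId())) {
            // Compensating command, not an event: the write model decides what "undo" means.
            commandBus.dispatch(new UnregisterCustomer(event.customerId(), "duplicate document"));
        }
    }
}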
You've butted up against a fairly common problem. I think the other answer by VoiceOfUnreason is worth reading. I just wanted to make you aware of a few more options.
A simple approach I have used in the past is to create a lookup table. Your command tries to register the key in a unique-constraint table. If it can reserve the key, the command can go ahead.
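A minimal sketch of that lookup-table idea with plain JDBC, assuming a customer_document table with a unique constraint on document_number (table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class DocumentReservation {

    private final Connection connection;

    DocumentReservation(Connection connection) {
        this.connection = connection;
    }

    // Returns true if the document number could be reserved; a unique-constraint
    // violation means another customer already holds it.
    boolean tryReserve(String documentNumber, String customerId) {
        String sql = "INSERT INTO customer_document (document_number, customer_id) VALUES (?, ?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, documentNumber);
            statement.setString(2, customerId);
            statement.executeUpdate();
            return true;
        } catch (SQLException duplicateKey) {
            // Simplification: any SQLException is treated as "already taken" here.
            return false;
        }
    }
}

If the reservation succeeds the command can go ahead; if the command later fails, the reservation should be released again.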
Depending on the nature of the data and the domain, you could let this 'problem' occur and raise additional events to mark it. If it is something that's important to the business/the way the application works, then you can deal with it either manually or at the time via compensating commands. If the latter, it would make sense to use a process manager.
In some (rare) cases where speed/capacity is less of an issue, you could consider old-fashioned locking and transactions. Admittedly these are much better suited to CRUD-style implementations, but they can be used in CQRS/ES.
I have more detail on this in my blog post: How to Handle Set Based Consistency Validation in CQRS
I hope you find it helpful.

How to use compensating measures in an CQRS and DDD based application

Let's assume we host two microservices: RealEstate and Candidate.
The RealEstate service is responsible for managing rental properties, landlords and so forth.
The Candidate service provides commands to apply for a rental property.
There would be a CandidateForRentalProperty command which requires the RentalPropertyId and all necessary Candidate information.
Now the crucial point: Different types of RentalPropertys require a different set of Candidate information.
Therefore the commands and aggregates were split up:
Commands: CandidateForParkingLot, CandidateForFlat, and so forth.
Aggregates: ParkingLotCandidature, FlatCandidature, and so forth.
The UI asks the read model to decide which command has to be called.
It's reasonable for me to validate the Candidate information, and all the business logic involved with that, in the Candidate domain layer, but to leave out validating whether the correct command was called based on the given RentalPropertyId. Reason: multiple aggregates are involved in this validation.
The microservice should be autonomous, and its read model consumes events from the RealEstate domain, hence it's not guaranteed to be up to date. We don't want to reject candidates based on that but rather use eventual consistency.
Yes, this could lead to inept Candidate information being used for a certain kind of RentalProperty. Someone could just call the CandidateForFlat command with a parking lot rental property id.
But how do we handle the cases in which this happens?
The RealEstate domain does not know anything about Candidates.
Would there be an event handler which checks if there is something wrong and execute an appropriate command to compensate?
On the other hand, this "mapping" is domain logic and I'd like to accommodate it in the domain layer. But I don't know who's accountable for this kind of compensating measure. Would the Candidate aggregate be informed, with an event like IneptApplicationTypeUsed or something like that?
As an aside - commands are usually imperative verbs. ApplyForFlat might be a better spelling than CandidateForFlat.
The pattern you are probably looking for here is that of an exception report; when the candidate service matches a CandidateForFlat message with a ParkingLot identifier, then the candidate service emits as an output a message saying "hey, we've got a problem here".
If a follow-up message fixes the problem -- the candidate service gets an updated message that corrects the identifier in the CandidateForFlat message, or the candidate service gets an update from real estate announcing that the identifier actually points to a Flat -- then the candidate service can emit another message: "never mind, the problem has been fixed".
I tend to find in this pattern that the input commands to the service are really all just variations of handle(Event); the user submitted, the http request arrived; the only question is whether or not the microservice chooses to track that event. In other words, the "command" stream is just another logical event source that the microservice is subscribed to.
As you said, validation of commands should be performed at the point of command generation - on the client side - where read models are available.
Command processing is performed by aggregate, so it cannot and should not check validity or existence of other aggregates. So it should trust a command issuer.
If commands come from an untrusted environment like a public API, then your API gateway becomes a client, and it should have the necessary read models to validate references.
If you want to accept a command fast and check it later, then log events like ClientAppliedForParkingLot, and have a Saga/Process manager handle further workflow by keeping its internal state, and issuing commands like AcceptApplication or RejectApplication.
I understand the need for validation but I don't think the example you gave calls for cross-Aggregate (or cross-microservice for that matter) compensating measures as stated in the Q title.
Verifications like checking that the ID the client gave along with the flat rental command matches a flat and not a parking lot, that the client has permission to do that, and so forth, are legitimate. But letting the client create such commands in the wild and waiting for an external actor to come around and enforce these rules seems subpar because the rules could be made intrinsic properties of the object originating the process.
So what I'd recommend is to change the entry point into the operation - to create the Candidature Aggregate Root as part of another Aggregate Root's behavior. If that other Aggregate (RentalProperty in our case) lives in another Bounded Context/microservice, you can maintain a list of RentalProperties in the Candidate Bounded Context with just the amount of info needed, and initiate the Candidature from there.
So you would have
FlatCandidatureHandler ==loads==> RentalProperty ==creates==> FlatCandidature
or
FlatCandidatureHandler ==checks existence==> local RentalProperty data ==creates==> FlatCandidature
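Roughly, the second flow could look like this (all names here are hypothetical; the local RentalProperty data is the slimmed-down copy maintained from RealEstate events):

// Local, read-only copy of RentalProperty data kept inside the Candidate context.
record LocalRentalProperty(String id, String type) {}

record ApplyForFlat(String rentalPropertyId, String candidateId) {}

interface LocalRentalPropertyStore {
    LocalRentalProperty findById(String rentalPropertyId);
}

class FlatCandidatureHandler {

    private final LocalRentalPropertyStore rentalProperties;

    FlatCandidatureHandler(LocalRentalPropertyStore rentalProperties) {
        this.rentalProperties = rentalProperties;
    }

    void handle(ApplyForFlat command) {
        LocalRentalProperty property = rentalProperties.findById(command.rentalPropertyId());
        if (property == null || !"FLAT".equals(property.type())) {
            throw new IllegalArgumentException(
                    "Rental property is not a flat: " + command.rentalPropertyId());
        }
        // ... create the FlatCandidature aggregate from here
    }
}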
As a side note, what could actually necessitate compensating actions are factors extrinsic to the root object of the process. For instance, if the property becomes unavailable in the meantime. Then whatever Aggregate holds that information should emit an event when that happens, and the compensation should be initiated.

Command Validation in DDD with CQRS

I am learning DDD and making use of the CQRS pattern. I don't understand how to validate business rules in a command handler without reading from the data store.
For example, Chris wants to give Ashley a gift.
The command might be GiveGiftCommand.
At what point would I verify Chris actually owns the gift he wants to give? And how would I do that without reading from the database?
There are different views and opinions about validation in command handlers.
A command can be rejected; we can say No to the command if it is not valid.
Typically you would have validation that occurs in the UI, and that might be duplicated inside the command handler (some people tend to put it in the domain too). The command handler can then run simple validation that can occur outside of the entity, such as: is the data in the correct format, are the expected values present, etc.
Business logic, on the other hand, should not be in a command handler. It should be in your domain.
So I think that the underlying question is...
Should I query the read side from Command Handlers?
I would say no. Do not use the read model in the command handlers or domain logic. But you can always query your read model from the client to fetch the data you need for your command and to validate the command. You would query the read side on the client to check whether Chris actually owns the gift he wants to give. Of course, validation involving a read model is likely to be eventually consistent, which is of course another reason a command could be rejected by the aggregate, inside the command handler.
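As a sketch of that client-side check (GiftReadModel, CommandGateway and the command shape are all hypothetical stand-ins):

record GiveGiftCommand(String giftId, String fromUserId, String toUserId) {}

interface GiftReadModel {
    boolean isOwnedBy(String giftId, String userId);
}

interface CommandGateway {
    void send(Object command);
}

class GiveGiftClient {

    private final GiftReadModel readModel;
    private final CommandGateway gateway;

    GiveGiftClient(GiftReadModel readModel, CommandGateway gateway) {
        this.readModel = readModel;
        this.gateway = gateway;
    }

    void giveGift(String giftId, String chrisId, String ashleyId) {
        // Eventually consistent pre-check against the read side; the aggregate may
        // still reject the command if its own state says otherwise.
        if (!readModel.isOwnedBy(giftId, chrisId)) {
            throw new IllegalStateException("Sender does not own this gift");
        }
        gateway.send(new GiveGiftCommand(giftId, chrisId, ashleyId));
    }
}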
Some people disagree, saying that if you require your commands to contain the data the handler needs to validate the command, then you can never change the validation logic within your handler/domain without affecting the client as well. This exposes too much of the domain knowledge to the client and goes against the fact that the client only wants to express an intent. So they would tend to provide a GiftService interface (which is part of the ubiquitous language) to your command handler and then implement the interface as needed - which might include querying the read side.
I think that the client should always assume that the commands it issues will succeed. Calling the read side to validate the command shouldn't be needed. Getting two contradictory commands is very unlikely (e.g. two users creating accounts with the same email address). You should then have a means to issue a corrective action, something like, for example, a Saga/Process Manager. So instead, making a corrective action would be less problematic than if the command could have been validated and not dispatched in the first place.
It depends on whether the operation is async or not, i.e. whether the user expects an immediate response. Gift ownership is basically a security feature, and that check can be done as a 'prep' operation before invoking the actual service or sending the GiveGiftCommand.
The only command validation you can do is to make sure it contains the data in the required format (UI validation) and that the user has the permissions to do that action. But after the command is sent it's up to the Domain to decide if other business constraints are respected.
If the user expects some immediate feedback, you can actually 'wait' until the command is completed, and for that you can use an approach where a command handler provides a result to the sender using a mediator. But this implies that at least some commands are executed in an immediate manner, and that might not be the case in your app. However, this is the simplest approach if you want to just return an error message, as opposed to implementing compensations and other stuff. Some use cases are simple.
About command handlers and business logic, I disagree with Tomasz Jaskuλa. A command handler is a function, a technical detail. You can put business logic in a command handler or in a static function, it doesn't matter. Messages and their handlers are infrastructural components that can be used to implement functionality. For example, in an app you can have Domain Events, Application Events etc. They're all events, i.e. notifications that something changed, and you can have event handlers that reside in the Domain or in other places.
There's no rule preventing you from 'reading' from the db; however, the read model is at least theoretically stale. That might not be such a problem in 99% of cases. For the remaining 1% you need very specific solutions.
I just asked a knowledgeable friend of mine the exact same question, and his answer was that it is OK to do this validation inside the command handler (in my case inside an akka persistent actor that interprets commands and writes events to the journal).
However, if that is not possible for performance reasons (because processing validation inside a persistent actor would block the actor, and that would be a scalability bottleneck for the whole app), then optimistic concurrency control (OCC) can be used.
In other words, the validation can be performed in another actor (let's call it the validator actor), which is not the persistent actor. This will not block the persistent actor, but it might happen that the data used for the validation has changed in the persistent actor while the validation was running in the validator actor.
If the validator actor returns an OK, and all the data that was used for the validation is still the same in the persistent actor (has the same version - version as in OCC) as it was at the point when the command arrived at the command handler (the persistent actor), then the command is accepted by the persistent actor; otherwise the validation needs to be resubmitted to the validator actor for re-evaluation.
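Framework aside, the version check itself boils down to something like this (a plain sketch; the names are made up and the actor plumbing is omitted):

record ValidationResult(boolean ok, long validatedVersion) {}

class OptimisticallyValidatedHandler {

    private long currentVersion; // version of the state the validator looked at

    // Called when the external validator comes back with its verdict.
    // Returns true if the command can be accepted, false if it must be
    // rejected or resubmitted for re-validation.
    boolean tryAccept(ValidationResult result) {
        if (!result.ok()) {
            return false; // command rejected outright
        }
        if (result.validatedVersion() != currentVersion) {
            return false; // state changed meanwhile: resubmit to the validator
        }
        currentVersion++; // versions match: apply the command / persist the event
        return true;
    }
}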

Where do you perform your validation?

Hopefully you'll see the problem I'm describing in the scenario below. If it's not clear, please let me know.
You've got an application that's broken into three layers,
front-end UI layer, could be an ASP.NET WebForm or a Windows Form (used for editing Person data)
middle tier business service layer, compiled into a dll (PersonServices)
data access layer, compiled into a dll (PersonRepository)
In my front end, I want to create a new Person object, set some properties, such as FirstName, LastName according to what has been entered in the UI by a user, and call PersonServices.AddPerson, passing the newly created Person. (AddPerson doesn't have to be static, this is just for simplicity, in any case the AddPerson will eventually call the Repository's AddPerson, which will then persist the data.)
Now the part I'd like to hear your opinion on is validation. Somewhere along the line, that newly created Person needs to be validated. You can do it on the client side, which would be simple, but what if I wanted to validate the Person in my PersonServices.AddPerson method? This would ensure any Person I want to save is validated and removes any dependency on the UI layer doing the work. Or maybe validate both in the UI and in the business service layer. Sounds good so far, right?
So, for simplicity, I'll update the PersonService.AddPerson method to perform the following validation checks
- Check if FirstName and LastName are not empty
- Ensure this new Person doesn't already exist in my repository
And this method will return True if all validation passes and the Person is persisted, False if validation fails or the Person is not persisted.
But this Boolean value that AddPerson returns isn't enough for me at the UI layer to give the user a clear reason why the save process failed. So what's a lonely developer to do? Ultimately, I'd like the AddPerson method to be able to ensure what it's about to save is valid, and if not, be able to communicate the reasons why it's not valid to my UI layer.
Just to get your juices flowing, some ways of solving this could be: (Some of these solutions, in my opinion, suck, but I'm just putting them there so you get an understanding of what I'm trying to solve)
Instead of AddPerson returning a boolean, it can return an int (i.e. 0 = success, non-zero = failure, where the number indicates the reason why it failed).
In AddPerson, throw custom exceptions when validation fails. Each type of custom exception would have its own error message. In addition, each custom exception would be unique enough to catch in the UI layer
Have AddPerson return some sort of custom class that would have properties indicating whether validation passed or failed, and if it did fail, what were the reasons
Not sure if this can be done in VB or C#, but attach some sort of property to the Person and its underlying properties. This "attached" property could contain things like validation info
Insert your idea or pattern here
And maybe another here
Apologies for the long-winded question, but I definitely would like to hear your opinion on this.
Thanks!
Multiple layers of validation go well with multi-layer apps.
The UI itself can do the simplest and quickest checks (are all mandatory fields present, are they using the appropriate character sets, etc) to give immediate feedback when the user makes a typo.
However the business logic should have the lion's share of validation responsibilities... and for once it's not a problem if this is "repetitious", i.e., if the business layer re-checks something that should already have been checked in the UI -- the BL should check all the business rules (this double checks on UI's correctness, enables multiple different UI clients that may not all be perfect in their checks -- e.g. a special client on a smart phone which may not have good javascript, and so on -- and, a bit, wards against maliciously hacked clients).
When the business logic saves the "validated" data to the DB, that layer should perform its own checks -- DBs are good at that, and, again, don't worry about some repetition -- it's the DB's job to enforce data integrity (you might want different ways to feed data to it one day, e.g. a "bulk loader" to import a number of Persons from another source, and it's key to ensure that all those ways to load data always respect data integrity rules); some rules such as uniqueness and referential integrity are really best enforced in the DB, in particular, for performance reasons too.
When the DB returns an error message (data not inserted as constraint X would be violated) to the business layer, the latter's job is to reinterpret that error in business terms and feed the results to the UI to inform the user; and of course the BL must similarly provide clear and complete info on business rules violation to the UI, again for display to the user.
A "custom object" is thus clearly "the only way to go" (in some scenarios I'd just make that a JSON object, for example). Keeping the Person object around (to maintain its "validation problems" property) when the DB refused to persist it does not look like a sharp and simple technique, so I don't think much of that option; but if you need it (e.g. to enable "tell me again what was wrong" functionality, maybe if the client went away before the response was ready and needs to smoothly restart later; or, a list of such objects for later auditing, &c), then the "custom validation-failure object" could also be appended to that list... but that's a "secondary issue", the main thing is for the BL to respond to the UI with such an object (which could also be used to provide useful non-error info if the insertion did in fact succeed).
Just a quick (and hopefully helpful) comment: when you're wondering where to place validation, try pretending that, soon, you're going to completely recreate your UI layer using a technology you're not yet so familiar with**. Try to keep out of that layer any validation-like business logic that you know for certain you'd have to rewrite in the new technology.
You'll find exceptions - business logic that ends up in your UI layer regardless, but it's a useful consideration nonetheless.
** Mobile dev, Silverlight, Voice XML, whatever - pretending you don't know the technology of your "new" UI layer helps you abstract your concerns and get less mired in implementation details.
The only important points are:
From the perspective of the front-end(s), the Middle Tier must perform all validation; you never know whether someone is going to try circumventing your front-end validation by talking directly to your Middle Tier (for whatever reason)
The Middle Tier may elect to delegate some of that validation to the DB layer (e.g. data integrity constraints)
You may optionally duplicate some validation in the UI, but that should only be for the sake of performance (to avoid round-trips to the Middle Tier for common scenarios, such as missing mandatory fields, incorrectly formatted data, etc.) These checks should never take the place of doing them in the Middle Tier
Validation should be done at all three levels.
When I am in a project I assume I am making a framework, which most of the time is not the case. Each layer is separate and must check all input from other layers before doing an operation.
Each level can have a different way of doing it; it is not necessary that they all use the same mechanism, but ideally they should all use the same validation, with the ability to customize it.
You never want to let bad data into the database. So you can never trust the data you are getting from the business layer. It needs to be checked.
In the business layer you can never trust the UI layer, and you must check its input to prevent unneeded calls to the database layer. The UI layer works the same way.
I disagree with David Basarab's comment that the same validations should be present in all layers. For one, this defies the paradigm of layered responsibility. Secondly, though the main intention is to make the layers (or components) loosely coupled, it is also important that a level of responsibility (and hence trust) is endowed on the layers. Though it might be necessary to duplicate some validations in the UI and Business Layer (since the UI layer can be bypassed by hacking attempts), it is not advisable to repeat the validations in each layer. Each layer should perform only those validations it is responsible for. The biggest flaw in repeating validations in all layers is code redundancy, which can cause a maintenance nightmare.
A lot of this is more style than substance. I personally favor returning status objects as a flexible and extensible solution. I would say that there are a couple of classes of validation in play, the first being "does this person data conform to the contract of what a person is?" and the second being "does this person data violate constraints in the database?" I think the first validation can, and should, be done at the client. The second should be done at the middle tier. With this division, you may find that the only reasons the save could fail are 1) it violates a uniqueness constraint, or 2) something catastrophic. You could then return false for the first case, and throw an exception for the other.
If tier R is closer to the user (or any input stream you don't control) than tier S then tier S should validate all data received from tier R. This does not mean that tier R shouldn't validate data. It's better for the user if the GUI warns him he's making a mistake before he attempts a new transaction. But no matter how bulletproof the validation in your GUI is, the next tier up should not trust that any validation has taken place.
This assumes your database is completely under your control. If not, you have bigger problems.
Also, you could have the UI pass the data needed to build a Person object through some sort of PersonBuilder object, so that object creation is consolidated in the domain/business layer, and you can keep the Person object in a state that is always consistent. This makes more sense for more complex entities, however even for simple ones, it is good to centralize object creation, just like you centralize persistence, etc.
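A minimal sketch of that idea (Person and PersonBuilder follow the question's naming; the validation rules here are just the two from the example):

class Person {

    private final String firstName;
    private final String lastName;

    private Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    static PersonBuilder builder() {
        return new PersonBuilder();
    }

    static class PersonBuilder {

        private String firstName;
        private String lastName;

        PersonBuilder firstName(String firstName) {
            this.firstName = firstName;
            return this;
        }

        PersonBuilder lastName(String lastName) {
            this.lastName = lastName;
            return this;
        }

        // Validation happens here, so a Person can never exist in an invalid state.
        Person build() {
            if (firstName == null || firstName.isBlank()) {
                throw new IllegalArgumentException("FirstName must not be empty");
            }
            if (lastName == null || lastName.isBlank()) {
                throw new IllegalArgumentException("LastName must not be empty");
            }
            return new Person(firstName, lastName);
        }
    }
}

The repository-level check (does this Person already exist?) still belongs in the service/repository layer, since the builder cannot know about other persisted Persons.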

Resources