I have an application in which I need to validate several different things on the same object. The Chain of Responsibility design pattern came to mind, but the problem is that this pattern notifies the client as soon as one object in the chain fails, while I want to run through all the objects (validators) in the chain so that each of them returns a result (passed/failed plus an exception).
var validator = new Validator(dataObject);
validator.Validate();
Is this an acceptable use case, or is there a better way to do it?
Since your requirements say validation should not return early, this is less a chain than a list, which can simply be iterated over or processed in parallel.
Decorators can be used to combine validators, too.
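A minimal sketch of the list-of-validators idea, in TypeScript with invented names (`Validator`, `ValidationResult`, the example data object): every validator runs regardless of earlier failures, and the caller collects all results.

```typescript
// Hypothetical shapes -- names are illustrative, not from any library.
interface ValidationResult {
  passed: boolean;
  error?: string; // populated when passed is false
}

type Validator<T> = (target: T) => ValidationResult;

// Run every validator; nothing short-circuits, so the caller sees all failures.
function validateAll<T>(target: T, validators: Validator<T>[]): ValidationResult[] {
  return validators.map((validate) => validate(target));
}

// Example validators for an invented data object.
interface DataObject { name: string; age: number; }

const notEmptyName: Validator<DataObject> = (d) =>
  d.name.trim() !== "" ? { passed: true } : { passed: false, error: "name is empty" };

const nonNegativeAge: Validator<DataObject> = (d) =>
  d.age >= 0 ? { passed: true } : { passed: false, error: "age is negative" };

const results = validateAll({ name: "", age: -1 }, [notEmptyName, nonNegativeAge]);
// Both validators report a failure; neither stops the other from running.
```

The same list could be processed with `Promise.all` if individual validators were asynchronous.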
I have a Spring Boot application, and I want to decide which path to take in the code based on the request data. The request contains attributes such as the business process, which I always want to examine; based on it, I decide what kind of response will be built later.
I thought about using a Strategy pattern in combination with a Builder pattern.
On the other hand, a Chain of Responsibility with a Strategy Pattern would also make sense in my eyes.
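The Strategy part of that idea can be sketched like this (in TypeScript rather than Java, and with invented request/response types and strategy names): the business-process attribute selects a response-building strategy from a map.

```typescript
// Invented names for illustration; real request/response types would
// come from the application.
interface Request { businessProcess: string; payload: string; }
interface Response { body: string; }

type ResponseStrategy = (req: Request) => Response;

// One strategy per business process; each could internally use a builder.
const strategies: Record<string, ResponseStrategy> = {
  order:  (req) => ({ body: `order response for ${req.payload}` }),
  refund: (req) => ({ body: `refund response for ${req.payload}` }),
};

function handle(req: Request): Response {
  const strategy = strategies[req.businessProcess];
  if (!strategy) throw new Error(`no strategy for ${req.businessProcess}`);
  return strategy(req);
}

const res = handle({ businessProcess: "order", payload: "#42" });
```

In Spring the map lookup would typically be replaced by injecting all beans implementing a strategy interface and selecting one by a key method.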
I'm new to Redux-Saga, so please assume very shaky foundational knowledge.
In Redux, I am able to define an action and a subsequent reducer to handle that action. In my reducer, I can do just about whatever I want, such as 'delete all' of a specific state tree node, e.g.
switch (action.type) {
  ...
  case 'DESTROY_ALL_ORDERS':
    return {
      ...state,
      orders: []
    };
}
However, it seems to me (after reading the docs) that reducers are defined by Saga, and you have access to them in the form of certain given CRUD verb prefixes with invocation postfixes, e.g.
fetchStart, destroyStart
My instinct is to use destroyStart, but the method accepts a model instance, not a collection, i.e. it only can destroy a given resource instance (in my case, one Order).
TL;DR
Is there a destroyStart equivalent for a group of records at once?
If not, is there a way I can add custom behavior to the Saga-created reducers?
What have I missed? Feel free to be as mean as you want; I have no idea what I'm doing, but when you are done roasting me, do me a favor and point me in the right direction.
EDIT:
To clarify, I'm not trying to delete records from my database. I only want to clear the Redux store of all 'Order' Records.
Two key bits of knowledge were gained here.
My team is using a library called redux-api-resources which to some extent I was conflating with Saga. This library was created by a former employee, and adds about as much complexity as it removes. I would not recommend it. DestroyStart is provided by this library, and not specifically related to Saga. However the answer for anyone using this library (redux-api-resources) is no, there is no bulk destroy action.
Reducers are created by Saga, as pointed out in the above comments by @Chad S. The mistake in my thinking was that I believed I should somehow crack open this reducer and fill it with complex logic. The 'Saga' way to do this is to put the logic in your generator function, which is where you (can) define your control flow. I make no claim that this is best practice, only that this is how I managed to get my code working.
I know very little about Saga and Redux in general, so please take these answers with a grain of salt.
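For the narrow goal in the question, clearing all 'Order' records from the store, a hand-written reducer case with no library involved is enough. This is a plain-Redux-style sketch with an invented state shape; nothing here touches a database.

```typescript
// Invented state shape purely for illustration.
interface Order { id: number; }
interface State { orders: Order[]; users: string[]; }
interface Action { type: string; }

const initialState: State = { orders: [{ id: 1 }, { id: 2 }], users: ["ann"] };

function reducer(state: State = initialState, action: Action): State {
  switch (action.type) {
    case "DESTROY_ALL_ORDERS":
      // Only the in-memory store changes; other branches are untouched.
      return { ...state, orders: [] };
    default:
      return state;
  }
}

const next = reducer(initialState, { type: "DESTROY_ALL_ORDERS" });
```

A saga would typically `put` this action after whatever asynchronous work it coordinates; the reducer itself stays this simple.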
I was thinking about patterns which allow me to return both a computation result and a status.
There are a few approaches I could think of:
The function returns the computation result, and the status is returned via an out parameter (not all languages support out parameters, and this seems wrong, since in general you don't expect parameters to be modified).
The function returns an object/pair containing both values (the downside is that you have to create an artificial class just to hold the function result, or use a pair which has no semantic meaning: you know which value is which only by its order).
If your status is just success/failure, you can return the computation value and throw an exception in case of error (this looks like the best approach, but it works only in a success/failure scenario and shouldn't be abused for controlling normal program flow).
The function returns a value, and its arguments are delegates to onSuccess/onFailure procedures.
There is a (stateful) method class which has a status field and a method returning the computation result (I prefer stateless/immutable objects).
Please give me some hints on the pros, cons, and preconditions of the aforementioned approaches, or show me other patterns I could use (preferably with hints on when to use them).
EDIT:
Real-world example:
I am developing a Java EE web application, and I have a class that resolves request parameters, converting them from strings to business logic objects. The resolver checks in the DB whether the object is being created or edited, and then returns to the controller either a new object or an object fetched from the DB. The controller takes action based on the object's status (new/editing) read from the resolver. I know this is bad, and I would like to improve the code design here.
The function returns the computation result, and the status is returned via an out parameter (not all languages support out parameters, and this seems wrong, since in general you don't expect parameters to be modified).
If the language supports multiple output values, then the language was clearly made to support them. It would be a shame not to use them (unless there are strong opinions against them in that particular community; this could be the case for languages that try to do everything).
The function returns an object/pair containing both values (the downside is that you have to create an artificial class just to hold the function result, or use a pair which has no semantic meaning: you know which value is which only by its order).
I don't know about that downside. It seems to me that a record or class called "MyMethodResult" has enough semantics by itself. You can always use such a class in an exception as well, if you are in an exceptional condition only, of course. Creating some kind of array/union/pair would be less acceptable in my opinion: you would inevitably lose information somewhere.
If your status is just success/failure, you can return the computation value and throw an exception in case of error (this looks like the best approach, but it works only in a success/failure scenario and shouldn't be abused for controlling normal program flow).
No! This is the worst approach. Exceptions should be used for exactly that, exceptional circumstances. If not, they will halt debuggers, put colleagues on the wrong foot, harm performance, fill your logging system and bugger up your unit tests. If you create a method to test something, then the test should return a status, not an exception: to the implementation, returning a negative is not exceptional.
Of course, if you run out of bytes from a file during parsing, sure, throw the exception, but don't throw it if the input is incorrect and your method is called checkFile.
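That distinction can be sketched as follows (invented names, TypeScript): a query-style check returns a status as a normal value, while a parser that hits genuinely broken input throws.

```typescript
// checkText answers a question, so a negative answer is a normal
// return value, not an exception.
function checkText(input: string): boolean {
  return input.trim().length > 0;
}

// parseNumber is expected to succeed; malformed input is exceptional.
function parseNumber(input: string): number {
  const n = Number(input);
  if (Number.isNaN(n)) throw new Error(`not a number: ${input}`);
  return n;
}

const ok = checkText("   "); // false -- no exception, just a status
let failed = false;
try {
  parseNumber("abc");        // exceptional: the caller asked for a number
} catch {
  failed = true;
}
```

The caller of `checkText` can branch on the boolean without any try/catch ceremony, which is exactly the point being made above.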
The function returns a value, and its arguments are delegates to onSuccess/onFailure procedures.
I would only use those if you have multiple results to share. It's way more complex than the class/record approach, and more difficult to maintain. I've used this approach to return multiple results when I didn't know whether the results would be ignored or whether the user wanted to continue. In Java you would use a listener. This kind of operation is probably more accepted in functional languages.
There is a (stateful) method class which has a status field and a method returning the computation result (I prefer stateless/immutable objects).
Yes, I prefer those too. There are producers of results and the results themselves. There is little need to combine the two and create a stateful object.
In the end, you want to go to producer.produceFrom(x): Result in my opinion. This is either option 1 or 2a, if I'm counting correctly. And yes, for 2a, this means writing some extra code.
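A result record in that style can be as light as this sketch (the record and function names are invented):

```typescript
// A named record gives the two values semantic labels,
// unlike an anonymous pair whose meaning lives only in argument order.
interface DivisionResult {
  ok: boolean;
  value?: number; // present when ok is true
  error?: string; // present when ok is false
}

function divide(a: number, b: number): DivisionResult {
  if (b === 0) return { ok: false, error: "division by zero" };
  return { ok: true, value: a / b };
}

const good = divide(10, 4);
const bad = divide(1, 0);
```

The caller checks `ok` once and then reads either `value` or `error`; no out parameter is mutated and no exception is used for an expected outcome.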
My inclination would be to either use out parameters or else use an "open-field" struct, which simply contains public fields and specifies that its purpose is simply to carry the values of those fields. While some people suggest that everything should be "encapsulated", I would suggest that if a computation naturally yields two double values called the Moe and Larry coefficients, specifying that the function should return "a plain-old-data struct with fields of type double called MoeCoefficient and LarryCoefficient" would serve to completely define the behavior of the struct. Although the struct would have to be declared as a data type outside the method that performs the computation, having its contents exposed as public fields would make clear that none of the semantics associated with those values are contained in the struct--they're all contained in the method that returns it.
Some people would argue that the struct should be immutable, or that it should include validation logic in its constructor, etc. I would suggest the opposite. If the purpose of the structure is to allow a method to return a group of values, it should be the responsibility of that method to ensure that it puts the proper values into the structure. Further, while there's nothing wrong with a structure exposing a constructor as a "convenience member", simply having the code that will return the struct fill in the fields individually may be faster and clearer than calling a constructor, especially if the value to be stored in one field depends upon the value stored to the other.
If a struct simply exposes its fields publicly, then the semantics are very clear: MoeCoefficient contains the last value that was written to MoeCoefficient, and LarryCoefficient contains the last value written to LarryCoefficient. The meaning of those values would be entirely up to whatever code writes them. Hiding the fields behind properties obscures that relationship, and may impede performance as well.
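The open-field approach with the Moe/Larry example from above might look like this sketch; in TypeScript a plain object type plays the role of the plain-old-data struct, and the formulas are invented purely for illustration.

```typescript
// Plain-old-data carrier: public fields, no behavior, no validation.
interface Coefficients {
  MoeCoefficient: number;
  LarryCoefficient: number;
}

// The method performing the computation is the one responsible for
// putting proper values into the structure; the struct itself carries
// no semantics beyond "the last value written to each field".
function computeCoefficients(samples: number[]): Coefficients {
  const sum = samples.reduce((a, b) => a + b, 0);
  const mean = sum / samples.length;
  // Invented formulas -- the point is the shape of the return, not the math.
  return { MoeCoefficient: mean, LarryCoefficient: sum - mean };
}

const c = computeCoefficients([1, 2, 3]);
```

Note the fields are filled in a single literal here; as argued above, nothing stops the computing function from assigning them one by one when one value depends on the other.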
I have a Polling system. I want to be able to calculate results based on filtered subsets of votes. To do this, I'm allowing the caller to pass a subquery to my model, which is then used to select only a subset of the votes when calculating results.
The catch is that I want to cache my results, and queries are flexible to the point of being horrible keys in a cache hash. This is where I come to you: How should I pass filters into my result methods in order to strike the best balance between good code practice, reliable caching (i.e. the coder can easily understand when things are being cached), and filter flexibility.
Option 1: Suck it up and live with query hashes
Pass the $voteFilter into each method so that the methods look something like this:
class Poll {
getResults($voteFilter) {...} // Returns the poll results for passed filter
getWinner($voteFilter) {...} // Returns the winning result for passed filter
isTie($voteFilter) {...} // Returns tie status for passed filter
}
The methods would check their caches, and if that filter query had been used before, they would reuse those results. This is risky because the same result set can be generated by slightly different queries (e.g. when the order of the clauses in a commutative logical expression is swapped). I feel this means a coder could accidentally miss the cache when he/she intends to hit it.
This also feels like I'm passing a lot back and forth when I don't need to -- presumably the coder will be working with a single filter set across all of the result methods (When I want the results and the winner and the tie status, it will probably be for the same vote filter at any given moment)
Option 2: Set the filter using a separate class method
Pass the $voteFilter that is currently being used via a setVoteFilter() method, starting a result session. This would reset the cache each time it is called and would dictate the filter used in the result methods until the next time setVoteFilter is called, looking like this:
class Poll {
setVoteFilter($voteFilter) {...} // Clears cache and sets vote filter
getResults() {...} // Returns the poll results for current filter
getWinner() {...} // Returns the winning result for current filter
isTie() {...} // Returns tie status for current filter
}
This option feels more elegant and I like it, but I don't know if it is bad form and the kind of thing I'll look at in two months and say, "This is horrible. Why would I have made methods without explicit parameters?"
Option 3: Define a more rigid filtering technique
If I limit the ways I can filter I can create filter parameters which have no room for confusion, solving the ambiguity issues in Option 1. This limits flexibility and could result in a less understandable API. I like this choice least but wanted to throw it out here for consideration in case someone has a strong thought.
Anyone have insight? Other options?
Imagine that Poll is immutable with a signature like:
class Poll {
withFilter(filter)
getFilter(...)
getResults(...)
getWinner(...)
}
Now, withFilter returns a Poll object that has the given filter applied (perhaps cumulatively; however, as you note, care must be taken -- e.g. an AST or context must be handled, or the filter must fall into a certain restricted class). The object returned can be a new Poll object or a cached Poll object; if Poll is immutable, it doesn't matter. If the cache is maintained entirely through reachability, then this may also handle "cleanup", but that really depends on the language.
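A sketch of that immutable-Poll idea (invented vote and filter types; the filter is deliberately restricted to a single field so the cache key stays unambiguous, side-stepping the query-hash problem from the question):

```typescript
interface Vote { choice: string; region: string; }
interface Filter { region: string; }

class Poll {
  // Cache of filtered Polls, keyed by the (restricted) filter.
  private filtered = new Map<string, Poll>();

  constructor(private readonly votes: Vote[]) {}

  // Returns a Poll with the filter applied. Because Poll is immutable,
  // handing back a cached instance is indistinguishable from a new one.
  withFilter(filter: Filter): Poll {
    const key = filter.region; // unambiguous only because the filter shape is restricted
    let poll = this.filtered.get(key);
    if (!poll) {
      poll = new Poll(this.votes.filter((v) => v.region === filter.region));
      this.filtered.set(key, poll);
    }
    return poll;
  }

  getResults(): Record<string, number> {
    const tally: Record<string, number> = {};
    for (const v of this.votes) tally[v.choice] = (tally[v.choice] ?? 0) + 1;
    return tally;
  }
}

const poll = new Poll([
  { choice: "a", region: "north" },
  { choice: "b", region: "north" },
  { choice: "a", region: "south" },
]);

const north = poll.withFilter({ region: "north" });
const sameNorth = poll.withFilter({ region: "north" }); // cached instance
```

With an arbitrary query-style filter, `key` would instead have to be a canonical serialization of the filter's AST, which is exactly the care-must-be-taken caveat above.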
Happy coding.
I often see people validating domain objects by creating rule objects which take in a delegate to perform the validation, such as in this example: http://www.codeproject.com/KB/cs/DelegateBusinessObjects.aspx
What I don't understand is how this is advantageous compared to just writing a method.
For example, in that particular article there is a method which creates delegates to check if the string is empty.
But is that not the same as simply having something like:
bool Validate()
{
    // Valid when the name is present.
    return !string.IsNullOrEmpty(name);
}
Why go through the trouble of making an object to hold the rule and defining the rule in a delegate when these rules are context-sensitive and will likely not be shared? The exact same thing can be achieved with methods.
There are several reasons:
SRP - Single Responsibility Principle. An object should not be responsible for its own validation; it has its own responsibility and reasons to exist.
Additionally, when it comes to complex business rules, having them explicitly stated makes validation code easier to write and understand.
Business rules also tend to change quite a lot, more so than other domain objects, so separating them out helps with isolating the changes.
The example you have posted is too simple to benefit from a fully fledged validation object, but validation objects become very handy once systems get large and validation rules become complex.
The obvious example here is a webapp: You fill in a form and click "submit". Some of your data is wrong. What happens?
Something throws an exception. Something (probably higher up) catches the exception and prints it (maybe you only catch UserInputInvalidExceptions, on the assumption that other exceptions should just be logged). You see the first thing that was wrong.
You write a validate() function. It says "no". What do you display to the user?
You write a validate() function which returns (or throws an exception with, or appends to) a list of messages. You display the messages... but wouldn't it be nice to group by field? Or to display it beside the field that was wrong? Do you use a list of tuple or a tuple of lists? How many lines do you want a rule to take up?
Encapsulating rules into an object lets you easily iterate over the rules and return the rules that were broken. You don't have to write boilerplate append-message-to-list code for every rule. You can stick broken rules next to the field that broke them.
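That iterate-over-rules idea might look like this sketch (names invented): each rule carries its field, message, and predicate, so one generic loop finds every broken rule and the result can be grouped by field for display.

```typescript
// A rule object: the data needed to both check and report a violation.
interface Rule<T> {
  field: string;
  message: string;
  isSatisfied: (target: T) => boolean;
}

// Invented form shape for illustration.
interface FormData { name: string; email: string; }

const rules: Rule<FormData>[] = [
  { field: "name",  message: "Name is required",
    isSatisfied: (f) => f.name.trim() !== "" },
  { field: "email", message: "Email must contain @",
    isSatisfied: (f) => f.email.includes("@") },
];

// One generic loop replaces per-rule append-message-to-list boilerplate.
function brokenRules<T>(target: T, rules: Rule<T>[]): Rule<T>[] {
  return rules.filter((r) => !r.isSatisfied(target));
}

const broken = brokenRules({ name: "", email: "foo" }, rules);
// broken carries the field names, so the UI can render each message
// next to the field that produced it.
```

A plain `validate()` method returning a boolean cannot support this kind of per-field reporting without growing its own ad-hoc version of the same structure.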