What exceptions in ASP.NET 5 could/should be considered "permanent" vs. "non-permanent"? - .net-5

If an event handler that updates a projection in the read model throws an exception, that is a bad thing.
In this case I would like to differentiate between "permanent" and "non-permanent" exceptions.
By "permanent" exceptions I mean exceptions that are most likely caused by wrong code and will be thrown again and again if I retry handling the event with this event handler.
By "non-permanent" exceptions I mean "temporary" ones, e.g. IO/network-related exceptions that are not caused by wrong code and which make sense to retry until the event is eventually handled successfully.
While I can come up with examples I would consider to be one or the other (like InvalidOperationException or IOException), is there any list or recommendation for which exceptions (in this case in the ASP.NET 5 stack) should be considered which?

One option is to add a retry or circuit-breaker policy using Polly that automatically retries all exceptions (per read model) and, after X retries, either disables the specific read projection that failed or the entire read model.
The question is what you want to happen when the read model can't be updated. Do you:
Shut down the specific projection and allow the rest to be updated?
Shut down the entire read model?
Or enter an "I am out of sync" mode?
I think this all depends on how you have constructed your read model and what your business case is.
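Polly itself is .NET-specific, but the classify-and-retry idea is small enough to sketch. Below is a minimal, language-neutral illustration written in Java to match the code later on this page; the EventApplication interface, the isTransient() classification and the retry count are all hypothetical, not part of any framework:

// Sketch of "retry transient failures, disable the projection on permanent ones".
// All names here are hypothetical.
@FunctionalInterface
interface EventApplication {
    void apply() throws Exception;
}

final class RetryingProjectionUpdater {

    private static final int MAX_RETRIES = 5;

    // Treat IO/timeout-style problems as transient ("non-permanent"); everything else as permanent.
    private static boolean isTransient(Exception e) {
        return e instanceof java.io.IOException
                || e instanceof java.util.concurrent.TimeoutException;
    }

    void handle(EventApplication applyEventToProjection, Runnable disableProjection)
            throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                applyEventToProjection.apply();
                return; // handled successfully
            } catch (Exception e) {
                if (!isTransient(e)) {
                    disableProjection.run(); // "permanent": retrying will not help
                    return;
                }
                Thread.sleep(100L * attempt); // "non-permanent": back off and retry
            }
        }
        disableProjection.run(); // transient retries exhausted: take the projection offline
    }
}

After the retries are exhausted, the failing projection (or the whole read model) is taken offline, which corresponds to the "disable" options listed above.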

Related

DDD dealing with Eventual consistency for multiple aggregates inside a bounded context with NoSQL

I am currently working on a DDD geolocation application that has two separate aggregate roots inside one bounded context. Due to frequent coordinate updates I am using Redis to persist my data, which doesn't allow rollbacks.
My first aggregate root is a trip object containing driver (users), passengers (list of users), etc.
My second aggregate root is user position updates
When a coordinate update is sent I will generate and fire an "UpdateUserPostionEvent". As a side effect I will also generate and fire an "UpdateTripEvent" at a certain point, which will update the coordinates of drivers/passengers.
My question is: how can I deal with eventual consistency if I am firing my "UpdateLiveTripEvent" asynchronously? My UpdateLiveTripEventHandler has several points of failure, and besides logging an error, how can I deal with this inconsistency?
I am using a library called MediatR and its INotificationHandler, which as far as I know is "fire and forget".
Edit: I ended up finding this SO post that describes exactly what I need (saga/process manager), but unfortunately I am unable to find any kind of saga implementation for handling events within the same BC. All the examples I am seeing involve a service bus.
Same or different Bounded Context; with or without Sagas; it does not matter.
Why would event handling fail? Domain rules or infrastructure.
Domain rules:
A raised event handled by an aggregate (the event handler uses the aggregate to apply the event) should NEVER fail because of domain rules.
If the "target" aggregate has domain rules that reject the event, your aggregate design is wrong. Commands/operations can be rejected by domain rules; events cannot be rejected (nor undone) by domain rules.
An event should be raised only after all domain rules for the operation have been checked by the "origin" aggregate. The "target" aggregate applies the event and maybe raises another event with some values it calculates (domain rules again, but not to reject the event; events cannot be rejected by domain rules; rather, to "continue" the consistency "chain" with good responsibility segregation). That is why event names should be phrased in the past tense: they have already happened.
Event simulation:
Agg1: Hey buddies! User did this cool thing and everything seems to be OK. --> UserDidThisCoolThingEvent
Agg2: Whoa, that is awesome! I'm gonna put +3 in this User's points. --> UserReceivedSomePointsEvent
Agg3: +3 points for this user? The user just reached 100 points. That is a lot! I'm gonna convert this User into a VIP User. --> UserTurnedIntoVIPEvent
Agg4: A new VIP User? Let's notify the rest of the Users to create some envy ;)
Infrastructure:
Fix it and apply the event. ;) Even "by hand" if needed once your persistence engine, network and/or machine is up again.
Automatic retries for short-lived failures. Error queues/logs so you do not lose your events (and can apply them later) during a longer outage.
Event sourcing also helps with this because you can always reapply the persisted events to the "target" aggregate, without extra effort to keep events somewhere else (i.e. event logs), because your domain persistence is also your event store.
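As a tiny sketch of that last point (all types below are hypothetical; the only point is that the persisted event stream doubles as the input for rebuilding or re-updating the "target" aggregate):

// Hypothetical sketch: rebuilding an aggregate by replaying its persisted events.
interface DomainEvent { }

interface EventStore {
    java.util.List<DomainEvent> eventsFor(String aggregateId);
}

final class TripAggregate {

    void apply(DomainEvent event) {
        // Mutate in-memory state according to the event; events are never rejected here.
    }

    static TripAggregate rehydrate(String aggregateId, EventStore store) {
        TripAggregate trip = new TripAggregate();
        for (DomainEvent event : store.eventsFor(aggregateId)) {
            trip.apply(event);
        }
        return trip;
    }
}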

Error Handler with Flux

I have a React.js application that I am refactoring to use the Flux architecture, and am struggling to figure out how error handling should work while sticking to the Flux pattern.
Currently when errors are encountered, a jQuery event 'AppError' is triggered, and a generic error-handling helper that subscribes to this event puts a flash message on the user's screen, logs to the console, and reports the error via an API call. What is nice is that I can trigger an error for any reason from any part of the application and have it handled in a consistent way.
I can't seem to figure out how to apply a similar paradigm with the Flux architecture. Here are the two particular scenarios I'm struggling with.
1) An API call fails
All of my API calls are made from action creators and I use a promise to dispatch an error event (e.g. 'LOAD_TODOS_FAILED') on failure. The store sees this event and updates its state accordingly, but I still don't have my generic error behavior from the previous iteration (notifications, etc.).
Possible resolution:
I could create an ErrorStore that binds to the 'LOAD_TODOS_FAILED' action, but that means every time I have a new type of error, I need to explicitly add that action to the ErrorStore, instead of having all errors be automatically handled.
2) Store receives an unexpected action
This is the one I'm really confused about. I want to handle cases where an action is dispatched to a Store that does not make sense given the Store's current state. I can handle the error within the Store to clean up the state, but may still want to signal that something unexpected happened.
Possible resolutions:
Dispatch a new action from the store indicating the error.
I believe Stores are not supposed to dispatch actions (let me know if I'm wrong), and I still have the same issue as with an API error above.
Create a ControllerView for Error Handling that subscribes to every Store
I could define an errors property on every store, then have a View watching every Store and only acting on the errors property. When the errors property is not null, it could dispatch new actions, etc. The disadvantages are that I need to remember to add every Store to this view whenever new ones are created, and every Store has to have an errors property that behaves the same way. It also does nothing to address API call failures.
Does anyone have a suggested approach for a generic Error Handler that fits into the Flux architecture?
TL;DR
I need to handle errors in most Action Creators and Stores. How do I set up consistent error handling that will occur for any type of generic error?
API call fails
If you want to avoid listing every error action in the ErrorStore, you could have a generic APP_ERROR action, and have properties of that action that describe it in more detail. Then your other stores would simply need to examine those properties to see if the action is relevant to them. There is no rule that the registered callback in the stores needs to be focused on the action's type, or only on the type -- it's just often the most convenient and consistent way of determining if an action is relevant.
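Flux is normally implemented in JavaScript; to keep a single language for the examples on this page, here is a rough Java-flavoured sketch of the idea that a store's registered callback can key off an action's payload rather than (only) its type. Every name below is hypothetical:

// Hypothetical sketch: one generic APP_ERROR action carrying details, and an
// ErrorStore whose callback inspects the action's payload rather than its type alone.
import java.util.List;
import java.util.Map;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

final class Action {
    final String type;                 // e.g. "APP_ERROR"
    final Map<String, Object> payload; // e.g. {"source": "LOAD_TODOS", "message": "..."}
    Action(String type, Map<String, Object> payload) {
        this.type = type;
        this.payload = payload;
    }
}

final class Dispatcher {
    private final List<Consumer<Action>> callbacks = new CopyOnWriteArrayList<>();
    void register(Consumer<Action> callback) { callbacks.add(callback); }
    void dispatch(Action action) { callbacks.forEach(cb -> cb.accept(action)); }
}

final class ErrorStore {
    ErrorStore(Dispatcher dispatcher) {
        dispatcher.register(action -> {
            // Every failure, from any action creator, funnels through APP_ERROR;
            // the store inspects the payload to decide how to notify/log/report.
            if ("APP_ERROR".equals(action.type)) {
                System.err.println("error from " + action.payload.get("source")
                        + ": " + action.payload.get("message"));
            }
        });
    }
}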
Store receives an unexpected action
Don't issue a new action in response to an action. This results in a dispatch-within-a-dispatch error, and would lead to cascading updates. Instead, determine what action should be dispatched ahead of time. You can query the stores before issuing an action, if that helps.
Your second solution sounds good, but the dangerous thing you mentioned is "When the errors property is not null, it could dispatch new actions, etc" -- again, you don't want to issue actions in response to other actions. That is the path of pain that Flux seeks to avoid. Your new controller-view would simply get the values from the stores and respond by presenting the correct view.

Boost MSM library newbie in firing events

When we call fsm.process_event('eventname');
is there a way to return true if the transition occurred and false if "no_transition" was called or an exception occurred?
Thanks
Seeing as no one has answered so far, I'll post my quite humble suggestion. You could try calling the current_state() method before and after calling fsm.process_event() and comparing the results. This, however, would not cover the case of self-transitions or internal transitions, and is not something I would use if there are other alternatives (it's a hack at best).
If you are trying to catch the case of an event not being handled by any state and just propagating through, you could add one more bottom-layer superstate which reports events that reach it (i.e. events that were ignored by all the states they propagated through).
I have had situations where I needed to know whether some event actually did something and when it did it (maybe it was deferred first and then executed). In that case I made my MSM post "ACK" messages to an outside queue; I'm not sure if this applies to your problem.
In my humble knowledge, interrupts and state machines don't mix very well; I usually either simply swallow them or try to turn them into some event, depending on the context. You should never allow your states (the underlying function objects) to throw.

Validate object existence before deletion or update in Spring/Hibernate

I'm using Spring/Hibernate combination for my project, with standard CRUD operations.
I wonder, is it reasonable to validate an object's existence before its deletion or update? If it is, where is the most appropriate place to do so - the service or the DAO layer?
EDIT:
Sorry, I didn't make the point clear at first when I asked this question. The only motive for the existence check is to throw a friendly exception to the client of the service (nothing DAO-specific).
The problem is that I 'have to do' existence checking because my service method below is transactional, and besides that, I'm using the HibernateTemplate and DaoSupport helper classes for session manipulation in my DAO objects.
Because of that, the Hibernate exception (in the case of deleting a non-existent instance, for example) is thrown at commit time, which is out of my reach, because (I suppose) the commit is executed by the PlatformTransactionManager in a proxy object, and I have no opportunity to handle that exception in my service method and re-throw some friendly exception to the client.
But even if I keep my strategy of checking existence before deletion, the bad part is that I run into a NonUniqueObjectException in the case where the instance does exist, because at delete time I re-attach an instance that was already loaded at read time for the existence check.
For example:
// Existence checking in this example is not so important
public void delete(Employee emp) {
    Employee tempEmp = employeeDao.read(emp.getId());
    if (tempEmp == null)
        throw new SomeAppSpecificException();
    // deleting the detached 'emp' while 'tempEmp' is already in the session
    // is what triggers the NonUniqueObjectException mentioned above
    employeeDao.delete(emp);
}

// Existence checking in this example is maybe 'more logical'
public void save(Assignment a) {
    Task t = taskDao.read(a.getTask().getId());
    if (t == null)
        throw new FriendlyForeignKeyConstraintException();
    Employee e = employeeDao.read(a.getEmployee().getId());
    if (e == null)
        throw new EmployeeNotFoundExc();
    // ...some more integrity checking and similar...
    assignmentDao.save(a);
}
The point is that I just want to throw a friendly exception with an appropriate message in the case of the mentioned situations (integrity violations and similar).
In Hibernate terms, both update and delete operations (and the corresponding methods on Session) deal with persisted entities. Their existence is therefore implied, and verifying it again in your code is somewhat pointless - Hibernate will do that on its own and throw an exception if that's not the case. You can then catch and handle (or rethrow) that exception.
On a side note (based on your sample code), it's not necessary to explicitly read the instance you're going to delete. Session provides a load() method that returns a proxy instance suitable for passing to delete() without actually hitting the database. It assumes that the instance associated with the given PK exists and will fail (throw an exception) later if that's not the case.
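For illustration, a minimal sketch of that side note, assuming a plain Hibernate Session is at hand (the session field and the Long id parameter are assumptions, not taken from the question's code):

// Delete without a prior SELECT: load() returns an uninitialized proxy and Hibernate
// assumes the row exists; it will throw later (at flush/commit time) if it does not.
public void delete(Long employeeId) {
    Employee proxy = (Employee) session.load(Employee.class, employeeId);
    session.delete(proxy); // no database hit until flush
}

By contrast, get() does hit the database and returns null when the row is missing, which is the method to use if you want to detect absence yourself and throw your friendly exception up front.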
Edit (based on question clarification):
When you say you want to throw "friendly" exception to the client, the definitions of "friendly" and "client" become important. In most real-life scenarios your transaction would span across more than a simple atomic "save()" or "delete()" method on one of your services.
If the client is local and you don't need separate transactions within a single client interaction (typical scenario: a web app running in the same VM as the service layer), it's usually a good idea to initiate/commit the transaction there (see Open Session In View, for example). You can then catch and properly handle (including wrapping/re-throwing, if needed) exceptions during commit. Other scenarios are more complicated, but ultimately the exception will be propagated to your "top level" client; it's just that unwrapping it may prove complicated if you need to present the "root" cause to the client in a "friendly" way.
The bottom line, though, is that it's up to you. If you'd rather fail fast with your own exception (at the expense of writing some boilerplate code), there's nothing really wrong with that.
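If you do want to fail fast and still catch the commit-time exception yourself, one option is a programmatic transaction, because the commit then happens inside your own method. A rough sketch, assuming Spring's TransactionTemplate and re-using the question's own SomeAppSpecificException (the delete-by-id signature on the DAO is an assumption):

// Commit happens inside execute(), so a commit-time Hibernate/Spring exception can be
// caught right here and translated into a "friendly" application exception.
public void deleteEmployee(Long id) {
    try {
        transactionTemplate.execute(status -> {
            employeeDao.delete(id); // assumed delete-by-id signature
            return null;
        });
    } catch (org.springframework.dao.DataAccessException e) {
        throw new SomeAppSpecificException();
    }
}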

I Need an Analogy: Triggers and Events

For another question, I'm running into a misconception that seems to arise here at SO occasionally. Some questioners seem to think that Triggers are to Databases as Events are to OOP.
Does anyone have a good analogy to explain why this is a flawed comparison, and the consequences of misapplying it?
EDIT:
Bill K. has hit it correctly, but maybe doesn't see the importance of the critical difference between the event and the callback function that strikes me, anyway. Triggers actually cause code to execute every time the event occurs; callbacks only occur when one has been registered for an event (which is not true for the vast majority of events); and even then, in most cases the callback's first action is to deregister itself (or at least the callback contains a qualification exit so it only executes once).
If you write a trigger, it will unfailingly execute every time the event occurs, because there's no way to register or deregister the code segment.
Triggers are a way to interpose repeating logic synchronously into the thread of execution (i.e. synchronicity). Events are a means to defer logic until later (i.e. implement asynchronicity).
There are exceptions and mitigations in both cases, but the basic patterns of triggers and callbacks are mostly opposite in intention and implementation. Often the distinction doesn't seem to have fully sunk in. (IMHO, YMMV). :D
They're not the same thing, but they're not unrelated.
In both cases, the mechanism can be described approximately as follows:
Some block of code declares "interest" for changes in state.
Your application affects some change.
The system runs the block of code in response to the change.
Perhaps a database trigger is more like a callback function that has registered interest in a specific event.
Here's an analogy: the event is a rubber ball that you throw. The trigger is a dog that chases after a thrown ball.
If there's some other difference that you have in mind that makes it "dangerous" (note: OP has edited this choice of word out of the question) to compare triggers and events, you can describe what you mean.
Triggers are a way to interpose repeating logic synchronously into the thread of execution (i.e. synchronicity). Events are a means to defer logic until later (i.e. implement asynchronicity).
Okay, I see what you mean more clearly. But I think it's in some ways subject to the implementation. I wouldn't assume an event handler has to deregister itself; it depends on the system you're using. A UNIX signal handler, for example, has to prevent itself from catching a new signal while it's already handling one. But a Java servlet inside a Tomcat container should be thread-safe because it may be called concurrently by multiple threads. They're both event handlers, of different kinds.
Event handlers may be synchronous or asynchronous. Can a handler in a publish/subscribe system read messages that were posted recently, but prior to the handler registering its interest? Or only messages posted concurrently?
There's another important reason to treat triggers as different from event handlers: I frequently recommend against doing anything in a trigger that affects state outside the database.
For example, sending an email, writing to a file, posting to a web service, or forking a process is inappropriate inside a trigger, if for no other reason than that the transaction that spawned the trigger may be rolled back, and you can't roll back those external effects. You may not even be using explicit transactions; but suppose you send an email in a BEFORE trigger, and the operation then fails because of a NOT NULL constraint or something similar.
Instead, all such work should be done by code in one's application, after one has confirmed that the SQL operation was successful and the transaction committed.
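A rough sketch of that ordering with plain JDBC (dataSource, customerId and the sendEmail() helper are all hypothetical):

// Do the SQL work in a transaction first; only after a successful commit do the
// external, non-rollbackable work (email, file, web service call).
try (java.sql.Connection conn = dataSource.getConnection()) {
    conn.setAutoCommit(false);
    try (java.sql.PreparedStatement ps =
             conn.prepareStatement("INSERT INTO orders (customer_id) VALUES (?)")) {
        ps.setLong(1, customerId);
        ps.executeUpdate();
        conn.commit();
    } catch (java.sql.SQLException e) {
        conn.rollback(); // nothing external has happened yet, so there is nothing to undo
        throw e;
    }
}
// Only now, after the commit succeeded, trigger the external side effect.
sendEmail(customerId, "Your order was recorded.");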
It's too bad that people keep trying to do inappropriate work inside a trigger. There are senior developers at MySQL who promote UDFs to read and write data in memcached. Wow -- I just noticed these have made it into the MySQL 6.0 product!! Shocking!
So here's another attempt at an analogy, comparing triggers and events to the process of a criminal trial:
A BEFORE trigger is an allegation.
An AFTER trigger is an indictment.
COMMIT is a conviction after a guilty verdict.
ROLLBACK is an acquittal after an innocent verdict.
You only want to put the perpetrator in prison after they are convicted.
Whereas an EVENT is the crime itself.

Resources