How to use Spring State Machine with a nested state machine

Good day,
I just started learning Spring State Machine and I have the following questions.
First, I would like to know how to configure a state machine that uses a nested state machine. How can this be done programmatically, i.e. via the state machine builder? And how can it be done via Papyrus UML?
My second question is about firing events: once the machine reaches the state that hosts the nested state machine, how do events trigger transitions inside the nested state machine?
My third question is how to exit a nested state machine by firing an event that moves from the parent state (i.e. the state that references the nested state machine)
to another state in the parent state machine.
I would really appreciate a reference to some examples.

After studying the Javadoc and reading a few links, e.g.
https://github.com/spring-projects/spring-statemachine/issues/121
I figured it out.
Programmatically:
Configure the states and transitions of the parent state machine as usual; see
https://www.baeldung.com/spring-state-machine
for how. For a state that references a nested state machine, see the snippet below.
builder.configureStates()
    .withStates()
    .initial("contactList2")
    .state("newContactSM", newContactSM())
    .end("end1");
The state "newContactSM" references a nested state machine. The nested state machine is defined as follows:
public StateMachine<String, String> newContactSM() throws Exception
{
    logger.info(" ------ newContactSM() -------- ");
    // checkCurrentFlow();
    Builder<String, String> builder = StateMachineBuilder.builder();
    builder.configureConfiguration().withConfiguration().machineId("newContactBTF");
    logger.info(" configure states ..");
    builder.configureStates()
        .withStates()
        .initial("newContact")
        .end("end2")
        .states(new HashSet<String>(Arrays.asList("otherContact"))); // (Arrays.asList("S1", "S2", "S3")));
    logger.info(" states configured ! ");
    // ...
}
To do it via UML:
Just make sure the state "newContactSM" references the nested state machine.
Once the setup is done, you can fire events as normal; Spring State Machine handles the rest.
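To see conceptually what the framework does with events while the parent machine sits in a submachine state, here is a hand-rolled sketch in plain Java. This is not the Spring State Machine API; all class, state, and event names are invented. Spring does this delegation internally: events are tried against the nested machine first, and events it doesn't handle (such as the exit event) fall through to the parent.

```java
import java.util.Map;

// Minimal hand-rolled state machine: state -> (event -> next state).
class SimpleMachine {
    String state;
    final Map<String, Map<String, String>> transitions;

    SimpleMachine(String initial, Map<String, Map<String, String>> transitions) {
        this.state = initial;
        this.transitions = transitions;
    }

    boolean fire(String event) {
        Map<String, String> outgoing = transitions.get(state);
        if (outgoing != null && outgoing.containsKey(event)) {
            state = outgoing.get(event);
            return true;
        }
        return false; // event not handled in the current state
    }
}

// Parent machine hosting a nested machine in one of its states.
class ParentWithSubmachine {
    final SimpleMachine parent;
    final SimpleMachine nested;
    final String submachineState; // parent state that hosts the nested machine

    ParentWithSubmachine(SimpleMachine parent, SimpleMachine nested, String submachineState) {
        this.parent = parent;
        this.nested = nested;
        this.submachineState = submachineState;
    }

    void fire(String event) {
        // While in the submachine state, offer the event to the nested
        // machine first; unhandled events fall through to the parent,
        // which is how an exit event leaves the nested machine.
        if (parent.state.equals(submachineState) && nested.fire(event)) {
            return;
        }
        parent.fire(event);
    }
}
```

With `submachineState = "newContactSM"`, nested events advance the inner machine, while an exit event defined on the parent's transition table moves the parent on to `end1`.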

Related

resetStateMachine does not clear its id

I am using a pooled list of StateMachine instances (at present limited to one) and am switching the context the state machine works with. However, the StateMachine id is never updated, so I end up overwriting my state machine in the db when I try to persist. More details below, followed by the question.
My question is: when resetStateMachine (in AbstractStateMachine.java) is called with a null context (i.e. trying to create a new context), why does it not clear out the machine's current id? I understand why the UUID stays, since that is unique to the machine, but the id relates to the context, doesn't it? If the context is not null, it pulls the id from the stateMachineContext.
Extracts of relevant sources:
If context is null:
log.info("Got null context, resetting to initial state and clearing extended state");
this.currentState = this.initialState;
this.extendedState.getVariables().clear();
If context is not null:
this.setId(stateMachineContext.getId());
When I later call persist.restore to pull back a state machine context, this means an old id is still present, and I end up overwriting rather than persisting under a new id.
This is with the currently released version, 1.2.5.RELEASE.
Yes, I don't see any reason why we would not clear the id as well when a null context is passed. Would you mind creating a GitHub issue to track that change request?
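The behavior the question is asking for can be sketched like this. Note that the field and method names below are illustrative only, not the real AbstractStateMachine internals: on a null context the id would be cleared along with the current state and extended state, since the id belongs to the context rather than the machine.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the reset behavior discussed above; names are
// invented and do NOT mirror the actual AbstractStateMachine source.
class ResettableMachine {
    String id;                 // context-scoped id (the machine UUID would stay)
    String currentState;
    final String initialState;
    final Map<String, Object> extendedState = new HashMap<>();

    ResettableMachine(String initialState) {
        this.initialState = initialState;
        this.currentState = initialState;
    }

    record MachineContext(String id, String state) {}

    void resetStateMachine(MachineContext context) {
        if (context == null) {
            // The requested fix: reset state, clear extended state,
            // AND clear the context-scoped id.
            this.currentState = this.initialState;
            this.extendedState.clear();
            this.id = null;
            return;
        }
        // Non-null context: adopt its id and state, as today.
        this.id = context.id();
        this.currentState = context.state();
    }
}
```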

Spring Statemachine Forks

I have made good progress with the state machines up to now. My most recent problem arose when I wanted to use a fork (I'm using UML). The fork didn't work as it is supposed to, and I think it's because of the persistence. I persist my machine in Redis; refer to the image below.
This is my top-level machine, where Manage-commands is a submachine reference and the top region is as shown.
Now say I persisted some state in Redis from the lower region, and next an ONLINE event comes. The machine does not accept the event, clearly because I have asked the machine to restore its state from Redis with a given key.
But I want both regions to be persisted, so that either one is selected according to the event.
Is there any way to achieve this?
Below is how I persist and restore:
private void feedMachine(StateMachine<String, String> stateMachine, String user, GenericMessage<String> event)
        throws Exception {
    stateMachine.sendEvent(event);
    System.out.println("persist machine --- > state :" + stateMachine.getState().toString());
    redisStateMachinePersister.persist(stateMachine, "testprefixSw:" + user);
}

private StateMachine<String, String> resetStateMachineFromStore(StateMachine<String, String> stateMachine,
        String user) throws Exception {
    StateMachine<String, String> machine = redisStateMachinePersister.restore(stateMachine, "testprefixSw:" + user);
    System.out.println("restore machine --- > state :" + machine.getState().toString());
    return machine;
}
It's a bit weird, as I found some other persistence issues which I fixed in 1.2.x. They are probably not related to your issues, but I would have expected you to see similar errors. Anyway, could you check RedisPersistTests.java and see if there's something different from what you're doing? I haven't yet tried submachine refs, but they should not make any difference from a persistence point of view.

Parallel execution in spring state machine

I'm trying to construct a state machine from the following UML model using Papyrus.
The entry actions for each of the stages are registered using DefaultStateMachineComponentResolver, resolving to the respective EntryAction classes in my Spring app.
My requirements are:
1) From the CS stage, upon the triggering event SUCCESS, execution should fork into two threads.
2) In one thread DE1 and TE1 should execute sequentially, and in the other thread DE2 and TE2 should execute sequentially.
3) The transition to the END state should happen only if both threads execute successfully,
i.e. from TE1 a transition should happen to the join state signalled by event SUCCESS, and from TE2 a transition should happen to the join state signalled by event SUCCESS.
4) That is, the transition to the END state should happen after the successful execution of the two threads.
5) While executing each stage, if any task fails (tasks are written in the EntryAction classes),
the state machine should navigate to the END state, using the signals FAILURE or TERMINATED (based on the severity of the error that occurred).
Here is the code I used to build the state machine and trigger execution:
Builder<String, String> builder = StateMachineBuilder.<String, String>builder();
builder.configureConfiguration()
    .withConfiguration()
    .autoStartup(false)
    .listener(listener())
    .beanFactory(this.applicationContext.getAutowireCapableBeanFactory()); //.taskExecutor(taskExecutor());
DefaultStateMachineComponentResolver<String, String> resolver = new DefaultStateMachineComponentResolver<>();
resolver.registerAction("startEntryAction", this.startEntryAction);
resolver.registerAction("apEntryAction", this.apEntryAction);
resolver.registerAction("psEntryAction", this.psEntryAction);
// all entry action classes are registered
...
...
UmlStateMachineModelFactory umlStateMachineModelFactory = new UmlStateMachineModelFactory("classpath:model.uml");
umlStateMachineModelFactory.setStateMachineComponentResolver(resolver);
builder.configureModel().withModel().factory(umlStateMachineModelFactory);
StateMachine<String, String> stateMachine = builder.build();
stateMachine.start();
Issues I faced:
1) While using the taskExecutor, state machine execution did not start.
2) After commenting out the taskExecutor, execution was triggered and the console showed the logs from the entry action classes.
3) In each entry action class I just added the code below, to transition to the next state and to log:
@Override
public void execute(StateContext<String, String> paramStateContext) {
    LOGGER.debug("Start State entered! ");
    paramStateContext.getStateMachine().sendEvent("SUCCESS");
}
4) But the problem is that, according to the logs, the state TE1 was never entered.
My requirement was that the END state should be entered only after executing the tasks in TE1EntryAction and TE2EntryAction.
Please find the logs below:
19:03:54.963 DEBUG o.i.r.p.a.StartEntryAction - Start State entered!
19:03:55.007 DEBUG o.i.r.p.a.APEntryAction - AP State entered!
19:03:55.007 DEBUG o.i.r.p.a.PSEntryAction - PS State entered!
19:03:55.007 DEBUG o.i.r.p.a.PBEntryAction - PB State entered!
19:03:55.007 DEBUG o.i.r.p.a.CSEntryAction - CS State entered!
19:03:55.007 DEBUG o.i.r.p.a.DE1EntryAction - DE1 State entered!
19:03:55.007 DEBUG o.i.r.p.a.DE2EntryAction - DE2 State entered!
19:03:55.007 DEBUG o.i.r.p.a.TE2EntryAction - TE2 State entered!
19:03:55.023 DEBUG o.i.r.p.a.EndStateEntryAction - END State entered!
Does the problem exist in the UML model I created?
If so, what should the state diagram look like?
Thanks a ton for the help.
At least your fork/join is wrong, as you can't have those without using orthogonal regions.
The source for the above is in simple-forkjoin.uml.
I'm not sure why Papyrus allows you to draw a fork/join like that, as the UML spec clearly states:
Fork Pseudostates serve to split an incoming Transition into two or more Transitions terminating on
Vertices in orthogonal Regions of a composite State.
I should probably add a model verifier for when a user tries to add forks/joins without using regions.
Also, I'm not sure what should happen with the transition from TE2 to END while the machine is waiting for the join to happen, so I'd try to avoid that.
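The fork/join semantics the question is after (END entered only after both branches finish) can be modeled in plain Java with CompletableFuture. This is a conceptual sketch of orthogonal regions, not Spring State Machine code; the state names simply mirror the question's diagram.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

// Conceptual model: fork into two regions (DE1->TE1 and DE2->TE2),
// join both before entering END. Plain Java, not the Spring SM API.
class ForkJoinModel {
    static List<String> run() {
        List<String> log = new CopyOnWriteArrayList<>();
        log.add("CS");
        // Fork: each region runs its states sequentially on its own thread.
        CompletableFuture<Void> region1 = CompletableFuture.runAsync(() -> {
            log.add("DE1");
            log.add("TE1");
        });
        CompletableFuture<Void> region2 = CompletableFuture.runAsync(() -> {
            log.add("DE2");
            log.add("TE2");
        });
        // Join: block until BOTH regions complete, only then enter END.
        CompletableFuture.allOf(region1, region2).join();
        log.add("END");
        return log;
    }
}
```

Within each region the order is sequential, the interleaving between regions is arbitrary, and END is always last; that is exactly the guarantee a UML join pseudostate over orthogonal regions provides.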

spring entity concurrency control while persisting into database

I am trying to control concurrent access to the same object in a Spring + JPA configuration.
For example, I have an entity named A, and multiple processes update the same instance of A.
I am using a version field to control this, but here is the issue:
Suppose two processes read the same entity (A) at version=1.
One process updates the entity and the version gets incremented.
When the second process tries to persist the object, an optimistic lock exception is thrown.
I am using Spring services and repositories to access the objects.
Could you please help me here?
What's the problem then? That's how it's supposed to work.
You can catch the JpaOptimisticLockingFailureException and then decide what to do from there.
This, for example, would give a validation error message on a Spring MVC form:
...
if (!bindingResult.hasErrors()) {
    try {
        fooRepository.save(foo);
    } catch (JpaOptimisticLockingFailureException exp) {
        bindingResult.reject("", "This record was modified by another user. Try refreshing the page.");
    }
}
...
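Besides surfacing a form error, another common strategy for the losing writer is a bounded retry: re-read the entity (picking up the fresh version), re-apply the change, and save again. Below is a self-contained sketch of that pattern with a toy in-memory store standing in for the JPA repository; all names are illustrative, and in a real app the version check is done by JPA's @Version support, which throws the optimistic-locking exception for you.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.UnaryOperator;

// Toy versioned store illustrating optimistic locking plus retry.
class VersionedStore {
    record Entity(long version, String payload) {}
    static class StaleVersionException extends RuntimeException {}

    private final Map<Long, Entity> table = new ConcurrentHashMap<>();

    void insert(long id, String payload) {
        table.put(id, new Entity(1, payload));
    }

    Entity read(long id) {
        return table.get(id);
    }

    // Save fails if someone committed a newer version since our read,
    // mimicking what JPA does with a @Version field.
    synchronized void save(long id, long expectedVersion, String newPayload) {
        if (table.get(id).version() != expectedVersion) {
            throw new StaleVersionException();
        }
        table.put(id, new Entity(expectedVersion + 1, newPayload));
    }

    // Retry loop: on conflict, re-read and re-apply the change.
    void updateWithRetry(long id, UnaryOperator<String> change, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            Entity snapshot = read(id);
            try {
                save(id, snapshot.version(), change.apply(snapshot.payload()));
                return;
            } catch (StaleVersionException e) {
                // another writer won the race; loop re-reads the new version
            }
        }
        throw new IllegalStateException("gave up after " + maxAttempts + " attempts");
    }
}
```

Whether retrying silently is appropriate depends on the domain: for commutative changes (counters, appends) it is usually fine, while for form edits the user should often see the conflict instead.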

Relation between command handlers, aggregates, the repository and the event store in CQRS

I'd like to understand some details of the relations between command handlers, aggregates, the repository and the event store in CQRS-based systems.
What I've understood so far:
Command handlers receive commands from the bus. They are responsible for loading the appropriate aggregate from the repository and call the domain logic on the aggregate. Once finished, they remove the command from the bus.
An aggregate provides behavior and internal state. State is never public. The only way to change state is by using the behavior. The methods that model this behavior create events from the command's properties and apply these events to the aggregate, which in turn calls an event handler that sets the internal state accordingly.
The repository simply allows loading aggregates by a given ID and adding new aggregates. Basically, the repository connects the domain to the event store.
The event store, last but not least, is responsible for storing events to a database (or whatever storage is used), and reloading these events as a so-called event stream.
So far, so good.
Now there are some issues that I did not yet get:
If a command handler is to call behavior on an already existing aggregate, everything is quite easy. The command handler gets a reference to the repository, calls its loadById method, and the aggregate is returned. But what does the command handler do when there is no aggregate yet, but one should be created? From my understanding, the aggregate should later on be rebuilt using the events. This means that creation of the aggregate is done in reply to a fooCreated event. But to be able to store any event (including the fooCreated one), I need an aggregate. So this looks like a chicken-and-egg problem to me: I cannot create the aggregate without the event, but the only component that should create events is the aggregate. So basically it comes down to: how do I create new aggregates, and who does what?
When an aggregate triggers an event, an internal event handler responds to it (typically by being called via an apply method) and changes the aggregate's state. How is this event handed over to the repository? Who originates the "please send the new events to the repository / event store" action? The aggregate itself? The repository, by watching the aggregate? Someone else who is subscribed to the internal events?
Last but not least I have a problem understanding the concept of an event stream correctly: In my imagination, it's simply something like an ordered list of events. What's of importance is that it's "ordered". Is this right?
The following is based on my own experience and my experiments with various frameworks like Lokad.CQRS, NCQRS, etc. I'm sure there are multiple ways to handle this. I'll post what makes most sense to me.
1. Aggregate Creation:
Every time a command handler needs an aggregate, it uses a repository. The repository retrieves the respective list of events from the event store and calls an overloaded constructor, injecting the events:
var stream = eventStore.LoadStream(id)
var user = new User(stream)
If the aggregate didn't exist before, the stream will be empty and the newly created object will be in its original state. You might want to make sure that in this state only a few commands are allowed to bring the aggregate to life, e.g. User.Create().
2. Storage of new Events
Command handling happens inside a Unit of Work. During command execution every resulting event will be added to a list inside the aggregate (User.Changes). Once execution is finished, the changes will be appended to the event store. In the example below this happens in the following line:
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
3. Order of Events
Just imagine what would happen if two subsequent CustomerMoved events were replayed in the wrong order.
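To make that concrete, here is a tiny illustration in Java. The CustomerMoved event and the Customer aggregate are invented for this example: replaying the same two events in different orders produces different final state, because last-write-wins events are not commutative.

```java
import java.util.List;

// Minimal replay example: event order determines final state.
class Customer {
    String address = "unknown";

    record CustomerMoved(String newAddress) {}

    // Rebuild state by replaying the event stream in order.
    void replay(List<CustomerMoved> stream) {
        for (CustomerMoved e : stream) {
            address = e.newAddress();
        }
    }
}
```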
An Example
I'll try to illustrate this with a piece of pseudo-code (I deliberately left repository concerns inside the command handler to show what happens behind the scenes):
Application Service:
UserCommandHandler
Handle(CreateUser cmd)
stream = store.LoadStream(cmd.UserId)
user = new User(stream.Events)
user.Create(cmd.UserName, ...)
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
Handle(BlockUser cmd)
stream = store.LoadStream(cmd.UserId)
user = new User(stream.Events)
user.Block(cmd.Reason)
store.AppendToStream(cmd.UserId, stream.Version, user.Changes)
Aggregate:
User
created = false
blocked = false
Changes = new List<Event>
ctor(eventStream)
isNewEvent = false
foreach (event in eventStream)
this.Apply(event, isNewEvent)
Create(userName, ...)
if (this.created) throw "User already exists"
isNewEvent = true
this.Apply(new UserCreated(...), isNewEvent)
Block(reason)
if (!this.created) throw "No such user"
if (this.blocked) throw "User is already blocked"
isNewEvent = true
this.Apply(new UserBlocked(...), isNewEvent)
Apply(userCreatedEvent, isNewEvent)
this.created = true
if (isNewEvent) this.Changes.Add(userCreatedEvent)
Apply(userBlockedEvent, isNewEvent)
this.blocked = true
if (isNewEvent) this.Changes.Add(userBlockedEvent)
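The aggregate pseudo-code above translates fairly directly into Java. The sketch below keeps the same shape (rehydration via the constructor, guard checks in the behavior methods, a Changes list collecting only new events); the event types are minimal invented markers.

```java
import java.util.ArrayList;
import java.util.List;

// Java sketch of the User aggregate from the pseudo-code above.
class User {
    interface Event {}
    record UserCreated(String userName) implements Event {}
    record UserBlocked(String reason) implements Event {}

    private boolean created = false;
    private boolean blocked = false;
    final List<Event> changes = new ArrayList<>();

    // Rehydration: replay historical events WITHOUT recording them.
    User(List<Event> eventStream) {
        for (Event e : eventStream) {
            apply(e, false);
        }
    }

    void create(String userName) {
        if (created) throw new IllegalStateException("User already exists");
        apply(new UserCreated(userName), true);
    }

    void block(String reason) {
        if (!created) throw new IllegalStateException("No such user");
        if (blocked) throw new IllegalStateException("User is already blocked");
        apply(new UserBlocked(reason), true);
    }

    // Single apply path for both replayed and new events; only new
    // events are collected for the event store.
    private void apply(Event e, boolean isNew) {
        if (e instanceof UserCreated) created = true;
        else if (e instanceof UserBlocked) blocked = true;
        if (isNew) changes.add(e);
    }
}
```

After command handling, the handler would append `changes` to the event store, exactly as `store.AppendToStream(...)` does in the pseudo-code.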
Update:
As a side note: Yves' answer reminded me of an interesting article by Udi Dahan from a couple of years ago:
Don’t Create Aggregate Roots
A small variation on Dennis's excellent answer:
When dealing with "creational" use cases (i.e. that should spin off new aggregates), try to find another aggregate or factory you can move that responsibility to. This does not conflict with having a ctor that takes events to hydrate (or any other mechanism to rehydrate for that matter). Sometimes the factory is just a static method (good for "context"/"intent" capturing), sometimes it's an instance method of another aggregate (good place for "data" inheritance), sometimes it's an explicit factory object (good place for "complex" creation logic).
I like to provide an explicit GetChanges() method on my aggregate that returns the internal list as an array. If my aggregate is to stay in memory beyond one execution, I also add an AcceptChanges() method to indicate the internal list should be cleared (typically called after things were flushed to the event store). You can use either a pull (GetChanges/Changes) or push (think .net event or IObservable) based model here. Much depends on the transactional semantics, tech, needs, etc ...
Your event stream is a linked list, where each revision (event/changeset) points to the previous one (a.k.a. the parent). Your event stream is the sequence of events/changes that happened to a specific aggregate. The order is only guaranteed within the aggregate boundary.
I almost agree with yves-reynhout and dennis-traub but I want to show you how I do this. I want to strip my aggregates of the responsibility to apply the events on themselves or to re-hydrate themselves; otherwise there is a lot of code duplication: every aggregate constructor will look the same:
UserAggregate:
ctor(eventStream)
foreach (event in eventStream)
this.Apply(event)
OrderAggregate:
ctor(eventStream)
foreach (event in eventStream)
this.Apply(event)
ProfileAggregate:
ctor(eventStream)
foreach (event in eventStream)
this.Apply(event)
Those responsibilities could be left to the command dispatcher. The command is handled directly by the aggregate.
Command dispatcher class
dispatchCommand(command) method:
newEvents = ConcurentProofFunctionCaller.executeFunctionUntilSucceeds(tryToDispatchCommand)
EventDispatcher.dispatchEvents(newEvents)
tryToDispatchCommand(command) method:
aggregateClass = CommandSubscriber.getAggregateClassForCommand(command)
aggregate = AggregateRepository.loadAggregate(aggregateClass, command.getAggregateId())
newEvents = CommandApplier.applyCommandOnAggregate(aggregate, command)
AggregateRepository.saveAggregate(command.getAggregateId(), aggregate, newEvents)
ConcurentProofFunctionCaller class
executeFunctionUntilSucceeds(pureFunction) method:
do this n times
try
call result=pureFunction()
return result
catch(ConcurentWriteException)
continue
throw TooManyRetries
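The retry wrapper sketched in the pseudo-code above looks like this in Java. Class and exception names are my own translation of the pseudo-code (with the spelling normalized to "Concurrent"); the function is retried whenever a concurrent write is detected, and gives up after n attempts.

```java
import java.util.function.Supplier;

// Java sketch of the ConcurentProofFunctionCaller pseudo-code above.
class ConcurrentProofFunctionCaller {
    static class ConcurrentWriteException extends RuntimeException {}
    static class TooManyRetriesException extends RuntimeException {}

    static <T> T executeUntilSucceeds(Supplier<T> function, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return function.get();
            } catch (ConcurrentWriteException e) {
                // another writer won the race; loop and try again,
                // which reloads the aggregate with the latest events
            }
        }
        throw new TooManyRetriesException();
    }
}
```

This works because tryToDispatchCommand is effectively a pure function of the stored events: re-running it reloads the aggregate at the new version, so a retry is safe.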
AggregateRepository class
loadAggregate(aggregateClass, aggregateId) method:
aggregate = new aggregateClass
priorEvents = EventStore.loadEvents()
this.applyEventsOnAggregate(aggregate, priorEvents)
saveAggregate(aggregateId, aggregate, newEvents)
this.applyEventsOnAggregate(aggregate, newEvents)
EventStore.saveEventsForAggregate(aggregateId, newEvents, priorEvents.version)
SomeAggregate class
handleCommand1(command1) method:
return new SomeEvent or throw someException BUT don't change state!
applySomeEvent(SomeEvent) method:
changeStateSomehow() and not throw any exception and don't return anything!
Keep in mind that this is pseudo-code projected from a PHP application; the real code should have dependencies injected and other responsibilities refactored out into other classes. The idea is to keep aggregates as clean as possible and avoid code duplication.
Some important aspects about aggregates:
command handlers should not change state; they yield events or throw exceptions
event appliers should not throw any exception and should not return anything; they only change internal state
An open-source PHP implementation of this could be found here.
