I am making a multiplayer quiz-like game. I have chosen to model each individual game instance on the server with Spring State Machine, using @EnableStateMachineFactory. But I need every state machine instance to carry additional game data/state, and to initialize that data at machine startup with custom data (player usernames, for example). Is ExtendedState intended for such things, and if it is, how do I supply custom initial extended-state data when creating the state machine with the factory?
Yes, ExtendedState is the only way to store data within the machine itself. I've used it like that, so it's fine.
To initialize ExtendedState, I'd use the machine's initial action, which is executed when the initial state's entry logic runs. In the UML state machine model, its purpose by definition is to initialize the machine.
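Roughly, a minimal sketch of that (assuming hypothetical GameState/GameEvent enums and a "players" variable, none of which come from the question):

    import java.util.ArrayList;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.statemachine.action.Action;
    import org.springframework.statemachine.config.EnableStateMachineFactory;
    import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
    import org.springframework.statemachine.config.builders.StateMachineStateConfigurer;

    enum GameState { LOBBY, IN_PROGRESS, FINISHED }
    enum GameEvent { START, FINISH }

    @Configuration
    @EnableStateMachineFactory
    class GameStateMachineConfig extends EnumStateMachineConfigurerAdapter<GameState, GameEvent> {

        @Override
        public void configure(StateMachineStateConfigurer<GameState, GameEvent> states)
                throws Exception {
            states
                .withStates()
                    // the initial action runs as part of the initial state's entry logic
                    .initial(GameState.LOBBY, initAction())
                    .state(GameState.IN_PROGRESS)
                    .end(GameState.FINISHED);
        }

        @Bean
        public Action<GameState, GameEvent> initAction() {
            return context -> context.getExtendedState().getVariables()
                    .putIfAbsent("players", new ArrayList<String>());
        }
    }

For per-game startup data such as the usernames, one option is to put the variables into ExtendedState after obtaining the machine from the factory but before starting it, since the factory's getStateMachine() variants take only a machine id, not a custom payload:

    // 'factory' is the injected StateMachineFactory<GameState, GameEvent>
    StateMachine<GameState, GameEvent> machine = factory.getStateMachine(gameId);
    machine.getExtendedState().getVariables().put("players", List.of("alice", "bob"));
    machine.startReactively().block(); // machine.start() on pre-3.x versions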
I was looking at Spring State Machine (spending a small amount of time evaluating it before being moved onto another project).
I wanted to use Papyrus and UML modeling for an Order Flow. This worked; I had a REST interface working. I expanded to look at the persistence demo and created a number of state machines using a cross-reference id.
I used Thymeleaf to show the various orders and their states, and to send events.
This all seemed to work UNTIL any one of the state machines entered a final state (the one that looks like a bullseye). At that point the AbstractPersistStateMachineHandler stopped triggering/listening, and onPersist no longer fired.
Is there an issue with using a final state together with the persistence approach (https://docs.spring.io/spring-statemachine/docs/3.2.0/reference/#statemachine-recipes-persist)?
If I reworked it so that this last state was a "normal" state (but with no exits), then it worked fine, but from a state-model perspective that probably doesn't accurately show that we have reached the end of the lifecycle.
A lot of what I did was based on \spring-statemachine\spring-statemachine-samples\datapersist.
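For reference, this is roughly how the recipe from that sample is wired up (a sketch assuming the docs' PersistStateMachineHandler with a String-typed machine and an "order" message header; the table and column names are made up, and 3.x also offers a handleEventWithStateReactively variant):

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.messaging.support.MessageBuilder;
    import org.springframework.statemachine.StateMachine;
    import org.springframework.statemachine.recipes.persist.PersistStateMachineHandler;

    class OrderStateService {

        private final PersistStateMachineHandler handler;

        OrderStateService(StateMachine<String, String> machine, JdbcTemplate jdbc) {
            this.handler = new PersistStateMachineHandler(machine);
            // onPersist fires after each accepted transition; this is the callback
            // that reportedly stops firing once the machine hits a final state
            this.handler.addPersistStateChangeListener((state, message, transition, stateMachine) ->
                    jdbc.update("update orders set state = ? where id = ?",
                            state.getId(), message.getHeaders().get("order", Long.class)));
        }

        void change(long orderId, String event, String currentState) {
            // force the machine into the persisted state, then feed it the event
            handler.handleEventWithState(
                    MessageBuilder.withPayload(event).setHeader("order", orderId).build(),
                    currentState);
        }
    }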
Is there a way in which I can define global transitions/actions on a Spring state machine?
The expectation is that I won't be transitioning to any other state, but will just accept the event, perform the action, and stay in the current state.
It turned out that there is no support for global transitions. We developed a layer on top of Spring State Machine (SSM) that could be configured with JSON; in that layer we could define a global transition, and when translating to SSM objects, the global transition was added to all the states.
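If you stay on plain SSM, the closest built-in approximation is an internal transition registered on every state: it accepts the event and runs the action without leaving (or re-entering) the current state, so no exit/entry actions fire. A sketch with made-up States/Events enums:

    import org.springframework.statemachine.action.Action;
    import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
    import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;

    enum States { A, B, C }
    enum Events { AUDIT }

    class Config extends EnumStateMachineConfigurerAdapter<States, Events> {

        @Override
        public void configure(StateMachineTransitionConfigurer<States, Events> transitions)
                throws Exception {
            StateMachineTransitionConfigurer<States, Events> t = transitions;
            // emulate a "global" transition: one internal transition per state, so the
            // event is accepted anywhere and the machine stays in its current state
            for (States s : States.values()) {
                t = t.withInternal()
                        .source(s)
                        .event(Events.AUDIT)
                        .action(audit())
                        .and();
            }
        }

        Action<States, Events> audit() {
            return context -> System.out.println("audit from " + context.getSource().getId());
        }
    }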
Is there a hook or API in the provider API for custom Terraform providers to know when all the resource mutations have been completed? I'd like to call an additional endpoint of the service to signal completion.
In the Terraform model, each operation in the plan is expected to be a complete and standalone action that leaves the system in a consistent state where the externally-visible side effects have occurred and no other transient state is present. This design is particularly important for configurations that blend resources from several different providers together, where the other provider could depend on an intermediate result being externally-visible already, such as:
Create a thing X in system A using provider A
Create a thing Y in system B that depends on thing X using provider B
Create a thing Z in system A that depends on thing Y using provider A
For many target systems, this comes "for free" as a natural side-effect of REST-like API design: the mutation operations themselves are designed to transition the system between consistent states, and the new state is visible immediately after the operation is completed (with some caveats about eventual consistency, etc.).
This model is trickier for systems that have a more complex lifecycle, such as expecting a batch of changes to be made before "committing" them to make their side-effects visible. The usual way to model these in Terraform is to make each separate change delimit itself, rather than try to delimit the overall set of changes.
In other words, each separate action should itself be considered its own "transaction", which should be fully applied and all of its transient state ended before the provider returns.
In some particularly tricky systems it can be necessary to serialize these "transactions" because only one can be open at a time. In that case, a provider can use a normal Go mutex to ensure that only one operation from that provider can run at a time. This will decrease Terraform's ability to run actions concurrently, but that is unavoidable if the remote system requires serialization of actions.
Note also that users can write more than one configuration for the same provider using alternate provider configurations, and in that case your provider plugin would be started up multiple times at once with no built-in ability to coordinate actions between them. In that case, the actions for one provider instance will run concurrently with actions for another, because these instances are entirely separate from one another (a separate OS process running the same program).
I'm developing a small CQRS+ES framework and developing applications with it. In my system, I need to log certain client actions and use them for analytics and statistics, and maybe in the future do something in the domain with them. For example, a (web) client downloads some resource(s), and I need to save the date, time, type (download, partial, ...), region or country (maybe from the IP), etc. After that, in some view, the client can see the download count or some complex report. I'm not sure how to implement this feature.
The first solution is to create an analytics context and some aggregate: on each client action, send a command like IncreaseDownloadCounter(resourceId), then handle the command, raise domain events, and update the view. But in this scenario the download has already occurred before I send the command, so it is not really a command, and on the other side version conflicts increase.
The second solution is to raise an event from the client side and update the view model based on it. But handled this way, my event is not stored in the event store, because it is not raised by a command and never changes any domain context. And if I do store it in the event store, there is no aggregate to handle it when it is fetched later for some other use.
The third solution is to raise an event from the client side and store it in another database, maybe with a special table for each event type. But handled this way, I have multiple event stores with different schemas, which makes it difficult to recreate view models and to trace events when rebuilding context state. So if, in the future, I add some domain that uses this type of event, the events will be hard to use.
What is the best approach and solution for this scenario?
The first solution is to create an analytics context and some aggregate
Unquestionably the wrong answer; the event has already happened, so it is too late for the domain model to complain.
What you have is a stream of events. Putting them in the same event store that you use for your aggregate event streams is fine. Putting them in a separate store is also fine. So you are going to need some other constraint to make a good choice.
Typically, reads vastly outnumber writes, so one concern might be that these events are going to saturate the domain store. That might push you towards storing these events separately from your data model (prior art: we typically keep the business data in our persistent book of record, but the sequence of HTTP requests received by the server is typically written to a log instead...).
If you are supporting an operational view, push back on the requirement that the state be recovered after a restart. You might be able to get by with building your view off an in-memory model of the event counts, and use something more practical for the representation of the events.
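To make that concrete, here is a minimal sketch (all names hypothetical) of an in-memory model of event counts; after a restart the view is rebuilt by replaying the stored client events through it:

    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // a client-side event as stored, whether inside or outside the domain event store
    record ResourceDownloaded(String resourceId, Instant at, String country) {}

    class DownloadCountProjection {
        private final Map<String, Long> counts = new ConcurrentHashMap<>();

        // replaying the stored events through apply() rebuilds the view after a restart
        void apply(ResourceDownloaded event) {
            counts.merge(event.resourceId(), 1L, Long::sum);
        }

        long countFor(String resourceId) {
            return counts.getOrDefault(resourceId, 0L);
        }
    }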
Thanks for your complete answer. So I should create something like the event store schema minus some fields (aggregate name or type, version, etc.), collect the client events in that repository, and have some offline process read them and update the read model, or create commands to do something in the domain space.
Something like that, yes. If the view for the client doesn't actually require any validation by your model at all, then building the read model from the externally provided events is fine.
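And if the domain should eventually react to those events, the offline process can derive commands from them. A rough sketch, with every interface assumed rather than taken from any particular framework:

    import java.util.List;

    record DownloadRecorded(String resourceId) {}           // stored client event
    record IncreaseDownloadCounter(String resourceId) {}    // derived command

    interface ClientEventStore { List<DownloadRecorded> readFrom(long offset); }
    interface CommandBus { void send(Object command); }

    class DownloadEventProcessor {
        private final ClientEventStore events;
        private final CommandBus commands;

        DownloadEventProcessor(ClientEventStore events, CommandBus commands) {
            this.events = events;
            this.commands = commands;
        }

        void poll(long fromOffset) {
            for (DownloadRecorded e : events.readFrom(fromOffset)) {
                // the command handler remains the authority that validates this
                commands.send(new IncreaseDownloadCounter(e.resourceId()));
            }
        }
    }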
Are you recommending saving some claim or authorization token of the user and the sending app, for validation in another process?
Maybe, maybe not. The token describes the authority of the event; our own event handler is the authority for the command(s) that is/are derived from the events. It's an interesting question that probably requires more context -- I'd suggest you open a new question on that point.
My idea is to keep track of the states of a domain object with Spring State Machine, i.e. the state machine defines how the domain object transitions between states. When events are persisted to / restored from the event store, the state of the domain object can be (re)generated by sending the events to the state machine.
However, it seems that creating a state machine object is relatively expensive, so it's not that performant to create one whenever a state transition happens on a domain object. If I maintain only one state machine object, I worry about concurrency problems. One approach is to have a "state machine pool", but that gets messy if I have to create state machines for multiple different domain objects.
So is it a good idea to combine Spring State Machine with the event sourcing pattern?
Provided that all the transitions are based on events, I would say that it is a pretty good idea, yes.
The fundamental idea of Event Sourcing is that of ensuring every change to the state of an application is captured in an event object, and that these event objects are themselves stored in the sequence they were applied for the same lifetime as the application state itself.
The main point about event sourcing is that you store the events leading to a particular state - instead of just storing the current state - so that you can replay them up to a given point of time.
Thus, using event sourcing has no impact on how you create your state machines.
However, it seems that creating a state machine object is relatively expensive, so it's not that performant to create one whenever a state transition happens on a domain object.
Creating a state machine every time there is a state transition is not related to event sourcing. Would you do it differently if you were only storing the current state? You'd still need to either create the state machine from the last stored state - or look it up in a cache or a pool - before you could apply the transition.
The only performance hit derived from using event sourcing would be that of replaying the transitions from the beginning in order to reach the current state. Now, if this is costly, you can use snapshots to minimize the number of transitions that must be replayed.
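To illustrate, a sketch of such a snapshot-plus-replay rehydration with Spring State Machine (the enums, EventStore, and snapshot bookkeeping are all assumed; the reactive calls are the 3.x API):

    import java.util.List;

    import org.springframework.messaging.support.MessageBuilder;
    import org.springframework.statemachine.StateMachine;
    import org.springframework.statemachine.config.StateMachineFactory;
    import org.springframework.statemachine.support.DefaultStateMachineContext;

    enum GameState { LOBBY, IN_PROGRESS, FINISHED }
    enum GameEvent { START, FINISH }

    interface EventStore {
        List<GameEvent> eventsAfter(String aggregateId, long version);
    }

    class Rehydrator {

        private final StateMachineFactory<GameState, GameEvent> factory;
        private final EventStore eventStore;

        Rehydrator(StateMachineFactory<GameState, GameEvent> factory, EventStore eventStore) {
            this.factory = factory;
            this.eventStore = eventStore;
        }

        StateMachine<GameState, GameEvent> rehydrate(String aggregateId,
                GameState snapshotState, long snapshotVersion) {
            StateMachine<GameState, GameEvent> machine = factory.getStateMachine(aggregateId);

            // jump straight to the snapshotted state instead of replaying from event #1
            machine.getStateMachineAccessor().doWithAllRegions(access ->
                    access.resetStateMachineReactively(
                            new DefaultStateMachineContext<>(snapshotState, null, null, null))
                          .block());
            machine.startReactively().block();

            // then replay only the events recorded after the snapshot
            for (GameEvent e : eventStore.eventsAfter(aggregateId, snapshotVersion)) {
                machine.sendEvent(MessageBuilder.withPayload(e).build());
            }
            return machine;
        }
    }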