I am starting to use Spring Statemachine and I am having some trouble managing the state of my objects.
My state machine is of type StateMachine<ShipmentState, ShipmentEvent>.
My business object, Shipment, has an enum property (state) of type ShipmentState, which should hold the state-machine state of the shipment. Here is my desired workflow:
Load a Shipment from the database.
Set the current state of the state machine from the ShipmentState held in that Shipment instance.
Send an event to the state machine.
Get the resultant state from the state machine (post-event) and set the ShipmentState in my Shipment instance.
Save the Shipment instance.
The problem is: How do I set the current state of an existing StateMachine?
My current approach is this: for every event, create a new StateMachine instance (using a StateMachineBuilder), specifying the initial state according to a Shipment instance. For example:
@Service
public class StateMachineServiceImpl implements IStateMachineService {

    @Autowired
    private IShipmentService shipmentService;

    @Override
    public StateMachine<ShipmentState, ShipmentEvent> getShipmentStateMachine(Shipment aShipment) throws Exception {
        Builder<ShipmentState, ShipmentEvent> builder = StateMachineBuilder.builder();
        builder.configureStates().withStates()
                .state(ShipmentState.S1)
                .state(ShipmentState.S2)
                .state(ShipmentState.S3)
                .initial(aShipment.getState())
                .end(ShipmentState.S4);
        builder.configureTransitions()
                .withExternal()
                    .source(ShipmentState.S1).target(ShipmentState.S1)
                    .event(ShipmentEvent.S3).action(shipmentService.updateAction())
                .and()
                .withExternal()
                    .source(ShipmentState.S1).target(ShipmentState.S2)
                    .event(ShipmentEvent.S3).action(shipmentService.finalizeAction())
                .and()
                .withExternal()
                    .source(ShipmentState.S3).target(ShipmentState.S4)
                    .event(ShipmentEvent.S5).action(shipmentService.closeAction());
        return builder.build();
    }
}
What do you think of my approach?
There is no issue with the approach. You can reset the state machine to a particular state using the code below.
stateMachine.getStateMachineAccessor().doWithAllRegions(access -> access
.resetStateMachine(new DefaultStateMachineContext<>(state, null, null,null)));
You can pass arguments to the DefaultStateMachineContext according to your use case.
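To make the full load → reset → event → persist cycle concrete, here is a framework-free sketch. The transition table and all names are hypothetical stand-ins for the configured state machine, not Spring Statemachine API:

```java
import java.util.Map;

// Framework-free sketch of the rehydrate -> event -> persist cycle.
// The transition table stands in for the state machine configuration.
public class ShipmentWorkflowSketch {

    enum ShipmentState { S1, S2, S3, S4 }
    enum ShipmentEvent { S3, S5 }

    // (currentState, event) -> nextState, mirroring the SM config
    static final Map<ShipmentState, Map<ShipmentEvent, ShipmentState>> TRANSITIONS = Map.of(
            ShipmentState.S1, Map.of(ShipmentEvent.S3, ShipmentState.S2),
            ShipmentState.S3, Map.of(ShipmentEvent.S5, ShipmentState.S4));

    // 1) take the state loaded from the DB, 2) apply the event,
    // 3) return the state to write back onto the Shipment
    static ShipmentState sendEvent(ShipmentState loaded, ShipmentEvent event) {
        return TRANSITIONS.getOrDefault(loaded, Map.of()).getOrDefault(event, loaded);
    }

    public static void main(String[] args) {
        ShipmentState persisted = ShipmentState.S1;             // loaded from DB
        ShipmentState next = sendEvent(persisted, ShipmentEvent.S3);
        System.out.println(next);                               // state to save back
    }
}
```

With resetStateMachine as shown above, the same cycle uses one long-lived machine instead of rebuilding it per event.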
In my Spring Boot application I need to read data from one schema and write to another. To do so I followed this guide (https://github.com/spring-projects/spring-data-examples/tree/main/jpa/multitenant/schema) and used this answer (https://stackoverflow.com/a/47776205/10857151) to be able to change the schema used at runtime.
This works fine inside a service without any transaction scope, but it doesn't work (exception: session/EntityManager is closed) in a more complex architecture where a couple of services share a transaction to ensure rollback.
Below is a simple example of the architecture:
//simple JPA repositories
private FirstRepository repository;
private SecondRepository secondRepository;
private Mapper mapper;
private SchemaUpdater schemaUpdater;

@Transactional
public void entrypoint(String idSource, String idTarget) {
    //copy first object
    firstCopyService(idSource, idTarget);
    //copy second object
    secondCopyService(idSource, idTarget);
}

@Transactional
public void firstCopyService(String idSource, String idTarget) {
    //change schema to the source default
    schemaUpdater.changeToSourceSchema();
    Object obj = repository.get(idSource);
    //convert obj before persist - set new id reference and other things
    obj = mapper.prepareObjToPersist(obj, idTarget);
    //change schema to the target default
    schemaUpdater.changeToTargetSchema();
    repository.saveAndFlush(obj);
}

@Transactional
public void secondCopyService(String idSource, String idTarget) {
    //change schema to the source default
    schemaUpdater.changeToSourceSchema();
    Object obj = secondRepository.get(idSource);
    //convert obj before persist
    obj = mapper.prepareObjToPersist(obj);
    //change schema to the target default
    schemaUpdater.changeToTargetSchema();
    secondRepository.saveAndFlush(obj);
}
I need to know the best solution to ensure this dynamic switch while maintaining the transaction scope in each service, without causing problems when restoring and cleaning the EntityManager session.
Thanks
I would like to propagate JTA state (i.e. the transaction) between a transactional REST endpoint and the reactive-messaging connector it emits a message to.
@Inject
@Channel("test")
Emitter<String> emitter;

@POST
@Transactional
public Response test() {
    emitter.send("test");
    return Response.ok().build();
}
and
@ApplicationScoped
@Connector("test")
public class TestConnector implements OutgoingConnectorFactory {

    @Inject
    TransactionManager tm;

    @Override
    public SubscriberBuilder<? extends Message<?>, Void> getSubscriberBuilder(Config config) {
        return ReactiveStreams.<Message<?>>builder()
                .flatMapCompletionStage(message -> {
                    tm.getTransaction(); // = null
                    return message.ack();
                })
                .ignore();
    }
}
As I understand it, context propagation is responsible for making the transaction available (see io.smallrye.context.jta.context.propagation.JtaContextProvider#currentContext). The problem seems to be that currentContext gets created on subscription, which happens when the injection point (Emitter<String> emitter) gets its instance. That is too early to properly capture the transaction.
What am I missing?
By the way, I am having the same problem when using @Incoming / @Outgoing instead of the emitter. I have chosen this example because it is easy to understand and reproduce.
At the moment, you need to pass the current transaction in the message metadata. That way it is propagated to your downstream components (as well as to the connector).
Note that the transaction tends to be attached to the request scope, which means that by the time it reaches your connector it may already be too late to use it. So make sure your endpoint is asynchronous and only returns once the emitted message is acknowledged.
Context propagation is not going to help in this case, as the underlying streams are built at startup time (at build time in Quarkus), so there are no captured contexts.
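The metadata approach can be sketched without the framework. Message, TxContext, and getMetadata below are minimal hypothetical stand-ins illustrating the pattern of carrying the transaction with the message rather than relying on context propagation; they are not the SmallRye types:

```java
import java.util.Map;
import java.util.Optional;

// Minimal stand-ins for a message with metadata, illustrating the pattern
// of carrying the current transaction alongside the payload.
public class MetadataPropagationSketch {

    record TxContext(String txId) {}        // stand-in for the JTA Transaction

    record Message<T>(T payload, Map<Class<?>, Object> metadata) {
        <M> Optional<M> getMetadata(Class<M> type) {
            return Optional.ofNullable(type.cast(metadata.get(type)));
        }
    }

    public static void main(String[] args) {
        // Producer side: capture the transaction while the request scope is alive
        TxContext tx = new TxContext("tx-42");
        Message<String> msg = new Message<>("test", Map.of(TxContext.class, tx));

        // Consumer/connector side: read it from the metadata, not from the TM
        String txId = msg.getMetadata(TxContext.class).map(TxContext::txId).orElse("none");
        System.out.println(txId);
    }
}
```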
We're using an #EntityListener class to act on some changes to the repository.
Similarly, is there a way to listen to changes that occur on a join table?
Example : service_vehicles table in db
We have services and vehicles tables and we can assign vehicles to services (many to one)
The following #EntityListener is not triggered when I add a vehicle to a service.
@PostPersist
@PostUpdate
@PostRemove
private void afterAnyOperation(Object object) {
    LOG.debug("Handling entity change for obj:{}", object);
}
I think it should be possible to register a listener via the config property hibernate.ejb.event.post-collection-update, setting it to the name of a class that implements PostCollectionUpdateEventListener.
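If that legacy property route works in your Hibernate version, the wiring would look roughly like this; com.example.MyCollectionListener is a hypothetical class name, and the key is the one named above:

```properties
# hypothetical listener class implementing PostCollectionUpdateEventListener
hibernate.ejb.event.post-collection-update=com.example.MyCollectionListener
```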
I am working on a project where we use a state machine to realize a workflow. I am having some trouble getting comfortable with what was put in place, and I would like to see whether there may be a better design/implementation for my problem.
I will try to show what we have at the moment.
Please ignore process_agent for the moment; I would like to focus on process_state to begin with. I simply want to create a process, and the state machine shall immediately transition from CREATED to ASSIGNED and persist that state in the Entity table (by default I would simply set the current user as the agent for the time being).
There is a table Entity with two information: process_agent and process_state
There are only three States for the moment, defined as Enums: CREATED, ASSIGNED and IN_PROCESS
There are only two Events at the moment, defined as Enums: ASSIGN_TO_AGENT and START_PROCESS
There is an endpoint in the controller for the creation of a process that simply hands over to the service:
// In the Controller
// mapper is a MapStruct mapper; it simply copies fields from view to entity and vice versa
ResponseEntity<EntityView> create(@RequestBody final EntityView view) {
    final Entity createdEntity = service.create(view);
    final EntityView createdEntityView = mapper.toView(createdEntity); //map the entity to its view
    return status(CREATED).body(createdEntityView);
}
// In the Service
// mapper is a MapStruct mapper; it simply copies fields from view to entity and vice versa
// stateHandler is a custom class to handle an event, see below
Entity entity = new Entity();
mapper.updateFromView(entityView, entity);
entity.setInitState(CREATED);
final Message<Event> message = MessageBuilder.withPayload(Event.ASSIGN_TO_AGENT)
        .setHeader("ENTITY_HEADER", entity)
        .build();
stateHandler.handleEvent(message);
entity.setProcessAgent(...get the current user's id somehow...);
...
return entity;
StateHandler handles the event messaging. That is the part I find difficult and feel I should question: one basically gets a state machine, resets it to the given state, and runs it in order to intercept a transition; once intercepted, the new target state is persisted to the entity's table:
// stateMachineFactory is autowired into the state handler
// repository is autowired into the state handler
public void handleEvent(final Message<Event> message) {
    final Entity entity = message.getHeaders().get("ENTITY_HEADER", Entity.class);
    final State currentState = entity.getProcessState();
    final StateMachine<State, Event> machine = stateMachineFactory.getStateMachine();
    machine.getStateMachineAccessor().doWithAllRegions(accessor -> accessor.resetStateMachine(
            new DefaultStateMachineContext<>(currentState, null, null, null, null)
    ));
    machine.getStateMachineAccessor().doWithAllRegions(accessor -> accessor.addStateMachineInterceptor(
            new StateMachineInterceptorAdapter<State, Event>() {
                @Override
                public StateContext<State, Event> postTransition(final StateContext<State, Event> stateContext) {
                    final Entity entity1 = stateContext.getMessage().getHeaders().get("ENTITY_HEADER", Entity.class);
                    if (entity1 != null) {
                        entity1.setState(stateContext.getTarget().getId());
                        repository.save(entity1);
                        return stateContext;
                    }
                    // if entity is null then throw exception
                    ... omitted exception handling
                }
            }
    ));
    log.debug("Starting state machine to process [{}]", entity);
    machine.start();
    machine.sendEvent(message);
    machine.stop();
}
For completeness the following StateMachineConfig:
@Override
public void configure(final StateMachineConfigurationConfigurer<State, Event> config) throws Exception {
    config.withConfiguration()
            .autoStartup(false);
}

@Override
public void configure(final StateMachineStateConfigurer<State, Event> states) throws Exception {
    states.withStates()
            .initial(State.CREATED)
            .states(EnumSet.allOf(State.class));
}

@Override
public void configure(final StateMachineTransitionConfigurer<State, Event> transitions) throws Exception {
    transitions.withExternal()
            .source(State.CREATED)
            .target(State.ASSIGNED)
            .event(Event.ASSIGN_TO_AGENT)
            .and()
            .withExternal()
            .source(State.ASSIGNED)
            .target(State.IN_PROCESS)
            .event(Event.START_PROCESS);
}
I hope I could be as complete as possible. Please let me know if there are any clarifications needed.
My question is: is there a better design for implementing this state machine, or is what you see here a reasonable approach?
I would guess your workflow is bound to an "agent": an agent starts a workflow and you want to persist the state of the workflow per agent. I don't know whether an agent can start multiple workflow instances and progress them in parallel, so the suggestion below might need to be adjusted for those cases.
The straightforward approach would be to have an SM instance per agent (and per workflow, if multiple workflow instances are possible).
When an agent starts working with a workflow you must identify if it is a completely new workflow or an existing one in a particular state.
If it is a new workflow, return a new SM in its starting state and send the required event.
If it is an existing workflow, create an SM, feed it the current workflow state, and do the necessary transitions upon SM initialization, before returning it to the service caller. The state should previously have been persisted to a datastore.
I don't know your domain, so the state could be persisted as part of some Workflow entity or Agent entity or something else - depends on the app context.
There are different approaches on who is persisting the state in the DB.
A) The SM can be responsible for this (e.g. upon receiving an event the SM will extract the necessary state information and context (e.g. DB entity ID) from the event and persist it in the DB for that Entity ID and then transition to the next state).
B) A service XYZ that is "orchestrating" the SM can be responsible for it (e.g. service XYZ calls "persist" on another repository service and, if the operation succeeds, sends the necessary event to the SM; the SM then only handles the transition to the next state).
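Option B can be sketched without the framework. Everything here (Repository, StateMachine, the enums) is a hypothetical stand-in showing the orchestration order: persist first, transition only on success:

```java
import java.util.ArrayList;
import java.util.List;

// Framework-free sketch of option B: the orchestrating service persists
// the target state and only sends the event to the SM if the save succeeds.
public class OrchestrationSketch {

    enum State { CREATED, ASSIGNED }
    enum Event { ASSIGN_TO_AGENT }

    static class Repository {
        final List<State> saved = new ArrayList<>();
        boolean persist(State s) { saved.add(s); return true; }   // pretend DB write
    }

    static class StateMachine {
        State current = State.CREATED;
        void sendEvent(Event e) {                                 // transition only
            if (current == State.CREATED && e == Event.ASSIGN_TO_AGENT) {
                current = State.ASSIGNED;
            }
        }
    }

    public static void main(String[] args) {
        Repository repo = new Repository();
        StateMachine sm = new StateMachine();

        // Service XYZ: persist the target state, then drive the SM
        if (repo.persist(State.ASSIGNED)) {
            sm.sendEvent(Event.ASSIGN_TO_AGENT);
        }
        System.out.println(sm.current);
    }
}
```

The design benefit is that the SM never holds unpersisted state: if the DB write fails, no event is sent and the machine stays where it was.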
I want to update a member variable of an object inside my Repository via a LiveData object. The problem is that if I call the getValue() method, I keep getting a NullPointerException, although the value does exist in my Room database.
My question is: how do I get the value from the LiveData object without calling the observe() method? (I am not able to call observe inside my repository, because it requires a LifecycleOwner reference, which is not present there.)
Is there any way to get the value out of the LiveData object?
My architecture looks like that:
ViewModel --> Repository --> Dao
You need to initialize the LiveData object in the ViewModel before observing it in the Activity/Fragment, like this:
ProductViewModel.java
public ProductViewModel(DataRepository repository, int productId) {
    mObservableProduct = repository.loadProduct(productId);
}

public LiveData<ProductEntity> getObservableProduct() {
    return mObservableProduct;
}
Here mObservableProduct is the LiveData for observing product details; it is initialized in the constructor and fetched via the getObservableProduct() method.
Then you can observe the LiveData in the Activity/Fragment like this:
MainActivity.java
productViewModel.getObservableProduct().observe(this, new Observer<ProductEntity>() {
    @Override
    public void onChanged(@Nullable ProductEntity productEntity) {
        mProduct = productEntity;
    }
});
As you have already set up your architecture, the flow of LiveData is
DAO -> Repository -> ViewModel -> Fragment
You don't need to observe the LiveData in the repository, because you cannot update the UI from there. Observe it from the Activity instead and update the UI from there.
As you say getValue() returns null, make sure you are updating and querying the db through a single DAO instance; in my experience, a db update made through one DAO instance will not notify LiveData obtained from a second DAO instance.
You can also use observeForever, as suggested by @Martin Ohlin, but it is not lifecycle-aware and may lead to crashes. Check your requirements before observing forever.
Refer to this for Full LiveData Flow
Refer to this for DAO issues
Edit 1 - Without using LifecycleOwner
You can use void observeForever(Observer<T> observer) (reference) to observe LiveData without providing any LifecycleOwner (which I provided via this in the example above).
This is how you can observe LiveData without providing any LifecycleOwner, in the repository itself:
private void observeForeverProducts() {
    mDatabase.productDao().loadAllProducts().observeForever(new Observer<List<ProductEntity>>() {
        @Override
        public void onChanged(@Nullable List<ProductEntity> productEntities) {
            Log.d(TAG, "onChanged: " + productEntities);
        }
    });
}
But you need to call removeObserver(Observer) explicitly to stop observing the LiveData, which was done automatically in the previous case by the LifecycleOwner. As per the documentation:
You should manually call removeObserver(Observer) to stop observing this LiveData. While LiveData has one of such observers, it will be considered as active.
As this doesn't require a LifecycleOwner, you can call it in the repository without the this parameter that, as you mentioned, is missing there.
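The observeForever/removeObserver contract described above can be modeled in plain Java. MiniLiveData below is a hypothetical stand-in, not the Android class; it only illustrates that updates are delivered (including a replay of the last value) until the observer is removed manually:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Stand-in for androidx LiveData, modeling observeForever/removeObserver.
public class MiniLiveData<T> {
    private final List<Consumer<T>> observers = new ArrayList<>();
    private T value;

    public void observeForever(Consumer<T> observer) {
        observers.add(observer);
        if (value != null) observer.accept(value);   // replay last value, like LiveData
    }

    public void removeObserver(Consumer<T> observer) {
        observers.remove(observer);                   // must be called manually
    }

    public void setValue(T newValue) {
        value = newValue;
        observers.forEach(o -> o.accept(newValue));
    }

    public T getValue() { return value; }

    public static void main(String[] args) {
        MiniLiveData<String> products = new MiniLiveData<>();
        Consumer<String> observer = v -> System.out.println("onChanged: " + v);

        products.observeForever(observer);   // no LifecycleOwner needed
        products.setValue("product-1");      // observer fires
        products.removeObserver(observer);   // stop observing explicitly
        products.setValue("product-2");      // no longer delivered
    }
}
```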
In order for the LiveData object to work well, you need to use the observe method. That is, if you want to use getValue() and expect a non-null response, you need to use the observe method. Make sure to initialize the LiveData object in your ViewModel, as @adityakamble49 said in his answer. To initialize the object, you can pass the reference of your LiveData object created in your Repository:
ViewModel.java
private LiveData<Client> clientLiveData;
private ClientRepository clientRepo;
public ViewModel(ClientRepository clientRepo) {
this.clientRepo = clientRepo;
clientLiveData = clientRepo.getData();
}
Then you have to observe your ViewModel from the Activity and call the method you want to update in your ViewModel (or the Repo; but remember that the Repo connects with the ViewModel and the ViewModel with the UI: https://developer.android.com/jetpack/docs/guide):
Activity.java
viewModel.getClient().observe(this, new Observer<Client>() {
    @Override
    public void onChanged(@Nullable Client client) {
        viewModel.methodWantedInViewModel(client);
    }
});
I hope it helps.
I'm not sure exactly what you are trying to accomplish here, but it is possible to observe without a LifecycleOwner if you use observeForever instead of observe.
LiveData is used to observe data streams. In case you want to get the list of entities held by the LiveData, something like this can be helpful:
public class PoliciesTabActivity extends AppCompatActivity {
private PolicyManualViewModel mViewModel;
private List<PolicyManual> policyManualList;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_leaves_tab_manager);
mViewModel = ViewModelProviders.of(PoliciesTabActivity.this).get(PolicyManualViewModel.class);
//Show loading screen until live data onChanged is triggered
policyManualList = new ArrayList<>();
mViewModel.getAllPolicies().observe(this, new Observer<List<PolicyManual>>() {
@Override
public void onChanged(@Nullable List<PolicyManual> sections) {
//Here you got the live data as a List of Entities
policyManualList = sections;
if (policyManualList != null && policyManualList.size() > 0) {
Toast.makeText(PoliciesTabActivity.this, "Total Policy Entity Found : " + policyManualList.size(), Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(PoliciesTabActivity.this, "No Policy Found.", Toast.LENGTH_SHORT).show();
}
}
});
}
}
One more thing, for others with a similar problem: be aware that LiveData queries execute only if there is a live observer (i.e. a view listening for updates). The LiveData won't fill itself just by sitting there in a declaration like this:
val myLiveData = repository.readSomeLiveData()
So make sure you are observing your LiveData object somewhere, either in a view or through Transformations.