Spring Statemachine: how to persist a machine with nested regions

I have a state machine with the configuration shown at the end, which I want to persist in the database. I am following this tutorial https://docs.spring.io/spring-statemachine/docs/3.1.0/reference/#statemachine-examples-datajpamultipersist to persist it.
However, when my state machine is in the PARALLEL_TASKS state, I see only one row in the database.
Isn't it supposed to show 3 rows (1 for the parent state PARALLEL_TASKS and 2 for the sub-states UNLOCKING_EXCESSIVE_POINTS_STARTED and PROCESSING_PAYMENT_STARTED)?
Can someone please tell me how to fix this? What is wrong with my configuration?
@Configuration
@EnableStateMachineFactory(name = "SampleConfig")
@Qualifier("SampleConfig")
public class SampleConfig extends EnumStateMachineConfigurerAdapter<OrderState, OrderEvent> {

    @Autowired
    private JpaPersistingStateMachineInterceptor<OrderState, OrderEvent, String> persister;

    @Override
    public void configure(StateMachineStateConfigurer<OrderState, OrderEvent> states) throws Exception {
        states
            .withStates()
                .initial(OrderState.ORDER_CREATED)
                .state(OrderState.ORDER_CREATED)
                .state(OrderState.PARALLEL_TASKS)
                .end(OrderState.ORDER_COMPLETED)
                .and()
            .withStates()
                .parent(OrderState.PARALLEL_TASKS)
                .region("R1")
                .initial(OrderState.UNLOCKING_EXCESSIVE_POINTS_STARTED)
                .state(OrderState.UNLOCKING_EXCESSIVE_POINTS_STARTED)
                .state(OrderState.UNLOCKED_EXCESSIVE_POINTS)
                .and()
            .withStates()
                .parent(OrderState.PARALLEL_TASKS)
                .region("R2")
                .initial(OrderState.PROCESSING_PAYMENT_STARTED)
                .state(OrderState.PROCESSING_PAYMENT_STARTED)
                .state(OrderState.PROCESSED_PAYMENT);
    }

    @Override
    public void configure(StateMachineTransitionConfigurer<OrderState, OrderEvent> transitions) throws Exception {
        transitions
            .withExternal()
                .source(OrderState.ORDER_CREATED)
                .target(OrderState.PARALLEL_TASKS)
                .event(OrderEvent.ORDER_SUBMITTED_EVENT)
                .and()
            .withExternal()
                .source(OrderState.UNLOCKING_EXCESSIVE_POINTS_STARTED)
                .target(OrderState.UNLOCKED_EXCESSIVE_POINTS)
                .event(OrderEvent.UNLOCKED_POINTS_SUCCESS)
                .and()
            .withExternal()
                .source(OrderState.PROCESSING_PAYMENT_STARTED)
                .target(OrderState.PROCESSED_PAYMENT)
                .event(OrderEvent.PAYMENT_PROCESSED_SUCCESS);
    }

    @Override
    public void configure(StateMachineConfigurationConfigurer<OrderState, OrderEvent> config) throws Exception {
        config.withConfiguration()
                .autoStartup(false)
                .regionExecutionPolicy(RegionExecutionPolicy.PARALLEL)
                .and()
            .withPersistence()
                .runtimePersister(persister);
    }
}
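
For reference, the persister autowired above is typically provided by a bean like the one in the referenced datajpamultipersist sample. A minimal sketch, assuming the spring-statemachine-data-jpa classes from that tutorial (the configuration class and bean names here are illustrative, not from the question):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.statemachine.data.jpa.JpaPersistingStateMachineInterceptor;
import org.springframework.statemachine.data.jpa.JpaStateMachineRepository;

@Configuration
public class StateMachinePersistConfig {

    // JPA-backed runtime persister, wired as in the datajpamultipersist sample;
    // the return type matches the field autowired in SampleConfig above.
    @Bean
    public JpaPersistingStateMachineInterceptor<OrderState, OrderEvent, String> persister(
            JpaStateMachineRepository jpaStateMachineRepository) {
        return new JpaPersistingStateMachineInterceptor<>(jpaStateMachineRepository);
    }
}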

Ok, I really don't like to say this because I am a big fan of Spring State Machine, but to tell you the truth, persistence is not one of its strong points. Most of the time it gives me the feeling it was done just to be able to say 'look, we have persistence capabilities'.
State machines can be quite complex (look at the UML specification), and I don't believe the persistence layer is designed to deal with all of these complexities.
If you are using Spring State Machine for modelling shopping carts, bookings, etc., let me ask what your (Spring State Machine's) strategy would be when your business case evolves: what happens when you have to restore a state machine persisted with the previous release, and in the current release your business has evolved and you have a completely different state machine design?
I found that the following framework deals much better with the problems mentioned above: Akka Finite State Machine, with its event/schema evolution capabilities. It also has a separate persistence layer over which you have much better control, so you will not suffer scenarios like the one you mention above.
Now, what you describe above looks like a bug in Spring State Machine and you might get a fix for it, but I strongly advise you to check the Akka framework before you invest more of your project's resources in Spring State Machine. Spring State Machine is awesome for some problems, but not for everything.
When you read the Akka documentation, you will see that it is written in Scala, and it is more natural to program your state machine in Scala (it is possible to program it in Java, but it gets too clumsy). If you are reluctant to program in Scala, it is possible to program it as a Scala/Java hybrid, as I demonstrated in these blogs: blog1, blog2.
The event/schema evolution capabilities are also demonstrated in blog3.
I know this is not an answer to what you are asking, but as a developer who has probably walked the same road as you, I would like to give a fair warning.

Related

OptaPlanner threads are not getting released in SpringBoot application

We are using the OptaPlanner (8.2.0) library in Spring Boot to solve a knapsack problem using the construction heuristic algorithm.
While running the application, we observed that the threads created by SolverManager are not released even after the problem is solved. Because of that, the performance of the application starts degrading after some time. The solver manager also starts responding slowly due to the increased thread count.
We also tried the latest version (8.17.0), but the issue still persists.
Termination conditions:
<termination>
    <millisecondsSpentLimit>200</millisecondsSpentLimit>
</termination>
optaplanner:
  solver:
    termination:
      best-score-limit: 0hard/*soft
Code:
@Component
@Slf4j
public class SolutionManager {

    private final SolverManager<Solution, String> solutionManager;

    public SolutionManager(SolverManager<Solution, String> solutionManager) {
        this.solutionManager = solutionManager;
    }

    public Solution getSolutionResponse(String solutionId, Solution unsolvedProblem)
            throws InterruptedException, ExecutionException {
        SolverJob<Solution, String> solve = solutionManager.solve(solutionId, unsolvedProblem);
        Solution finalBestSolution = solve.getFinalBestSolution();
        return finalBestSolution;
    }
}
Thread metrics: (chart not included)
I wasn't able to reproduce the problem; after a load represented by solving several datasets in parallel, the number of threads drops back to the same value as before the load started.
The chart you shared doesn't clearly suggest there is a thread leak either; if you take a look at ~12:40 PM and compare it with ~2:00 PM, the number of threads actually did decrease.
Let me also add that the getFinalBestSolution() method actually blocks the calling thread until the solver finishes. If you instead use solve(ProblemId_ problemId, Solution_ problem, Consumer<? super Solution_> finalBestSolutionConsumer), this method returns immediately and the Consumer you provide is called when the solver finishes.
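A minimal sketch of that non-blocking variant, written as an extra method on the SolutionManager class above (the method name and log message are illustrative assumptions; it reuses the solutionManager field and the @Slf4j logger):

    public void solveAsync(String solutionId, Solution unsolvedProblem) {
        // solve(...) returns immediately; the lambda runs on a solver thread
        // once the final best solution is available.
        solutionManager.solve(solutionId, unsolvedProblem,
                finalBestSolution -> log.info("Solver finished for problem {}", solutionId));
    }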
It looks like you might not be using the OptaPlanner Spring Boot starter.
If that's the case, upgrade to a recent version of OptaPlanner and add a dependency on optaplanner-spring-boot-starter. See the Spring quickstart in the docs and the optaplanner-quickstarts repository (in the technology directory) for an example of how to use it.

spring jms - perform action before message received

Is it possible to perform an action before a JMS message is received in Spring Boot? I know I could put it at the very top of my @JmsListener, but I have several listeners and I'd prefer to avoid adding a call to all of them.
I'm trying to use the logging MDC (a ThreadLocal, if you're not familiar with the MDC) to track various things, and I'd like to set some properties before beginning to process the message. I can do this on my controllers with a Filter, but does Spring JMS have the same concept?
I would try to start with a Before or an Around aspect (the latter in case some logic should also run after handling the message):
@Before("@annotation(JmsListener)")
public void handle(JoinPoint joinPoint) { ... }
@Around("@annotation(JmsListener)")
public Object handle(ProceedingJoinPoint joinPoint) throws Throwable { ... }
A couple of links: enabling AspectJ support, before advice, around advice.
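For the MDC use case specifically, here is a hedged sketch of an Around aspect (the aspect class name and the "traceId" MDC key are illustrative assumptions; it assumes spring-boot-starter-aop or @EnableAspectJAutoProxy is in place):

import java.util.UUID;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class JmsListenerMdcAspect {

    // Runs around every @JmsListener method: set MDC entries before the
    // listener body executes and clear them afterwards.
    @Around("@annotation(org.springframework.jms.annotation.JmsListener)")
    public Object withMdc(ProceedingJoinPoint pjp) throws Throwable {
        MDC.put("traceId", UUID.randomUUID().toString()); // illustrative key/value
        try {
            return pjp.proceed();
        } finally {
            MDC.remove("traceId");
        }
    }
}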

How can I improve the performance of the JbossFuse (v6.3) DSL route code?

APPLICATION INFO:
Code below: reads from an IBM MQ queue and then posts the message to a REST service.
(Note: reading from the MQ queue is fast and not an issue; rather, it is the performance of the post operation that I am having trouble improving.)
PROBLEM:
Unable to output/post more than 44-47 messages per second.
QUESTION:
How can I improve the performance of the JbossFuse (v6.3) DSL route code below? (What techniques are available that would make it faster?)
package aaa.bbb.ccc;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.cdi.ContextName;

@ContextName("rest-dsl")
public class Netty4HttpSlowRoutes extends RouteBuilder {

    public Netty4HttpSlowRoutes() {
    }

    private final org.apache.camel.Processor proc1 = new Processor1();

    @Override
    public void configure() throws Exception {
        org.apache.log4j.MDC.put("app.name", "netty4HttpSlow");
        System.getProperties().list(System.out);

        errorHandler(defaultErrorHandler().maximumRedeliveries(3).log("***FAILED_MESSAGE***"));

        from("wmq:queue:mylocalqueue")
            .log("inMessage=" + (null == body() ? "" : body().toString()))
            .to("seda:node1?concurrentConsumers=20");

        from("seda:node1")
            .streamCaching()
            .threads(20)
            .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http4.HttpMethods.POST))
            .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
            .toD("netty4-http:http://localhost:7001/MyService/myServiceThing?textline\\=true");
    }
}
Just a couple of thoughts. First things first: did you measure the slowness? How much time do you spend in Camel vs. how much time do you spend sending the HTTP request?
If the REST service is slow, there's nothing you can do in Camel. Depending on what the service does, you could try reducing the number of threads.
Try disabling streamCaching, since it looks like you're not using it.
Then use to instead of toD to invoke the service; I see that the URL is always the same. In the docs for toD I read:
"By default the Simple language is used to compute the endpoint."
There may be a little overhead from parsing the URI string each time you invoke the route.
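If it helps, a hedged sketch of the second route from the question with both suggestions applied (no stream caching, static to(...) endpoint); everything else stays as in the original configure() method:

        // Static endpoint via to(...) instead of toD(...), and streamCaching() removed.
        from("seda:node1")
            .threads(20)
            .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http4.HttpMethods.POST))
            .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
            .to("netty4-http:http://localhost:7001/MyService/myServiceThing?textline=true");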

Spring Statemachine Forks

I have made good progress with state machines up to now. My most recent problem arose when I wanted to use a fork (I'm using UML). The fork didn't work as it is supposed to, and I think it's because of the persistence. I persist my machine in Redis; refer to the image below.
This is my top-level machine, where Manage-commands is a sub-machine reference and the top region is as it is.
Now say I persisted some state in Redis from the lower region, and next an ONLINE event comes; then the machine does not accept the event, clearly because I have asked the machine to restore the state from Redis with a given key.
But I want both regions to be persisted so that either one is selected according to the event.
Is there any way to achieve this?
Below is how I persist and restore:
private void feedMachine(StateMachine<String, String> stateMachine, String user, GenericMessage<String> event)
        throws Exception {
    stateMachine.sendEvent(event);
    System.out.println("persist machine --- > state :" + stateMachine.getState().toString());
    redisStateMachinePersister.persist(stateMachine, "testprefixSw:" + user);
}

private StateMachine<String, String> resetStateMachineFromStore(StateMachine<String, String> stateMachine,
        String user) throws Exception {
    StateMachine<String, String> machine = redisStateMachinePersister.restore(stateMachine, "testprefixSw:" + user);
    System.out.println("restore machine --- > state :" + machine.getState().toString());
    return machine;
}
It's a bit weird, as I found some other issues with persistence which I fixed in 1.2.x. They are probably not related to your issue, but I would have expected you to see similar errors. Anyway, could you check RedisPersistTests.java and see if there's something different from what you're doing? I didn't yet try sub-machine refs, but they should not make any difference from a persistence point of view.

Should I write unit tests for CRUD operations when I already have integration tests?

In our recent project, Sonar was complaining about weak test coverage. We noticed that it didn't consider integration tests by default. Besides the fact that you can configure Sonar to consider them (JaCoCo plugin), we were discussing in our team whether there really is a need to write unit tests when you cover all your service and database layers with integration tests anyway.
What I mean by integration tests is that all our tests run against a dedicated Oracle instance of the same type we use in production. We don't mock anything. If a service depends on another service, we use the real service. Data we need before running a test, we construct through some factory classes that use our services/repositories (DAOs).
So from my point of view, writing integration tests for simple CRUD operations, especially when using frameworks like Spring Data/Hibernate, is not a big effort. It is sometimes even easier, because you don't have to think about what and how to mock.
So why should I write unit tests for my CRUD operations when they are less reliable than the integration tests I can write?
The only point I see is that the integration tests will take more time to run as the project gets bigger. So you don't want to run them all before check-in. But I am not so sure that is so bad if you have a CI environment with Jenkins/Hudson that will do the job.
So - any opinions or suggestions are highly appreciated!
If most of your services simply pass through to your DAOs, and your DAOs do little but invoke methods on Spring's HibernateTemplate or JdbcTemplate, then you are correct that unit tests don't really prove anything that your integration tests don't already prove. However, having unit tests in place is valuable for all the usual reasons.
Since unit tests only test single classes, run in memory with no disk or network access, and never really test multiple classes working together, they normally go like this:
Service unit tests mock the DAOs.
DAO unit tests mock the database driver (or Spring template) or use an embedded database (super easy in Spring 3).
To unit test a service that just passes through to the DAO, you can mock like so:
@Before
public void setUp() {
    service = new EventServiceImpl();
    dao = mock(EventDao.class);
    service.eventDao = dao;
}

@Test
public void creationDelegatesToDao() {
    service.createEvent(sampleEvent);
    verify(dao).createEvent(sampleEvent);
}

@Test(expected = EventExistsException.class)
public void creationPropagatesExistExceptions() {
    doThrow(new EventExistsException()).when(dao).createEvent(sampleEvent);
    service.createEvent(sampleEvent);
}

@Test
public void updatesDelegateToDao() {
    service.updateEvent(sampleEvent);
    verify(dao).updateEvent(sampleEvent);
}

@Test
public void findingDelegatesToDao() {
    when(dao.findEventById(7)).thenReturn(sampleEvent);
    assertThat(service.findEventById(7), equalTo(sampleEvent));

    service.findEvents("Alice", 1, 5);
    verify(dao).findEventsByName("Alice", 1, 5);

    service.findEvents(null, 10, 50);
    verify(dao).findAllEvents(10, 50);
}

@Test
public void deletionDelegatesToDao() {
    service.deleteEvent(sampleEvent);
    verify(dao).deleteEvent(sampleEvent);
}
But is this really a good idea? These Mockito assertions assert that a DAO method got called, not that it did what was expected! You will get your coverage numbers, but you are more or less binding your tests to an implementation of the DAO. Ouch.
Now, this example assumed the service had no real business logic. Normally services will have business logic in addition to DAO calls, and you surely must test that.
Now, for unit testing DAOs, I like to use an embedded database.
private EmbeddedDatabase database;
private EventDaoJdbcImpl eventDao = new EventDaoJdbcImpl();

@Before
public void setUp() {
    database = new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .addScript("schema.sql")
            .addScript("init.sql")
            .build();
    eventDao.jdbcTemplate = new JdbcTemplate(database);
}

@Test
public void creatingIncrementsSize() {
    Event e = new Event(9, "Company Softball Game");
    int initialCount = eventDao.findNumberOfEvents();
    eventDao.createEvent(e);
    assertThat(eventDao.findNumberOfEvents(), is(initialCount + 1));
}

@Test
public void deletingDecrementsSize() {
    Event e = new Event(1, "Poker Night");
    int initialCount = eventDao.findNumberOfEvents();
    eventDao.deleteEvent(e);
    assertThat(eventDao.findNumberOfEvents(), is(initialCount - 1));
}

@Test
public void createdEventCanBeFound() {
    eventDao.createEvent(new Event(9, "Company Softball Game"));
    Event e = eventDao.findEventById(9);
    assertThat(e.getId(), is(9));
    assertThat(e.getName(), is("Company Softball Game"));
}

@Test
public void updatesToCreatedEventCanBeRead() {
    eventDao.createEvent(new Event(9, "Company Softball Game"));
    Event e = eventDao.findEventById(9);
    e.setName("Cricket Game");
    eventDao.updateEvent(e);
    e = eventDao.findEventById(9);
    assertThat(e.getId(), is(9));
    assertThat(e.getName(), is("Cricket Game"));
}

@Test(expected = EventExistsException.class)
public void creatingDuplicateEventThrowsException() {
    eventDao.createEvent(new Event(1, "Id1WasAlreadyUsed"));
}

@Test(expected = NoSuchEventException.class)
public void updatingNonExistentEventThrowsException() {
    eventDao.updateEvent(new Event(1000, "Unknown"));
}

@Test(expected = NoSuchEventException.class)
public void deletingNonExistentEventThrowsException() {
    eventDao.deleteEvent(new Event(1000, "Unknown"));
}

@Test(expected = NoSuchEventException.class)
public void findingNonExistentEventThrowsException() {
    eventDao.findEventById(1000);
}

@Test
public void countOfInitialDataSetIsAsExpected() {
    assertThat(eventDao.findNumberOfEvents(), is(8));
}
I still call this a unit test even though most people might call it an integration test. The embedded database resides in memory, and it is brought up and torn down when the tests run. But this relies on the embedded database looking the same as the production database. Will that be the case? If not, then all that work was pretty useless. If so, then, as you say, these tests aren't doing anything different from the integration tests. But I can run them on demand with mvn test, and I have the confidence to refactor.
Therefore, I write these unit tests anyway and meet my coverage targets. When I write integration tests, I assert that an HTTP request returns the expected HTTP response. Yes, that subsumes the unit tests, but hey, when you practice TDD you have those unit tests written before your actual DAO implementation anyway.
If you write unit tests after your DAO, then of course they are no fun to write. The TDD literature is full of warnings about how writing tests after your code feels like make-work and no one wants to do it.
TL;DR: Your integration tests will subsume your unit tests, and in that sense the unit tests are not adding real testing value. However, when you have a high-coverage unit test suite, you have the confidence to refactor. Of course, if the DAO is trivially calling Spring's data access template, then you might not be refactoring. But you never know. And finally, if the unit tests are written first in TDD style, you are going to have them anyway.
You only really need to unit test each layer in isolation if you plan to have the layers exposed to other components outside your project. For a web app, the only way the repository layer can be invoked is by the service layer, and the only way the service layer can be invoked is by the controller layer. So testing can start and end at the controller layer. Background tasks are invoked in the service layer, so they need to be tested there.
Testing with a real database is pretty fast these days, so it doesn't slow your tests down too much if you design your setup/teardown well. However, if there are any other dependencies that could be slow or problematic, then these should be mocked/stubbed.
This approach will give you:
good coverage
realistic tests
a minimum amount of effort
a minimum amount of refactoring effort
However, testing layers in isolation does allow your team to work more concurrently, so one dev can do the repository and another can do the service for one piece of functionality, and produce independently tested work.
There will always be double coverage when Selenium/functional tests are incorporated, as you can't rely on these alone because they are too slow to run. However, functional tests don't necessarily need to cover all the code; core functionality alone can be sufficient, as long as the code has been covered by unit/integration tests.
I think there are two advantages of having finer-grained tests (I intentionally won't use the word unit test here) besides the high-end integration tests.
1) Redundancy: having the layers covered in more than one place acts like a switch. If one set of tests (the integration tests, for example) fails to locate the error, the second layer may catch it. I will draw a comparison here with electrical switches, where redundancy is a must: you have a main switch and a specialized switch.
2) Let's suppose that you have a process calling an external service. For one reason or another (a bug), the original exception gets consumed, and an exception that does not carry information about the technical nature of the error reaches the integration test. The integration test will catch the error, but you will have no clue what the error is or where it is coming from. Having a finer-grained test in place increases the chance of pointing in the correct direction as to what exactly has failed and where.
I personally think that a certain level of redundancy in testing is not a bad thing.
In your particular case, if you write a CRUD test with an in-memory database, you will have the chance to test your Hibernate mapping layer, which can be quite complex if you are using things like cascading or fetching strategies.
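As an illustration of the kind of mapping detail such a test would exercise, here is a hedged sketch of a cascading, lazily fetched association (the entity and field names are illustrative, not from the question):

import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class Purchase {

    @Id
    @GeneratedValue
    private Long id;

    // Cascading and lazy fetching are exactly the behaviour an in-memory CRUD
    // test can verify: e.g. that saving a Purchase also saves its items and
    // that removing an item from the list deletes the orphaned row.
    @OneToMany(mappedBy = "purchase", cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.LAZY)
    private List<PurchaseItem> items = new ArrayList<>();
}

@Entity
class PurchaseItem {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Purchase purchase;
}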
