Restore Spring State Machine from Database

I have been following the data persistence examples outlined in the Spring docs.
When I acquire a state machine, it doesn't pull it from the database but checks in memory. I see records being written, but I just can't restore them. Here is my configuration.
I have defined the StateRepository, TransitionRepository, and StateMachineRuntimePersister:
private final StateRepository<? extends RepositoryState> stateRepository;
private final TransitionRepository<? extends RepositoryTransition> transitionRepository;
private final StateMachineRuntimePersister<LoanEventStatus, LoanEventAction, String> stateMachineRuntimePersister;
Here are my beans, based on the examples:
@Bean
public StateMachineModelFactory<String, String> modelFactory() {
    return new RepositoryStateMachineModelFactory(stateRepository, transitionRepository);
}

@Bean
public DefaultStateMachineService<LoanEventStatus, LoanEventAction> stateMachineService(
        final StateMachineFactory<LoanEventStatus, LoanEventAction> stateMachineFactory,
        final StateMachinePersist<LoanEventStatus, LoanEventAction, String> stateMachinePersist) {
    return new DefaultStateMachineService<>(stateMachineFactory, stateMachinePersist);
}

@Bean
public StateMachinePersister<LoanEventStatus, LoanEventAction, String> persister(
        StateMachinePersist<LoanEventStatus, LoanEventAction, String> defaultPersist) {
    return new DefaultStateMachinePersister<>(defaultPersist);
}
I thought restoring them would be as simple as the following:
private final DefaultStateMachineService<LoanEventStatus, LoanEventAction> stateMachineService;
var stateMachine = stateMachineService.acquireStateMachine(id);
However, when I step into the DefaultStateMachineService, I notice that its machines map is empty. Shouldn't it restore them when the application starts up, or am I missing something?

So I realized the problem was with this method:
stateMachineService.hasStateMachine(id)
Looking at the code, it only looks for state machines that are in memory. To get the ones that are in the database, you need to use the JpaRepositoryStateMachine repository:
StateMachineRepository<JpaRepositoryStateMachine> stateMachineRepository;
Then do the following to test if there is a state machine in the database and create one if there is not.
Optional<JpaRepositoryStateMachine> instance = stateMachineRepository.findById(Integer.toString(loanDetailId));
if (instance.isPresent()) {
    StateMachine<LoanEventStatus, LoanEventAction> stateMachine = stateMachineService.acquireStateMachine(id);
    // ...
}
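Worth noting: the StateMachineRuntimePersister declared in the question is never wired into the machine configuration. In the docs' persistence samples it is, and without that wiring acquireStateMachine has no persisted context to restore. A minimal sketch of that wiring (not from the original post), assuming Spring Statemachine 2.x with the JPA persister on the classpath:

@Configuration
@EnableStateMachineFactory
public class StateMachineConfig extends StateMachineConfigurerAdapter<LoanEventStatus, LoanEventAction> {

    @Autowired
    private StateMachineRuntimePersister<LoanEventStatus, LoanEventAction, String> stateMachineRuntimePersister;

    @Override
    public void configure(StateMachineConfigurationConfigurer<LoanEventStatus, LoanEventAction> config) throws Exception {
        // Persist the machine's context on each change so acquireStateMachine(id) can restore it later.
        config.withPersistence().runtimePersister(stateMachineRuntimePersister);
    }
}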

How to mock findByPrincipalName in Spring/Mockito

I am trying to mock findByPrincipalName since in my test context I do not have Redis set up, but I am unable to do so; I get the following error:
The method thenReturn(Map<String,capture#2-of ?>) in the type OngoingStubbing<Map<String,capture#2-of ?>> is not applicable for the arguments (Map<String,capture#3-of ? extends Session>)
I do not really understand what this error is telling me; below is how I am attempting to mock the method:
Map<String, ? extends Session> sessions = new HashMap<>();

@MockBean
private FindByIndexNameSessionRepository<?> sessionRepository;

when(this.sessionRepository.findByPrincipalName(VALID_SUB)).thenReturn(sessions);
What do I need to do to be able to mock this method? The class RedisSession is not accessible, so I cannot create an instance of it to use.
This is not a problem related to mocking, but simply a generic type mismatch. You defined the repository as FindByIndexNameSessionRepository<?>, while your sessions reference type is Map<String, ? extends Session>, so your repository returns ? (2), while you're trying to return an object containing ? extends Session (3). The numbering in the last sentence marks the bounds (?) according to the error you've provided; bounds defined in different places are treated as different type definitions and do not match (read more here).
What you need to do is define types for both the repository and the object it should return so that they match. One way of doing that would be simply sticking to the interface (Session); or, if you wanted to make it more concrete, you could use a generic type definition at the class level (<T extends Session>) and apply it to both the repository and the map.
@MockBean
private FindByIndexNameSessionRepository<Session> sessionRepository;

@Test
void test() {
    Map<String, Session> sessions = new HashMap<>();
    when(sessionRepository.findByPrincipalName(VALID_SUB))
            .thenReturn(sessions);
    // ...
}
class TypedIndexNameSessionTest<T extends Session> {

    @MockBean
    private FindByIndexNameSessionRepository<T> sessionRepository;

    @Test
    void emptySessions() {
        Map<String, T> sessions = new HashMap<>();
        when(sessionRepository.findByPrincipalName(VALID_SUB))
                .thenReturn(sessions);
        // ...
    }
}
I've tested the code locally and pushed it to my GitHub repository - you can see the full example there (all tests pass).

Repository vs. DAO (again)

In general this back-story does not matter, but it helps explain the code below:
The server handles users and user groups. User groups are able to "discover" places; at this point in time these places come exclusively from the Google Places API.
Current Implementation
Currently, I have a lot of JpaRepository objects in my Service layer, which I call Repositories. I am stressing "Repository" because in my proposed solution below, they'd be downgraded to DAOs.
However, what I do not like in my current code, and also the reason for my question here, is the number of repositories one can find in the UserGroupService.
@Service
public class UserGroupService {

    private static final Logger LOGGER = LogManager.getLogger(UserGroupService.class);

    @Autowired
    private UserGroupRepository userGroupRepository;

    @Autowired
    private UserGroupPlaceRepository userGroupPlaceRepository;

    @Autowired
    private PlaceRepository placeRepository;

    @Autowired
    private GooglePlaceRepository googlePlaceRepository;

    @Autowired
    private GooglePlaces googlePlaces;

    public UserGroupService() {
    }

    @Transactional
    public void discoverPlaces(Long groupId) {
        final UserGroup userGroup = this.userGroupRepository.findById(groupId).orElse(null);
        if (userGroup == null) {
            throw new EntityNotFoundException(String.format("User group with id %s not found.", groupId));
        }

        List<PlacesSearchResult> allPlaces = this.googlePlaces.findPlaces(
                userGroup.getLatitude(),
                userGroup.getLongitude(),
                userGroup.getSearchRadius());

        allPlaces.forEach(googlePlaceResult -> {
            GooglePlace googlePlace = this.googlePlaceRepository.findByGooglePlaceId(googlePlaceResult.placeId);
            if (googlePlace != null) {
                return;
            }

            Place place = new Place();
            place.setLatitude(googlePlaceResult.geometry.location.lat);
            place.setLongitude(googlePlaceResult.geometry.location.lng);
            place.setPlaceType(Place.PlaceType.GOOGLE_PLACE);
            place.setName(googlePlaceResult.name);
            place.setVicinity(googlePlaceResult.vicinity);
            place = this.placeRepository.save(place);

            UserGroupPlace.UserGroupPlaceId userGroupPlaceId = new UserGroupPlace.UserGroupPlaceId();
            userGroupPlaceId.setUserGroup(userGroup);
            userGroupPlaceId.setPlace(place);

            UserGroupPlace userGroupPlace = new UserGroupPlace();
            userGroupPlace.setUserGroupPlaceId(userGroupPlaceId);
            this.userGroupPlaceRepository.save(userGroupPlace);

            googlePlace = new GooglePlace();
            googlePlace.setPlace(place);
            googlePlace.setGooglePlaceId(googlePlaceResult.placeId);
            this.googlePlaceRepository.save(googlePlace);
        });
    }
}
A Solution That Does Not Work
What could make this code a lot simpler, and has the potential to resolve this mess, would be @Inheritance:
@Entity
@Table(name = "place")
@Inheritance(strategy = InheritanceType.JOINED)
public class Place { /* .. */ }

@Entity
@Table(name = "google_place")
public class GooglePlace extends Place { /* .. */ }
However, this is not an option, because then I cannot have a PlaceRepository which saves just a Place; Hibernate does not seem to like it.
My proposal
I think my confusion starts with the names that Spring is using. E.g. JpaRepository - I am not so sure this is actually "the right" name, because as far as I understand it, these objects actually work like data access objects (DAOs). I think it should actually look something like this:
public interface PlaceDao extends JpaRepository<Place, Long> {
}

public interface GooglePlaceDao extends JpaRepository<GooglePlace, Long> {
}
@Repository
public class GooglePlaceRepository {

    @Autowired
    private PlaceDao placeDao;

    @Autowired
    private GooglePlaceDao googlePlaceDao;

    public List<GooglePlace> findByGroupId(Long groupId) {
        // ..
    }

    public void save(GooglePlace googlePlace) {
        // ..
    }

    public void saveAll(List<GooglePlace> googlePlaces) {
        // ..
    }
}
@Service
public class UserGroupService {

    @Autowired
    private GooglePlaceRepository googlePlaceRepository;

    @Autowired
    private UserGroupRepository userGroupRepository;

    @Autowired
    private GooglePlaces googlePlaces;

    @Transactional
    public void discoverPlaces(Long groupId) {
        final UserGroup userGroup = this.userGroupRepository.findById(groupId)
                .orElseThrow(() -> new EntityNotFoundException(String.format("User group with id %s not found.", groupId)));

        List<PlacesSearchResult> fetched = this.googlePlaces.findPlaces(
                userGroup.getLatitude(),
                userGroup.getLongitude(),
                userGroup.getSearchRadius());

        // Either do the mapping here or let GooglePlaces return
        // List<GooglePlace> instead of List<PlacesSearchResult>
        List<GooglePlace> places = fetched.stream().map(googlePlaceResult -> {
            GooglePlace googlePlace = this.googlePlaceRepository.findByGooglePlaceId(googlePlaceResult.placeId);
            if (googlePlace != null) {
                return googlePlace;
            }

            Place place = new Place();
            place.setLatitude(googlePlaceResult.geometry.location.lat);
            place.setLongitude(googlePlaceResult.geometry.location.lng);
            place.setPlaceType(Place.PlaceType.GOOGLE_PLACE);
            place.setName(googlePlaceResult.name);
            place.setVicinity(googlePlaceResult.vicinity);

            googlePlace = new GooglePlace();
            googlePlace.setPlace(place);
            googlePlace.setGooglePlaceId(googlePlaceResult.placeId);
            return googlePlace;
        }).collect(Collectors.toList());

        this.googlePlaceRepository.saveAll(places);

        // Add places to group..
    }
}
Summary
I would like to know what I don't see. Am I fighting the framework, or does my data model not make sense, and is that why I find myself struggling? Or do I still misunderstand how the two patterns "Repository" and "DAO" are supposed to be used?
How would one implement this?
I would say you are correct that there are too many repository dependencies in your service. Personally, I try to keep the number of @Autowired dependencies to a minimum, and I try to use a repository in only one service and expose its higher-level functionality via that service. At our company we call that data sovereignty (in German: Datenhoheit); its purpose is to ensure that there is only one place in the application where those entities are modified.
From what I understand of your code, I would introduce a PlacesService which has all the dependencies on the PlaceRepository, GooglePlaceRepository and GooglePlaces. If you feel like Service is not the right name, you could also call it PlacesDao, mark it with a Spring @Component annotation, and inject all the repositories, which are by definition collections of things:
@Component
public class PlacesDao {

    @Autowired
    private PlaceRepository placeRepository;

    @Autowired
    private GooglePlaceRepository googlePlaceRepository;

    // ...
}
This service/DAO could offer an API like findPlacesForGroup(userGroup) and createNewPlace(...), making your loop smaller and more elegant; see the sketch below.
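For illustration, a minimal sketch of what createNewPlace(...) could encapsulate inside such a PlacesDao; the method name comes from the suggestion above, and the body is lifted from the loop in the question:

public GooglePlace createNewPlace(PlacesSearchResult googlePlaceResult) {
    Place place = new Place();
    place.setLatitude(googlePlaceResult.geometry.location.lat);
    place.setLongitude(googlePlaceResult.geometry.location.lng);
    place.setPlaceType(Place.PlaceType.GOOGLE_PLACE);
    place.setName(googlePlaceResult.name);
    place.setVicinity(googlePlaceResult.vicinity);
    place = this.placeRepository.save(place);

    GooglePlace googlePlace = new GooglePlace();
    googlePlace.setPlace(place);
    googlePlace.setGooglePlaceId(googlePlaceResult.placeId);
    this.googlePlaceRepository.save(googlePlace);
    return googlePlace;
}

The service then only talks to this one component instead of four repositories.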
On a side note: you can merge your first four lines into just one, since Java's Optional supports an orElseThrow() method:
UserGroup userGroup = userGroupRepository.findById(groupId).orElseThrow(() ->
        new EntityNotFoundException(String.format("User group with id %s not found.", groupId)));
The forEach does not look like a good approach to me; you're doing way too much for a single responsibility of a function. I would refactor it into a standard for loop.
Place place = new Place();
place.setLatitude(googlePlaceResult.geometry.location.lat);
place.setLongitude(googlePlaceResult.geometry.location.lng);
place.setPlaceType(Place.PlaceType.GOOGLE_PLACE);
place.setName(googlePlaceResult.name);
place.setVicinity(googlePlaceResult.vicinity);
place = this.placeRepository.save(place);
This part can easily be a method in a service.
UserGroupPlace.UserGroupPlaceId userGroupPlaceId = new UserGroupPlace.UserGroupPlaceId();
userGroupPlaceId.setUserGroup(userGroup);
userGroupPlaceId.setPlace(place);

UserGroupPlace userGroupPlace = new UserGroupPlace();
userGroupPlace.setUserGroupPlaceId(userGroupPlaceId);
this.userGroupPlaceRepository.save(userGroupPlace);
That part as well.
googlePlace = new GooglePlace();
googlePlace.setPlace(place);
googlePlace.setGooglePlaceId(googlePlaceResult.placeId);
this.googlePlaceRepository.save(googlePlace);
And this part: I don't understand why you're doing this. You could just update the googlePlace instance you loaded from the repository; Hibernate and the transaction do the rest for you.
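To illustrate that last point, a minimal sketch (the updated field is hypothetical): inside a @Transactional method, the entity returned by the repository is managed, so mutating it is enough and no explicit save() call is needed:

@Transactional
public void renameGooglePlace(String placeId, String newName) {
    GooglePlace googlePlace = this.googlePlaceRepository.findByGooglePlaceId(placeId);
    if (googlePlace != null) {
        // Managed entity: Hibernate's dirty checking flushes this change on commit.
        googlePlace.getPlace().setName(newName);
    }
}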

OData (Olingo) "inhibit" endpoint

My question is about the best way to inhibit an endpoint that is automatically provided by Olingo.
I am playing with a simple app based on Spring Boot and Apache Olingo. In short, this is my servlet registration:
@Configuration
public class CxfServletUtil {

    @Bean
    public ServletRegistrationBean getODataServletRegistrationBean() {
        ServletRegistrationBean odataServletRegistrationBean =
                new ServletRegistrationBean(new CXFNonSpringJaxrsServlet(), "/user.svc/*");
        Map<String, String> initParameters = new HashMap<String, String>();
        initParameters.put("javax.ws.rs.Application", "org.apache.olingo.odata2.core.rest.app.ODataApplication");
        initParameters.put("org.apache.olingo.odata2.service.factory", "com.olingotest.core.CustomODataJPAServiceFactory");
        odataServletRegistrationBean.setInitParameters(initParameters);
        return odataServletRegistrationBean;
    }
    // ...
where my ODataJPAServiceFactory is:
@Component
public class CustomODataJPAServiceFactory extends ODataJPAServiceFactory implements ApplicationContextAware {

    private static ApplicationContext context;
    private static final String PERSISTENCE_UNIT_NAME = "myPersistenceUnit";
    private static final String ENTITY_MANAGER_FACTORY_ID = "entityManagerFactory";

    @Override
    public ODataJPAContext initializeODataJPAContext() throws ODataJPARuntimeException {
        ODataJPAContext oDataJPAContext = this.getODataJPAContext();
        try {
            EntityManagerFactory emf = (EntityManagerFactory) context.getBean(ENTITY_MANAGER_FACTORY_ID);
            oDataJPAContext.setEntityManagerFactory(emf);
            oDataJPAContext.setPersistenceUnitName(PERSISTENCE_UNIT_NAME);
            return oDataJPAContext;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    // ...
My entity is quite simple ...
@Entity
public class User {

    @Id
    private String id;

    @Basic
    private String firstName;

    @Basic
    private String lastName;
    // ....
Olingo is doing its job perfectly and it helps me with the generation of all the endpoints around CRUD operations for my entity.
My question is: how can I "inhibit" some of them? Let's say, for example, that I don't want to enable deleting my entity.
I could try to use a Filter - but this seems a bit harsh. Are there any other, better ways to solve my problem?
Thanks for the help.
As you have said, you could use a filter, but then you are really coupled to the URI schema used by Olingo. Also, things will become complicated when you have multiple, related entity sets (because you could navigate from one to the other, making the URIs more complex).
There are two things that you can do, depending on what you want to achieve:
If you want to have fine-grained control over which operations are allowed or not, you can create a wrapper for the ODataSingleProcessor and throw ODataExceptions where you want to disallow an operation. You can either always throw exceptions (i.e. completely disabling an operation type), or you can use the URI info parameters to obtain the target entity set and decide whether to throw an exception or call the standard single processor. I have used this approach to create a read-only OData service here (basically, I just created an ODataSingleProcessor which delegates some calls to the standard one, plus overrode a method in the service factory to wrap the standard single processor in my wrapper).
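A minimal sketch of such a wrapper, assuming Olingo OData 2 (the class name, which calls to delegate, and the per-entity-set check are illustrative; only delete and read are shown):

public class RestrictedODataSingleProcessor extends ODataSingleProcessor {

    private final ODataSingleProcessor delegate;

    public RestrictedODataSingleProcessor(ODataSingleProcessor delegate) {
        this.delegate = delegate;
    }

    @Override
    public ODataResponse deleteEntity(DeleteUriInfo uriInfo, String contentType) throws ODataException {
        // Completely disallow DELETE; alternatively, inspect
        // uriInfo.getTargetEntitySet() here for a per-entity-set decision.
        throw new ODataNotImplementedException();
    }

    @Override
    public ODataResponse readEntity(GetEntityUriInfo uriInfo, String contentType) throws ODataException {
        // Reads pass through to the standard processor.
        return delegate.readEntity(uriInfo, contentType);
    }
}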
If you want to completely un-expose / ignore a given entity or some properties, then you can use a JPA-EDM mapping model and exclude the desired components. You can find an example of such a mapping here: github. The mapping model is just an XML file which maps the JPA entities / properties to EDM entity types / properties. In order for Olingo to pick it up, you can pass the name of the file to the setJPAEdmMappingModel method of the ODataJPAContext in your initialize method.
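For illustration, hooking such a mapping file in could look like this in the service factory above (the file name jpa-edm-mapping.xml is made up; setJPAEdmMappingModel is the API mentioned above):

@Override
public ODataJPAContext initializeODataJPAContext() throws ODataJPARuntimeException {
    ODataJPAContext oDataJPAContext = this.getODataJPAContext();
    // ... EntityManagerFactory / persistence unit setup as above ...
    // Entities and properties excluded in the mapping file are not exposed in the EDM.
    oDataJPAContext.setJPAEdmMappingModel("jpa-edm-mapping.xml");
    return oDataJPAContext;
}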

Why does Spring Data MongoDB not expose events for update…(…) methods?

It appears that the update…(…) methods on MongoOperations do not trigger the events in AbstractMongoEventListener.
This post indicates that was at least the case in Nov 2014.
Is there currently any way to listen to update events like the one below? This seems to be quite a big omission, if it is the case.
MongoTemplate.updateMulti()
Thanks!
This is no oversight. Events are designed around the lifecycle of a domain object, or a document at least, which means they usually contain an instance of the domain object you're interested in.
Updates, on the other hand, are handled completely in the database, so no documents or domain objects pass through MongoTemplate. Consider this basically the same way JPA @EntityListeners are only triggered for entities that are loaded into the persistence context in the first place, but not when an update query is executed, as the execution of the query happens in the database.
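A small illustration of the distinction (the Order class and the order instance are hypothetical): save passes a domain object through the template, so listener callbacks fire; updateMulti executes entirely in the database, so they don't:

// Goes through the object lifecycle: onBeforeConvert/onBeforeSave/onAfterSave fire.
mongoTemplate.save(order);

// Executed entirely in the database: no AbstractMongoEventListener callbacks fire.
mongoTemplate.updateMulti(
        Query.query(Criteria.where("status").is("OPEN")),
        Update.update("status", "CLOSED"),
        Order.class);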
I know it's too late to answer this question, but I had the same situation with the MongoTemplate.findAndModify method, and the reason I needed events was auditing. Here is what I did.
1. Event publisher (overriding MongoTemplate's findAndModify):
public class CustomMongoTemplate extends MongoTemplate {

    private ApplicationEventPublisher applicationEventPublisher;

    @Autowired
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
        this.applicationEventPublisher = applicationEventPublisher;
    }

    // Default constructor here

    @Override
    public <T> T findAndModify(Query query, Update update, Class<T> entityClass) {
        T result = super.findAndModify(query, update, entityClass);
        // Publish a custom event on findAndModify
        if (result != null && result instanceof Parent) { // all of my domain classes extend Parent
            this.applicationEventPublisher.publishEvent(new AfterFindAndModify(
                    this,
                    ((Parent) result).getId(),
                    result.getClass().toString()));
        }
        return result;
    }
}
2. Application event:
public class AfterFindAndModify extends ApplicationEvent {

    private DocumentAuditLog documentAuditLog;

    public AfterFindAndModify(Object source, String documentId, String documentObject) {
        super(source);
        this.documentAuditLog = new DocumentAuditLog(documentId, documentObject, new Date(), "UPDATE");
    }

    public DocumentAuditLog getDocumentAuditLog() {
        return documentAuditLog;
    }
}
3. Application listener:
public class FindandUpdateMongoEventListner implements ApplicationListener<AfterFindAndModify> {

    @Autowired
    MongoOperations mongoOperations;

    @Override
    public void onApplicationEvent(AfterFindAndModify event) {
        mongoOperations.save(event.getDocumentAuditLog());
    }
}
And then:
@Configuration
@EnableMongoRepositories(basePackages = "my.pkg")
@ComponentScan(basePackages = {"my.pkg"})
public class MongoConfig extends AbstractMongoConfiguration {

    // .....

    @Bean
    public FindandUpdateMongoEventListner findandUpdateMongoEventListner() {
        return new FindandUpdateMongoEventListner();
    }
}
You can listen to database changes via change streams, even changes made completely outside your program (MongoDB 3.6 and newer).
(The code is in Kotlin; the same works in Java.)
@Autowired private lateinit var op: MongoTemplate

@PostConstruct
fun listenOnExternalChanges() {
    Thread {
        // Blocks this thread, iterating over the collection's change stream.
        op.getCollection("Item").watch().forEach {
            // updateDescription is null for non-update events (e.g. inserts).
            val updatedFields = it.updateDescription?.updatedFields
            if (updatedFields?.containsKey("name") == true) {
                println("name changed on a document: ${updatedFields["name"]}")
            }
        }
    }.start()
}
This code only works when replication is enabled. You can enable it even when you have a single node:
Add the following replica set details to mongodb.conf (/etc/mongodb.conf or /usr/local/etc/mongod.conf or C:\Program Files\MongoDB\Server\4.0\bin\mongod.cfg) file
replication:
replSetName: "local"
Restart the mongod service, then open the mongo console and run this command:
rs.initiate()

Spring Boot equivalent to XML multi-database configuration

I would like to port two projects to Spring Boot 1.1.6. They are each part of a larger project. They both need to make SQL connections to 1 of 7 production databases per web request, based on region. One of them persists configuration settings to a Mongo database. They are both functional at the moment, but the SQL configuration is XML-based and the Mongo one is application.properties-based. I'd like to move to either XML or annotations before release, to simplify maintenance.
This is my first try at this forum, so I may need some guidance in that arena as well. I put the multi-database tag on there; most of those questions deal with two connections open at a time. Only one is open here, and only the URL changes; the schema and the rest are the same.
In XML Fashion ...
@Controller
public class CommonController {

    private CommonService CommonService_i;

    @RequestMapping(value = "/rest/Practice/{enterprise_id}", method = RequestMethod.GET)
    public @ResponseBody List<Map<String, Object>> getPracticeList(@PathVariable("enterprise_id") String enterprise_id) {
        CommonService_i = new CommonService(enterprise_id);
        return CommonService_i.getPracticeList();
    }
}

@Service
public class CommonService {

    private ApplicationContext ctx = null;
    private JdbcTemplate template = null;
    private DataSource datasource = null;
    private SimpleJdbcCall jdbcCall = null;

    public CommonService(String enterprise_id) {
        ctx = new ClassPathXmlApplicationContext("database-beans.xml");
        datasource = ctx.getBean(enterprise_id, DataSource.class);
        template = new JdbcTemplate(datasource);
    }
    // ...
}
Each time a request is made, a new instance of the required service is created with the appropriate database connection.
In the Spring Boot world, I've come across one article that extended TomcatDataSourceConfiguration:
http://xantorohara.blogspot.com/2013/11/spring-boot-jdbc-with-multiple.html
That at least allowed me to create a Java configuration class. However, I cannot come up with a way to change the prefix for the ConfigurationProperties per request like I am doing with the XML above. I can set up multiple configuration classes, but the @Qualifier("00002") in the DAO has to be a static value (the compiler reports: "The value for annotation attribute Qualifier.value must be a constant expression").
@Configuration
@ConfigurationProperties(prefix = "Region1")
public class DbConfigR1 extends TomcatDataSourceConfiguration {

    @Bean(name = "dsRegion1")
    public DataSource dataSource() {
        return super.dataSource();
    }

    @Bean(name = "00001")
    public JdbcTemplate jdbcTemplate(DataSource dsRegion1) {
        return new JdbcTemplate(dsRegion1);
    }
}
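For reference, a config class with prefix = "Region1" would bind against prefixed entries like these in application.properties (the keys follow the DataSource properties inherited from TomcatDataSourceConfiguration; the values are placeholders):

Region1.url=jdbc:mysql://region1-host:3306/mydb
Region1.username=dbuser
Region1.password=secret
Region1.driverClassName=com.mysql.jdbc.Driver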
On the Mongo side, I am able to define variables in the ConfigurationProperties class and, if there is a matching entry in the appropriate application.properties file, the value from the file overwrites the default; if not, it uses the value in the code. That does not work on the JDBC side: if you define a variable in your config classes, that value is what is used. (Yeah... I know it says mondoUrl.)
@ConfigurationProperties(prefix = "spring.mongo")
public class MongoConnectionProperties {

    private String mondoURL = "localhost";

    public String getMondoURL() {
        return mondoURL;
    }

    public void setMondoURL(String mondoURL) {
        this.mondoURL = mondoURL;
    }
}
There was a question answered today that got me a little closer: Spring Boot application.properties value not populating. The answer showed me how to at least get @Value to function. With that, I can set up a dbConfigProperties class that grabs the @Value. The only issue is that the value grabbed by @Value is only available when the program first starts. I'm not certain how to use it other than seeing it in the console log at startup. What I do know now is that, at some point in the @Autowired dbConfigProperties class, it does return the appropriate value. By the time I want to use it, though, it returns ${spring.datasource.url} instead of the value.
OK... someone please tell me that @Value is not my only choice. I put the following code in my controller, and I'm able to reliably retrieve one value. Yay. I suppose I could hard-code each possible property name from my properties file as an argument for this function and populate a class, but I'm clearly doing something wrong.
private String url;

// private String propname = "${spring.datasource.url}"; // can't use this

@Value("${spring.datasource.url}")
public void setUrl(String val) {
    this.url = val;
    System.out.println("==== value ==== " + url);
}
This was awesome... finally some progress. I believe I am giving up on changing ConfigurationProperties, and on @Value for that matter. With this answer, I can access the beans created at startup. Y'all were probably wondering why I didn't do that in the first place... still learning. I'm bumping him up; that saved my bacon: https://stackoverflow.com/a/24595685/4028704
The plan now is to create a JdbcTemplate-producing bean for each of the regions, like this:
@Configuration
@ConfigurationProperties(prefix = "Region1")
public class DbConfigR1 extends TomcatDataSourceConfiguration {

    @Bean(name = "dsRegion1")
    public DataSource dataSource() {
        return super.dataSource();
    }

    @Bean(name = "00001")
    public JdbcTemplate jdbcTemplate(DataSource dsRegion1) {
        return new JdbcTemplate(dsRegion1);
    }
}
When I call my service, I'll use something like this:
public AccessBeans(ServletRequest request, String enterprise_id) {
    ctx = RequestContextUtils.getWebApplicationContext(request);
    template = ctx.getBean(enterprise_id, JdbcTemplate.class);
}
Still open to better ways, or insight into foreseeable issues, etc., but this way seems to be about equivalent to my current XML-based approach. Thoughts?
