We are considering using Spring State Machine for the following use case:
One of our entities (i.e. a JPA entity from our domain model) can be in one of a number of states, and we have millions of such entities (and as many rows in our database).
We are considering using:
org.springframework.statemachine.data.jpa.JpaStateRepository
Should we annotate our domain model classes with JpaRepositoryState and thereby create a dependency between our domain model and Spring State Machine?
What would be an alternative to the above, i.e. one that ensures our JPA entity class is not too tightly coupled to JpaRepositoryState?
What is the mapping/relationship between the state machine's machineId and the JPA entity's @Id?
JpaRepositoryState really doesn't have anything to do with your domain model, as it is our entity class for storing machine configuration in an external repository. Specifically, it is a state representation, and similarly there are entity classes for transitions, actions and guards.
There's no relationship between @Id and machineId. @Id is just a field identifying a row in the database, which is autogenerated if you store entities manually via spring-data. The fields machineId and submachineId are, however, used together so that you can define multiple machines in a repository and then make a substate reference a machine, similarly to how in UML you can define a plain state and then define it to be a reference to a submachine.
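On the coupling question, one common alternative is to keep the state as a plain column on your own entity and synchronize it from the state machine. Below is a minimal sketch, assuming hypothetical Order, OrderStatus, OrderEvent and OrderRepository types (none of these come from Spring Statemachine); it is not an out-of-the-box feature, just one way to keep the domain model free of framework classes.

import javax.persistence.*;
import org.springframework.statemachine.listener.StateMachineListenerAdapter;
import org.springframework.statemachine.state.State;

@Entity
public class Order {

    @Id
    @GeneratedValue
    private Long id;

    // Plain enum column; the domain model has no dependency on Spring Statemachine.
    @Enumerated(EnumType.STRING)
    private OrderStatus status;

    public void setStatus(OrderStatus status) { this.status = status; }
}

// Listener that writes the machine's current state back onto the entity.
class OrderStateListener extends StateMachineListenerAdapter<OrderStatus, OrderEvent> {

    private final OrderRepository orderRepository;
    private final Long orderId;

    OrderStateListener(OrderRepository orderRepository, Long orderId) {
        this.orderRepository = orderRepository;
        this.orderId = orderId;
    }

    @Override
    public void stateChanged(State<OrderStatus, OrderEvent> from, State<OrderStatus, OrderEvent> to) {
        orderRepository.findById(orderId).ifPresent(order -> {
            order.setStatus(to.getId()); // persist the new state on the domain entity
            orderRepository.save(order);
        });
    }
}

How you obtain or build the machine for a given entity (for example from the JPA-backed configuration repositories) is a separate concern; the point of the sketch is only that the entity itself stays a plain JPA class.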
It seems that I'm getting more and more questions related to users' entity classes and how to handle those with a state machine, like in gh453. I don't really have an answer to this right now, as Spring Statemachine was never designed to handle these specific use cases. This doesn't mean that Spring Statemachine will never handle these scenarios; we just don't have anything out of the box right now.
Also, our documentation is lacking on these topics, which is a clear indication that we need to do better in that area.
Related
In Spring MVC we have 3 main categories of objects: Controllers, Services and Repositories.
I'm not able to "categorize" the objects returned by each of these three categories.
For example, the repositories return entities, but what should I call the objects returned by services and controllers?
In a real project I'm developing, I have a repository that returns an extraction from a table, so I get entity objects. In the service, where the logic is, I only need to return some fields, so I need to map the entities to another object model. Later, in the controller, maybe I will need some layer-specific presentation, for example different between a standard computer and mobile, so I need another type of object to map the result of the service.
Each layer has its own postfix in the class name to keep the codebase clean and readable. In most of the projects I have worked on, the naming convention is:
Controller layer
The POJO exposed outside of the application, for example through a REST API, is a DTO (Data Transfer Object), so it is usually under the dto package, and the name is like UserDto.
Service layer
The POJO that handles the application's business logic is called a domain object, so it is usually under the domain package, and the name is like User, without any postfix.
Repository layer
The POJO that holds the data in the persistence layer is called an entity, so it is usually under the jpa package, and the name is like UserJpa.
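As a small illustration of the three flavours of "User" object under that convention (the field names here are made up for the example):

// Repository layer: the persistent entity.
@Entity
public class UserJpa {
    @Id
    private Long id;
    private String email;
    private String passwordHash;
    // getters/setters omitted
}

// Service layer: the domain object the business logic works with.
public class User {
    private Long id;
    private String email;
    // getters/setters omitted
}

// Controller layer: the DTO exposed over the REST API.
public class UserDto {
    private String email;
    // getters/setters omitted
}

The service maps between the entity and the domain object, and the controller maps the domain object to the DTO before returning it.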
I have a controller with a method like
@PostMapping(value = "/{reader}")
public String addToReadingList(@PathVariable("reader") String reader, Book book) {
    book.setReader(reader);
    readingListRepository.save(book);
    return "redirect:/readingList/{reader}";
}
When I run a static code analysis with SonarQube, I get a vulnerability report stating:
Replace this persistent entity with a simple POJO or DTO object
But if I use a DTO (which has exactly the same fields as the entity class), then I get another error:
1 duplicated blocks of code must be removed
What should be the right solution?
Thanks in advance.
Enric
You should build a new, separate class which represents your entity ("Book") as a Plain Old Java Object (POJO) or Data Transfer Object (DTO). If you use JSF or another stateful technology, this rule is important. If your entity is stateful, there might be open JPA sessions etc. which may modify your database (e.g. if you call a setter in JSF on a stateful bean).
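As a hedged sketch of what that looks like for the controller in the question (assuming the Book entity also has setTitle/setAuthor, which the question doesn't show):

public class BookDto {
    private String title;
    private String author;
    // getters/setters omitted
}

@PostMapping(value = "/{reader}")
public String addToReadingList(@PathVariable("reader") String reader, BookDto bookDto) {
    Book book = new Book();              // the persistent entity stays internal
    book.setTitle(bookDto.getTitle());   // copy only the fields a client may set
    book.setAuthor(bookDto.getAuthor());
    book.setReader(reader);
    readingListRepository.save(book);
    return "redirect:/readingList/{reader}";
}

The small amount of field duplication is usually acceptable, or the mapping can be delegated to a mapping library.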
For my projects I ignore this Sonar rule for two reasons:
I always use REST, and REST will map my Java class into JSON, which can be seen as a DTO.
REST is stateless (no server session), so no database transaction will be open after the transformation to JSON.
Information obtained from the SonarSource official documentation:
On one side, Spring MVC automatically binds request parameters to beans declared as arguments of methods annotated with @RequestMapping. Because of this automatic binding feature, it's possible to feed some unexpected fields on the arguments of the @RequestMapping annotated methods.
On the other end, persistent objects (@Entity or @Document) are linked to the underlying database and updated automatically by a persistence framework, such as Hibernate, JPA or Spring Data MongoDB.
These two facts combined together can lead to a malicious attack: if a persistent object is used as an argument of a method annotated with @RequestMapping, it's possible, from specially crafted user input, to change the content of unexpected fields in the database.
For this reason, using @Entity or @Document objects as arguments of methods annotated with @RequestMapping should be avoided.
In addition to @RequestMapping, this rule also considers the annotations introduced in Spring Framework 4.3: @GetMapping, @PostMapping, @PutMapping, @DeleteMapping, @PatchMapping.
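If you do keep the entity as a handler argument, one common mitigation is to tell Spring which fields it may not bind. A minimal sketch (the field names are only an illustration):

import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.InitBinder;

@InitBinder
public void initBinder(WebDataBinder binder) {
    // block request parameters from being bound onto sensitive fields
    binder.setDisallowedFields("id", "reader");
}

That said, the DTO approach shown above is still the more direct way to satisfy the rule.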
See More Here
I'm working through a spring mvc video series and loving it!
https://www.youtube.com/channel/UCcawgWKCyddtpu9PP_Fz-tA/videos
I'd like to learn more about the specifics of the exact architecture being used and am having trouble identifying the proper name - so that I can read further.
For example, I understand that the presentation layer is MVC, but I'm not really sure how you would more specifically describe the pattern to account for the use of service and resource objects, as opposed to choosing to use service, DAO and domain objects.
Any clues to help me better focus my search on understanding the layout below?
application
core
models/entities
services
rest
controllers
resources
resource_assemblers
Edit:
Nathan Hughes comment clarified my confusion with the nomenclature and SirKometa connected the architectural dots that I was not grasping. Thanks guys.
As far as I can tell, the layout you have mentioned represents an application which communicates with the world through REST services.
The core package represents all the classes (domain, services, repositories) which are not related to the view.
model package - Assuming you are aiming for a typical application, you have a model/domain/entity package which represents your data. For example: https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/models/entities/Account.java.
repository package - Since you are using Spring, you will most likely also use spring-data or even spring-data-jpa with Hibernate as your ORM library. That will most likely lead you to use Repository interfaces (the author of the videos you watch decided, for some reason, not to use them). Either way, this will be your layer for accessing the database, for example: https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/repositories/jpa/JpaAccountRepo.java
service package will be your package for manipulating data. It's not the best example, but this layer doesn't access your database directly; it uses repositories to do that. It might also do other things - it is your API for manipulating data in your application. Let's say you want to run a fancy calculation on your wallet before you save it to the DB, or, like here https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/services/impl/AccountServiceImpl.java, you want to make sure that the blog you try to create doesn't already exist.
controllers package contains all the classes which will be used by the DispatcherServlet to take care of requests. You read the "input" from the request, process it (use your services here) and send your responses.
resource_assemblers package in this case is framework specific (Spring HATEOAS). As far as I can tell it's just a DTO for your JSON responses (for example, you might want to store a password in your Account, but exposing it through JSON wouldn't be a good idea, which would happen if you didn't use a DTO).
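To make the repository and service layers concrete, here is a rough sketch in the spirit of that layout; Account comes from the linked example (assuming it has a name property), while the repository interface and the findByName rule are only illustrative:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;

public interface AccountRepository extends JpaRepository<Account, Long> {
    Account findByName(String name); // query derived from the method name
}

@Service
public class AccountService {

    @Autowired
    private AccountRepository accountRepository;

    public Account createAccount(Account account) {
        // business rule lives in the service layer, not in the controller
        if (accountRepository.findByName(account.getName()) != null) {
            throw new IllegalStateException("account already exists");
        }
        return accountRepository.save(account);
    }
}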
Please let me know if that is the answer you were looking for.
This question may be of interest to you as well as this explanation.
You are mostly talking about the same things in each case; Spring just uses annotations so that when it scans them it knows what type of object you are creating or instantiating.
Basically, every request flows through a controller annotated with @Controller. Each method processes the request and (if needed) calls a specific service class to handle the business logic. These classes are annotated with @Service. The controller can get hold of these classes by autowiring them with @Autowired or resourcing them with @Resource.
@Controller
@RequestMapping("/")
public class MyController {

    @Resource
    private MyServiceLayer myServiceLayer;

    @RequestMapping("/retrieveMain")
    @ResponseBody // return the data itself rather than a view name
    public List<String> retrieveMain() {
        List<String> listOfSomething = myServiceLayer.getListOfSomethings();
        return listOfSomething;
    }
}
The service classes then perform their business logic and, if needed, retrieve data from a repository class annotated with @Repository. The service layer gets hold of these classes the same way, either by autowiring them with @Autowired or resourcing them with @Resource.
@Service
public class MyServiceLayer implements MyServiceLayerService {

    @Resource
    private MyDaoLayer myDaoLayer;

    public List<String> getListOfSomethings() {
        List<String> listOfSomething = myDaoLayer.getListOfSomethings();
        // business logic goes here
        return listOfSomething;
    }
}
The repository classes make up the DAO; Spring uses the @Repository annotation on them. The entities are the individual class objects that are handled by the @Repository layer.
@Repository
public class MyDaoLayer implements MyDaoLayerInterface {

    @Resource
    private JdbcTemplate jdbcTemplate;

    public List<String> getListOfSomethings() {
        // retrieve the list from the database; the query here is only illustrative,
        // you could equally use a row mapper or object mapper
        List<String> listOfSomething = jdbcTemplate.queryForList("SELECT something FROM somethings", String.class);
        return listOfSomething;
    }
}
@Repository, @Service, and @Controller are specializations of @Component. All of these layers could be annotated with @Component; it's just better to call each what it actually is.
So to answer your question, they mean the same thing; they are just annotated to let Spring know what type of object it is instantiating and/or how to include another class.
I guess the architectural pattern you are looking for is Representational State Transfer (REST). You can read up on it here:
http://en.wikipedia.org/wiki/Representational_state_transfer
Within REST the data passed around is referred to as resources:
Identification of resources:
Individual resources are identified in requests, for example using URIs in web-based REST systems. The resources themselves are conceptually separate from the representations that are returned to the client. For example, the server may send data from its database as HTML, XML or JSON, none of which are the server's internal representation, and it is the same one resource regardless.
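As a small, hedged illustration of that idea in Spring MVC terms (AccountResource, AccountService and findAccount are made-up names): the URI identifies the resource, while the object returned from the controller is just one representation of it, serialized to JSON.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AccountRestController {

    @Autowired
    private AccountService accountService;

    @GetMapping("/rest/accounts/{id}")
    public AccountResource getAccount(@PathVariable Long id) {
        Account account = accountService.findAccount(id); // internal representation
        AccountResource resource = new AccountResource(); // external representation
        resource.setName(account.getName());
        return resource;
    }
}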
I'm new to using Spring with Neo4j and I have a question about @Autowired for a GraphRepository.
Most examples I've seen use one @Autowired repository per controller, but I have two nodes I need to modify at the same time when a particular method is called in the controller. Should I simply @Autowire the repositories for both nodes (e.g. per the code below)? Is there any impact if I do this in a second controller with the same repositories as well (so if I had a ChatSessionController which also @Autowired ChatMessageService and ChatSessionService)?
ChatMessageController.java
@Controller
public class ChatMessageController {

    @Autowired
    private ChatMessageService chatMessageService;

    @Autowired
    private ChatSessionService chatSessionService;

    @RequestMapping(value = "/message/add/{chatSessionId}", method = RequestMethod.POST)
    @ResponseBody
    @Transactional
    public void addMessage(@RequestBody ChatMessagePack chatMessagePack,
                           @PathVariable("chatSessionId") Long chatSessionId) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);
        // TODO: Make some modifications to the ChatSession as well
    }
}
Any help would be much appreciated! I've been googling and looking through Stack Overflow to understand this better, but I haven't found anything yet. Any pointers in the right direction would be great.
Another underlying question is: should I be (and can I be) modifying other nodes in a GraphRepository that handles a particular node? E.g. should the GraphRepository for one node type be able to modify nodes of another type?
Thanks!
I'm not convinced that this is an SO question; it's not really a Neo4j or Spring question either, it is more about the architecture of your application. However, assuming that you understand the downsides of class fan-out, and how to use the @Transactional annotation to achieve what you want, the answer to your question is that it is just fine to have many repositories (Neo4j or otherwise, autowired or otherwise) in your class, and in as many classes as you want.
Neo4j transactions default to isolation level READ_COMMITTED, and if you need anything else you need to add the guards/locks yourself. Nested transactions are considered to be part of the same transaction. The Spring @Transactional annotation relies on proxies, which you should be aware of, as they have implications when calling methods from within the same class.
I would go through this tutorial over at Spring Data and get your head around how real-world vs domain vs node models differ. There will be cases where one repository impacts another node type, but I would think it is often transparent to you (i.e. adding relationships). You can do what you like in each repository using the @Query annotation (their generic nature is largely confined to the built-in CRUD operations and queries derived from finder-method names, see the documentation), and some queries have side effects, but largely you should avoid that.
As you start adding multiple repositories to multiple controllers, I think your code will begin to smell bad, and you should consider encapsulating this business logic off on its own somewhere, neatly unit tested (see the sketch after the links below). I also wouldn't tie myself to one controller per data object; it would be fine to have a single ChatController with a POST /chat/ to create a new session and a POST /chat/{sessionId} to add a message. Interesting questions on Programmers:
How accurate is "Business logic should be in a service, not in a model?"
Best Practices for MVC Architecture
MVC Architecture — How many Controllers do I need?
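As a hedged sketch of that encapsulation (the findOne/addMessage calls are hypothetical; use whatever your ChatSessionService actually exposes):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ChatService {

    @Autowired
    private ChatMessageService chatMessageService;

    @Autowired
    private ChatSessionService chatSessionService;

    @Transactional // both writes succeed or fail together
    public void addMessage(Long chatSessionId, ChatMessagePack chatMessagePack) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);

        ChatSession chatSession = chatSessionService.findOne(chatSessionId); // hypothetical lookup
        chatSession.addMessage(chatMessage);                                 // hypothetical domain method
        chatSessionService.save(chatSession);
    }
}

The controller then only delegates to chatService.addMessage(...) and stays free of repository wiring.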
I would like to know whether it is valid practice to use "new" in Spring to create an object.
You can either use xml->bean to create an object via an XML file, or use annotations to create an object.
This question arises because I am working on a project where I want to create an object of a property class (which contains properties and setter/getter methods for those properties).
I am able to create the object using new and it works fine, but if Spring has the capability to create and manage the object lifecycle, which way should I go to create the object, and why?
I think the confusion may arise because of the (over)use of Spring as a DI mechanism. Spring is a framework providing many services. Bean or dependency injection is just one of those.
I would say that for POJOs which have just setters and getters without much logic in them, you can safely create objects using the new keyword. For example, in the case of value objects and data classes which do not have much configuration or life-cycle events to worry about, go ahead and create those using the new keyword. If you repeatedly create these objects and they have fields which do not change often, then I would use Spring, because it will reduce some of the repetitive code, and object creation can be considered externalized or separated from your object's usage.
Classes instantiated via a Spring bean definition (XML or annotations) are basically 'Spring-managed' beans, which mostly means that their life cycle, scope, etc. are managed by Spring. Spring manages objects which are beans, which may have life-cycle methods and APIs. These beans are dependencies of the classes into which they are set. The parent objects call some API of these dependencies to fulfil business cases.
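A short sketch of the split (PriceDetails and Order are made-up value classes): collaborating beans are injected, while plain data objects are simply created with new.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class PriceCalculator { // Spring-managed: lifecycle, scope and injection handled by the container

    public PriceDetails calculate(Order order) {
        PriceDetails details = new PriceDetails(); // plain data holder: just create it yourself
        details.setTotal(order.getAmount());
        return details;
    }
}

@Service
public class CheckoutService {

    @Autowired
    private PriceCalculator priceCalculator; // dependency: let Spring provide it

    public PriceDetails checkout(Order order) {
        return priceCalculator.calculate(order);
    }
}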
Hope this helps.
The dependency injection concept in Spring is more useful when we need to construct an object that depends on many other objects, because it saves you the time and effort of constructing and instantiating the dependent objects.
In your case, since it's a POJO class with only setters and getters, I think it is absolutely safe to instantiate it using the new keyword.