Given the complexities involved in JPA, we are planning to use Spring Data JDBC for our entities because of its simplicity. Below is a sample of the structure; we have up to six child entities. We are able to successfully insert data into these entities with the proper foreign key mappings.
Challenge: we have a workflow process outside of this application that periodically updates the requestStatus field in the Request entity, and this is the only field that gets updated after the Request is created. With Spring Data JDBC, an update deletes all referenced entities and re-inserts them, which is a heavy operation with six child entities. Is there a workaround, or a suggestion for how to handle this scenario?
#Table("Request")
public class Request {
private String requestId; // generated in the Before Save Listener .
private String requestStatus;
#Column("requestId")
private ChildEntity1 childEntity1;
public void addChildEntity1(ChildEntity1 childEntityobj) {
this.childEntity1 = childEntityobj;
}
}
#Table("Child_Entity1")
public class ChildEntity1 {
private String entity1Id; // Auto increment on DB
private String name;
private String SSN;
private String requestId;
#MappedCollection(column = "entity1Id", keyColumn = "entity2Id")
private ArrayList<ChildEntity2> childEntity2List = new ArrayList<ChildEntity2>();
#MappedCollection(column = "entity1Id", keyColumn = "entity3Id")
private ArrayList<ChildEntity3> childEntity3List = new ArrayList<ChildEntity3>();
public void addChildEntity2(ChildEntity2 childEntity2obj) {
childEntity2List.add(childEntity2obj);
}
public void addChildEntity3(ChildEntity3 childEntity3obj) {
childEntity3List.add(childEntity3obj);
}
}
#Table("Child_Entity2")
public class ChildEntity2 {
private String entity2Id; // Auto increment on DB
private String partyTypeCode;
private String requestId;
}
@Table("Child_Entity3")
public class ChildEntity3 {

    private String entity3Id; // auto-increment on DB
    private String phoneCode;
    private String requestId;
}
@Test
public void createAndSaveRequest() {
    Request newRequest = createRequest(); // using a builder to build the object
    newRequest.addChildEntity1(createChildEntity1());
    newRequest.getChildEntity1().addChildEntity2(createChildEntity2());
    newRequest.getChildEntity1().addChildEntity3(createChildEntity3());
    requestRepository.save(newRequest);
}
The approach you describe in your comment, a dedicated method that performs exactly that update statement, is the right way to do this.
You should be aware, though, that this ignores optimistic locking.
So there is a risk that the following might happen:
Thread/Session 1: reads an aggregate.
Thread/Session 2: updates a single field as per your question.
Thread/Session 1: writes the aggregate, possibly with other changes, overwriting the change made by Session 2.
To avoid this or similar problems you need to:
check that the version of the aggregate root is unchanged from when you loaded it, in order to guarantee that the method doesn't write conflicting changes;
increment the version, in order to guarantee that nothing else overwrites the changes made by this method.
This might mean that you need two or more SQL statements, which probably means you have to fall back even further to a fully custom method where you implement this yourself, probably using an injected JdbcTemplate.
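A minimal sketch of such a method, assuming a numeric version column on Request and snake_case column names (both assumptions; adjust to your schema). The UPDATE touches only the status and the version, and succeeds only if the caller's version still matches:
import org.springframework.data.jdbc.repository.query.Modifying;
import org.springframework.data.jdbc.repository.query.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface RequestRepository extends CrudRepository<Request, String> {

    // Returns true if a row was updated, i.e. the expected version still matched.
    @Modifying
    @Query("UPDATE request SET request_status = :status, version = version + 1 "
            + "WHERE request_id = :id AND version = :expectedVersion")
    boolean updateStatus(@Param("id") String id,
                         @Param("status") String status,
                         @Param("expectedVersion") long expectedVersion);
}
A false return value tells the caller that a concurrent change happened, so it can reload and retry, or fail.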
I'm new to jOOQ and Spring caching and am using version 3.10.6. I'm trying to cache a query result so I don't have to go to the database every time. Fetching the query result works fine, but when the query is executed again it hits the cache, which contains duplicate records in the inner lists. And every time the call falls through to the cache, the duplicates grow in number. I could use a Set instead of a List, but I want to understand why the duplication occurs.
Here is my JooqRepo method
@Cacheable(CachingConfig.OPERATORS)
public List<MyDto> getAllOperatorsWithAliases() {
    return create.select(Tables.MY_TABLE.ID)
            .select(Tables.MY_TABLE.NAME)
            .select(Tables.MY_INNER_TABLE.ID)
            .select(Tables.MY_INNER_TABLE.ALIAS)
            .select(Tables.MY_INNER_TABLE.PARENT_ID)
            .select(Tables.MY_INNER_TABLE.IS_MAIN)
            .from(Tables.MY_TABLE)
            .join(Tables.MY_INNER_TABLE)
            .on(Tables.MY_TABLE.ID.eq(Tables.MY_INNER_TABLE.PARENT_ID))
            .fetch(this::createMyDtoFromRecord);
}
private MyDto createMyDtoFromRecord(Record record) {
    MyInnerDto myInnerDto = new MyInnerDto();
    myInnerDto.setId(record.get(Tables.MY_INNER_TABLE.ID));
    myInnerDto.setAlias(record.get(Tables.MY_INNER_TABLE.ALIAS));
    myInnerDto.setParentId(record.get(Tables.MY_INNER_TABLE.PARENT_ID));
    myInnerDto.setIsMain(record.get(Tables.MY_INNER_TABLE.IS_MAIN) == 1);

    MyDto myDto = new MyDto();
    myDto.setId(record.get(Tables.MY_TABLE.ID));
    myDto.setName(record.get(Tables.MY_TABLE.NAME));
    myDto.setInnerDtos(Collections.singletonList(myInnerDto));
    return myDto;
}
and here are the DTOs:
@Data
public class MyDto {
    private Long id;
    private String name;
    private List<MyInnerDto> innerDtos;
}

@Data
public class MyInnerDto {
    private Long id;
    private String alias;
    private Long parentId;
    private Boolean isMain;
}
On the first call, MyDto1 has an innerDtos list of size 1, and with each call that hits the cache this number goes up by 3. I think that is because the query returns three parent DTOs.
I've tried adding @EqualsAndHashCode to these DTOs, but with it the query returns an empty list.
I'm sorry if this was asked before, but I couldn't find it.
I found the problem, and it was not related to jOOQ; it was about @Cacheable and the use of in-memory caches.
I was using an in-memory cache and removing the duplicates in the service layer by putting the query results into a Map<Long, MyDto> to collect the MyInnerDtos under the same id. The problem is that an in-memory cache returns the cached object itself, whereas a cache like Redis returns a copy of it. So when I changed the returned object, I changed it directly inside the cache as well, hence the duplication issue.
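A minimal, self-contained illustration of that reference semantics (hypothetical names, nothing to do with the actual caching code):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InMemoryCacheDemo {
    public static void main(String[] args) {
        // a plain map standing in for an in-memory cache store
        Map<String, List<String>> cache = new HashMap<>();
        cache.put("operators", new ArrayList<>(List.of("op1")));

        // a "cache hit" hands back the stored instance itself...
        List<String> fromCache = cache.get("operators");
        fromCache.add("op1-again");

        // ...so the mutation is visible inside the cache as well
        System.out.println(cache.get("operators")); // prints [op1, op1-again]
    }
}
A serializing cache like Redis would have returned a copy, so the same mutation would have left the cached value untouched.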
To get rid of this problem, here's the revised version of the query:
@Cacheable(CachingConfig.OPERATORS)
public List<MyDto> getAllOperatorsWithAliases() {
    Map<MyDto, List<MyInnerDto>> result = create.select(Tables.MY_TABLE.ID)
            .select(Tables.MY_TABLE.NAME)
            .select(Tables.MY_INNER_TABLE.ID)
            .select(Tables.MY_INNER_TABLE.ALIAS)
            .select(Tables.MY_INNER_TABLE.PARENT_ID)
            .select(Tables.MY_INNER_TABLE.IS_MAIN)
            .from(Tables.MY_TABLE)
            .join(Tables.MY_INNER_TABLE)
            .on(Tables.MY_TABLE.ID.eq(Tables.MY_INNER_TABLE.PARENT_ID))
            .fetchGroups(
                    r -> r.into(Tables.MY_TABLE).into(MyDto.class),
                    r -> r.into(Tables.MY_INNER_TABLE).into(MyInnerDto.class)
            );
    result.forEach(MyDto::setInnerDtos);
    return new ArrayList<>(result.keySet());
}
I have a Spring app that pushes data into an S3 bucket.
public class Ebook implements Serializable {

    @Column(name = "cover_path", unique = true, nullable = true)
    private String coverPath;

    private String coverDownloadUrl;

    @Value("${aws.cloudfront.region}")
    private String awsCloudFrontDns;

    @PostLoad
    public void init() {
        // I want to access the property here
        this.coverDownloadUrl = "https://" + awsCloudFrontDns + "/" + coverPath;
    }
}
When data is pushed, let's say my cover here, I get the key 1/test-folder/mycover.jpg, which is the important part of the future HTTP URL of the data.
When I read the data from the database, I enter the @PostLoad method, and I want to construct the complete URL using the CloudFront value. This value changes frequently, so we don't want to store it permanently in the database.
How can I construct the full path just after reading the data from the database?
Is the only way to do this to use a service that updates the data after reading it through the repository? For reading by id that can be a good solution, but for reading lists or using other JPA methods it won't work, because I would have to create a dedicated update service each time.
It doesn't look good for an entity to depend on a property.
How about an EntityListener?
@Component
public class EbookEntityListener {

    @Value("${aws.cloudfront.region}")
    private String awsCloudFrontDns;

    @PostLoad
    void postLoad(Ebook entity) {
        entity.updateDns(awsCloudFrontDns);
    }
}
I recommend trying it this way :)
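A hedged sketch of the wiring this assumes (the updateDns method and the listener registration are not shown in the question, so treat both as assumptions): the entity opts in via @EntityListeners, and the listener must be instantiated by Spring rather than by the JPA provider for the @Value injection to work; recent Spring Boot versions arrange this automatically through Hibernate's SpringBeanContainer integration.
@Entity
@EntityListeners(EbookEntityListener.class)
public class Ebook implements Serializable {

    // ... fields as above ...

    // assumed helper invoked by the listener after load
    public void updateDns(String dns) {
        this.coverDownloadUrl = "https://" + dns + "/" + coverPath;
    }
}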
I need to store normalized (i.e. without special characters etc.) variants of some of the String fields of some entities.
An example:
@Entity
public class Car {

    @Id
    private Long id;

    private String make;
    private String model;

    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    @JoinColumn(name = "CAR_ID")
    private Set<NormalizedField> normalizedFields = new HashSet<>();

    private void createNormalizedFields() {
        Set<NormalizedField> normalized = normalize(this);
        this.normalizedFields.clear();
        this.normalizedFields.addAll(normalized);
    }

    // I would use this approach, but it doesn't allow
    // changes to related entities.
    // @PrePersist
    // public void onCreate() {
    //     createNormalizedFields();
    // }
}
@Entity
public class NormalizedField {

    @Id
    private Long id;

    private String fieldName;
    private String normalizedValue;
}
It would be convenient if the normalized values were automatically (re)created whenever the Car entity is persisted. Is there a way to trigger the creation method automatically?
Using @PrePersist, @PreUpdate... is obviously not an option, as it doesn't allow changes to related entities.
Spring AOP is not used in the project, so I would rather avoid introducing it for now. But it's an option anyways.
The application is huge, and managing the normalized values 'manually' would require quite a bit of work, hence I leave it as the last option.
Going to post this half-answer here ('half' because it provides a workaround with restrictions).
In some cases org.hibernate.Interceptor can be used to manage child entities whenever the parent entity is changed.
But there are restrictions: the Javadoc says the Session is not to be used in the Interceptor. JPA repository methods and JPQL or HQL calls are intercepted by the same Interceptor, in a loop. Even native queries get intercepted unless you set FlushMode.COMMIT or FlushMode.MANUAL (and maybe some others).
The above means you'll probably have to use the DataSource directly. I don't remember exactly how, but Spring provides a way to execute queries against the DataSource directly and within the current transaction. In my case that was enough, as I had to manage some technical child entities that didn't need a representation as an entity.
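One way to do that, sketched here as an assumption rather than the answerer's exact code: a JdbcTemplate built on the Spring-managed DataSource participates in the surrounding Spring transaction, so plain SQL issued through it commits or rolls back together with the intercepted entity change. The table and column names below are made up:
import java.util.Map;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class NormalizedFieldWriter {

    private final JdbcTemplate jdbcTemplate;

    public NormalizedFieldWriter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Replaces all normalized values for one car with plain SQL,
    // avoiding the Session that must not be used from the Interceptor.
    public void replaceFor(long carId, Map<String, String> normalizedValues) {
        jdbcTemplate.update("DELETE FROM normalized_field WHERE car_id = ?", carId);
        normalizedValues.forEach((field, value) -> jdbcTemplate.update(
                "INSERT INTO normalized_field (car_id, field_name, normalized_value) VALUES (?, ?, ?)",
                carId, field, value));
    }
}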
I am currently in the middle of migrating my REST server to GraphQL (at least partly). Most of the work is done, but I stumbled upon this problem, which I seem unable to solve: OneToMany relationships in a GraphQL query with FetchType.LAZY.
I am using:
https://github.com/graphql-java/graphql-spring-boot
and
https://github.com/graphql-java/graphql-java-tools for the integration.
Here is an example:
Entities:
@Entity
class Show {

    @Id
    private Long id;

    private String name;

    @OneToMany(mappedBy = "show")
    private List<Competition> competition;
}

@Entity
class Competition {

    @Id
    private Long id;

    private String name;

    @ManyToOne(fetch = FetchType.LAZY)
    private Show show;
}
Schema:
type Show {
id: ID!
name: String!
competitions: [Competition]
}
type Competition {
id: ID!
name: String
}
extend type Query {
shows : [Show]
}
Resolver:
@Component
public class ShowResolver implements GraphQLQueryResolver {

    @Autowired
    private ShowRepository showRepository;

    public List<Show> getShows() {
        return (List<Show>) showRepository.findAll();
    }
}
If I now query the endpoint with this (shorthand) query:
{
shows {
id
name
competitions {
id
}
}
}
I get:
org.hibernate.LazyInitializationException: failed to lazily initialize
a collection of role: Show.competitions, could not initialize proxy -
no Session
Now I know why this error happens and what it means, but I don't really know where to apply a fix for it. I don't want to make my entities fetch all relations eagerly, because that would negate some of the advantages of GraphQL. Any ideas where I might need to look for a solution?
Thanks!
My preferred solution is to keep the transaction open until the servlet has sent its response. With this small code change your lazy loads will work:
import javax.servlet.Filter;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    /**
     * Register the {@link OpenEntityManagerInViewFilter} so that the
     * GraphQL servlet can handle lazy loads during execution.
     */
    @Bean
    public Filter openEntityManagerInViewFilter() {
        return new OpenEntityManagerInViewFilter();
    }
}
I solved it, and I suppose I should have read the documentation of the graphql-java-tools library more carefully.
Besides the GraphQLQueryResolver, which resolves the basic queries, I also needed a GraphQLResolver<T> for my Show class, which looks like this:
@Component
public class ShowResolver implements GraphQLResolver<Show> {

    @Autowired
    private CompetitionRepository competitionRepository;

    public List<Competition> competitions(Show show) {
        return (List<Competition>) competitionRepository.findByShowId(show.getId());
    }
}
This tells the library how to resolve complex objects inside my Show class, and it is only used if the initial query requests the Competition objects to be included. Happy New Year!
EDIT 31.07.2019: I have since stepped away from the solution below. Long-running transactions are seldom a good idea, and in this case they can cause problems once you scale your application. We started to implement DataLoaders to batch queries asynchronously. The long-running transactions in combination with the async nature of the DataLoaders can lead to deadlocks: https://github.com/graphql-java-kickstart/graphql-java-tools/issues/58#issuecomment-398761715 (see above and below that comment for more information). I will not remove the solution below, because it might still be a good starting point for smaller applications and/or applications which do not need batched queries, but please keep this comment in mind when using it.
EDIT: As requested, here is another solution using a custom execution strategy. I am using graphql-spring-boot-starter and graphql-java-tools:
Create a Bean of type ExecutionStrategy that handles the transaction, like this:
@Service(GraphQLWebAutoConfiguration.QUERY_EXECUTION_STRATEGY)
public class AsyncTransactionalExecutionStrategy extends AsyncExecutionStrategy {

    @Override
    @Transactional
    public CompletableFuture<ExecutionResult> execute(ExecutionContext executionContext,
            ExecutionStrategyParameters parameters) throws NonNullableFieldWasNullException {
        return super.execute(executionContext, parameters);
    }
}
This puts the whole execution of the query inside the same transaction. I don't know whether this is the optimal solution, and it already has some drawbacks with regard to error handling, but you don't need to define a type resolver that way.
Notice that if this is the only ExecutionStrategy bean present, it will also be used for mutations, contrary to what the bean name might suggest. See https://github.com/graphql-java-kickstart/graphql-spring-boot/blob/v11.1.0/graphql-spring-boot-autoconfigure/src/main/java/graphql/kickstart/spring/web/boot/GraphQLWebAutoConfiguration.java#L161-L166 for reference. To avoid this, define another ExecutionStrategy to be used for mutations:
@Bean(GraphQLWebAutoConfiguration.MUTATION_EXECUTION_STRATEGY)
public ExecutionStrategy mutationExecutionStrategy() {
    return new AsyncSerialExecutionStrategy();
}
For anyone confused about the accepted answer: you need to change the Java entities to include a bidirectional relationship, and make sure you use the helper method to add a Competition, because otherwise it is easy to forget to set up the relationship correctly.
@Entity
class Show {

    @Id
    private Long id;

    private String name;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "show")
    private List<Competition> competition;

    public void addCompetition(Competition c) {
        c.setShow(this);
        competition.add(c);
    }
}

@Entity
class Competition {

    @Id
    private Long id;

    private String name;

    @ManyToOne(fetch = FetchType.LAZY)
    private Show show;
}
The general intuition behind the accepted answer is:
The GraphQL resolver ShowResolver will open a transaction to get the list of shows, but it will close that transaction once it is done.
The nested GraphQL query for competitions will then attempt to call getCompetition() on each Show instance retrieved by the previous query, which throws a LazyInitializationException because the transaction has already been closed.
{
shows {
id
name
competitions {
id
}
}
}
The accepted answer essentially bypasses retrieving the list of competitions through the OneToMany relationship and instead issues a new query in a new transaction, which eliminates the problem.
I'm not sure if this is a hack, but @Transactional on resolvers doesn't work for me, although the logic behind it does make some sense; I am clearly not understanding the root cause.
For me, AsyncTransactionalExecutionStrategy behaved incorrectly in the presence of exceptions. E.g. a lazy-init or application-level exception put the transaction into rollback-only status. Spring's transaction mechanism then threw on the rollback-only transaction at the boundary of the strategy's execute(), causing HttpRequestHandlerImpl to return an empty 400 response. See https://github.com/graphql-java-kickstart/graphql-java-servlet/issues/250 and https://github.com/graphql-java/graphql-java/issues/1652 for more details.
What worked for me was using Instrumentation to wrap the whole operation in a transaction: https://spectrum.chat/graphql/general/transactional-queries-with-spring~47749680-3bb7-4508-8935-1d20d04d0c6a
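For reference, here is a sketch of what such an Instrumentation might look like; this is my reconstruction of the idea, not code from that link, using graphql-java's SimpleInstrumentation together with Spring's PlatformTransactionManager. Keep in mind that Spring transactions are thread-bound, so this only behaves as intended while execution stays on the calling thread:
import graphql.ExecutionResult;
import graphql.execution.instrumentation.InstrumentationContext;
import graphql.execution.instrumentation.SimpleInstrumentation;
import graphql.execution.instrumentation.SimpleInstrumentationContext;
import graphql.execution.instrumentation.parameters.InstrumentationExecutionParameters;

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;

public class TransactionInstrumentation extends SimpleInstrumentation {

    private final PlatformTransactionManager txManager;

    public TransactionInstrumentation(PlatformTransactionManager txManager) {
        this.txManager = txManager;
    }

    @Override
    public InstrumentationContext<ExecutionResult> beginExecution(
            InstrumentationExecutionParameters parameters) {
        // open a transaction when the operation starts...
        TransactionStatus tx = txManager.getTransaction(new DefaultTransactionDefinition());
        // ...and commit or roll back once the whole operation has completed
        return SimpleInstrumentationContext.whenCompleted((result, throwable) -> {
            if (throwable == null) {
                txManager.commit(tx);
            } else {
                txManager.rollback(tx);
            }
        });
    }
}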
I am assuming that whenever you fetch a Show object, you also want all the associated Competition objects of that Show.
By default the fetch type for all collection types in an entity is LAZY. You can specify the EAGER type to make sure Hibernate fetches the collection.
In your Show class you can change the fetch type to EAGER:
@OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
private List<Competition> competition;
You just need to annotate your resolver classes with @Transactional. Then entities returned from repositories will be able to lazily fetch data.
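A minimal sketch of that suggestion (my reading of it, not verified against the setup above; getter names and a Spring Data 2.x findById are assumed). Because the Show passed into the field resolver was loaded in an earlier, already-closed session, the safest variant reloads the entity inside the resolver's transaction before touching the lazy collection:
import java.util.ArrayList;
import java.util.List;

// package varies by graphql-java-tools version
import com.coxautodev.graphql.tools.GraphQLResolver;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
@Transactional(readOnly = true)
public class ShowResolver implements GraphQLResolver<Show> {

    @Autowired
    private ShowRepository showRepository;

    public List<Competition> competitions(Show show) {
        // reload inside the open transaction, then copy the lazy list
        // so it is initialized before the transaction closes
        return showRepository.findById(show.getId())
                .map(s -> new ArrayList<>(s.getCompetition()))
                .orElseGet(ArrayList::new);
    }
}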
The test below fails if I remove the first persist(). Why do I need to persist the NodeEntity in order for the Set to be instantiated? Is there a better way to do this? I don't want to write to the database more often than necessary.
@Test
public void testCompetenceCreation() {
    Competence competence = new Competence();
    competence.setName("Testcompetence");
    competence.persist(); // test fails if this line is removed

    Competence competenceFromDb = competenceRepository.findOne(competence.getId());
    assertEquals(competence.getName(), competenceFromDb.getName());

    Education education = new Education();
    education.setName("Bachelors Degree");
    competence.addEducation(education);
    competence.persist();

    assertEquals(competence.getEducations(), competenceFromDb.getEducations());
}
If I remove the mentioned line, the exception below occurs:
Throws
java.lang.NullPointerException
at com.x.entity.Competence.addEducation(Competence.java:54)
Competence.class:
@JsonIgnoreProperties({"nodeId", "persistentState", "entityState"})
@NodeEntity
public class Competence {

    @RelatedTo(type = "EDUCATION", elementClass = Education.class)
    private Set<Education> educations;

    public Set<Education> getEducations() {
        return educations;
    }

    public void addEducation(Education education) {
        this.educations.add(education);
    }
}
Education.class
@JsonIgnoreProperties({"nodeId", "persistentState", "entityState"})
@NodeEntity
public class Education {

    @GraphId
    private Long id;

    @JsonBackReference
    @RelatedTo(type = "COMPETENCE", elementClass = Competence.class, direction = Direction.INCOMING)
    private Competence competence;

    @Indexed
    private String name;

    public Long getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
What version of SDN are you running?
Because up until the first persist the entity is detached, and AspectJ doesn't take care of the fields (like creating the managed set). Persist creates the node and connects it to the entity; from then on, until the transaction commits, your entity is attached and all changes will be written through.
It only writes to the database at commit, so no worries about too many writes. All the other changes will just be held in memory for your transaction. You should probably also annotate the test method with @Transactional.
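A hedged sketch of the resulting test shape, with the suggested @Transactional added; persist() stays before addEducation() so that the managed set already exists:
@Test
@Transactional
public void testCompetenceCreation() {
    Competence competence = new Competence();
    competence.setName("Testcompetence");
    competence.persist(); // attaches the entity, so the educations set becomes managed

    Education education = new Education();
    education.setName("Bachelors Degree");
    competence.addEducation(education);
    competence.persist(); // changes are held in memory and written at commit

    Competence competenceFromDb = competenceRepository.findOne(competence.getId());
    assertEquals(competence.getName(), competenceFromDb.getName());
    assertEquals(competence.getEducations(), competenceFromDb.getEducations());
}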
Can you create a JIRA issue for this, so that consistent handling can be provided? (The problem being that it probably also complains when you initialize the set yourself.)
Two other things:
as the relationship Education<--Competence is probably the same relationship just navigated in the other direction, you must provide the same type name in both annotations,
e.g. Education<-[:PROVIDES]-Competence.
Also, if you don't call persist, your entity will not be created, and findOne will then return null.