Strange behavior with SSE endpoint using Redis - spring

I need to push some data to the client when it is present in Redis, but the client keeps reconnecting to the SSE endpoint every 5 seconds.
The backend code:
@RestController
@RequestMapping("/reactive-task")
public class TaskRedisController {

    private final TaskRedisRepository taskRedisRepository;

    TaskRedisController(TaskRedisRepository taskRedisRepository) {
        this.taskRedisRepository = taskRedisRepository;
    }

    @CrossOrigin
    @GetMapping(value = "/get/{id}")
    public Flux<ServerSentEvent<Task>> getSseStream2(@PathVariable("id") String id) {
        return taskRedisRepository.findByTaskId(id)
                .map(task -> ServerSentEvent.<Task>builder().data(task).build());
    }
}
@Repository
public class TaskRedisRepository {

    public Flux<Task> findByTaskId(String id) {
        return template.keys("task:" + id).flatMap(template.opsForValue()::get);
    }
}
@AllArgsConstructor
@NoArgsConstructor
@Getter
@Setter
@EqualsAndHashCode
@Entity
public class Task {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(length = 25)
    private String result;
}
The client consumes using JS:
var evt = new EventSource("http://localhost:8080/reactive-task/get/98");
evt.onmessage = function(event) {
    console.log(event);
};
Can anyone point me in the right direction?
Any advice would be appreciated.
Update: I need to store data for some time (5-10 mins) in Redis.
Update: I wrote similar code on MongoDB and it works fine.

In this case, taskRedisRepository.findByTaskId(id) is probably returning a finite Flux: it emits a few elements and then a complete signal that finishes the stream.
Spring WebFlux interprets the onComplete signal as the end of the stream and closes it. The default behavior of browser SSE clients is to reconnect to the endpoint right away once the connection has been terminated.
If you wish to keep a persistent connection and only be notified of new elements being added, you need to leverage that directly as a feature of your datastore. For Redis, this mode is supported with the pub/sub feature (see reference documentation).
To summarize, I think you're seeing the expected behavior as in this case your datastore won't produce an infinite stream notifying you of new elements added to the collection, but rather a finite stream of elements present in that collection at a given time.
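As a hedged sketch (my assumption of what a pub/sub variant could look like, not code from the question), using Spring Data Redis 2.1+ where ReactiveRedisOperations#listenTo is available, and assuming some writer PUBLISHes each task update to the channel "tasks:{id}":

import org.springframework.data.redis.core.ReactiveRedisOperations;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.http.MediaType;
import org.springframework.http.codec.ServerSentEvent;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;

@RestController
@RequestMapping("/reactive-task")
public class TaskStreamController {

    private final ReactiveRedisOperations<String, String> redisOps;

    TaskStreamController(ReactiveRedisOperations<String, String> redisOps) {
        this.redisOps = redisOps;
    }

    @CrossOrigin
    @GetMapping(value = "/subscribe/{id}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent<String>> subscribe(@PathVariable("id") String id) {
        // listenTo(...) never completes on its own, so the browser's EventSource
        // stays connected instead of reconnecting every few seconds.
        return redisOps.listenTo(ChannelTopic.of("tasks:" + id))
                .map(message -> ServerSentEvent.builder(message.getMessage()).build());
    }
}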

Related

How to get actual child collection when updating parent

How can I get the actual child collection when adding a new child in a separate transactional method while updating the parent?
I have a Spring Boot app with Hibernate/JPA and a unidirectional one-to-many model:
parent:
@Entity
public class Deal {

    private UUID id;

    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY)
    private List<Rate> rates;
    ....
}
child:
@Entity
public class Rate {

    private UUID id;
    ....
}
And I have a non-transactional method that does some business logic for a REST call:
public Deal applyDeal(UUID dealId) {
    dealService.apply(dealId);
    return dealService.getById(dealId);
}
The apply method in DealService calls several methods, each in a separate transaction (all the doLogic() methods are annotated with @Transactional(propagation = Propagation.REQUIRES_NEW)):
public void apply(UUID dealId) {
    someService1.do1Logic(...);
    someService2.do2Logic(...);
    someService3.do3Logic(...);
}
In do2Logic() I have some logic that adds a new Rate entity to the parent entity for dealId and directly calls the save method on the Deal object.
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void do2Logic(...) {
    ...
    var deal = dealService.getById(...);
    deal.getRates().add(new Rate());
    dealService.save(deal);
}
But when I get the response from the root applyDeal method, the new child entity is absent.
If I then fetch the parent in a separate REST call (getDeal), I get the actual parent entity with the new child in its collection.
How can I get the actual child collection in the response of the applyDeal method?
I tried putting all the logic in one @Transactional but it doesn't work.
I also don't understand why I get stale data when fetching the deal instance to return from applyDeal.
Thank you.
I guess you are running MySQL or MariaDB? These two databases use the REPEATABLE READ transaction isolation level by default, which can cause this behavior. Try configuring the READ COMMITTED isolation level instead, and/or remove the REQUIRES_NEW propagation if possible, since it suspends an already running transaction to start a second one.
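For illustration, a hedged sketch of the first suggestion (method names follow the question; whether READ COMMITTED is acceptable depends on your consistency requirements):

import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(isolation = Isolation.READ_COMMITTED)
public Deal applyDeal(UUID dealId) {
    dealService.apply(dealId);
    // Under READ COMMITTED this read can see the rows committed by the
    // inner REQUIRES_NEW transactions inside apply(...).
    return dealService.getById(dealId);
}

Alternatively, the pool-wide default can be set with the Spring Boot property spring.datasource.hikari.transaction-isolation=TRANSACTION_READ_COMMITTED.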

#Cacheable duplicates inner list of DTO returned from JOOQ query

I'm new to JOOQ and Spring caching and am using version 3.10.6. I'm trying to cache a query result so I don't have to go to the database every time. The query itself runs fine, but when it is executed again and served from the cache, the inner lists contain duplicate records, and every time the call hits the cache the number of duplicates grows. I could use a Set instead of a List, but I want to understand why the duplication occurs.
Here is my JooqRepo method
@Cacheable(CachingConfig.OPERATORS)
public List<MyDto> getAllOperatorsWithAliases() {
    return create.select(Tables.MY_TABLE.ID)
            .select(Tables.MY_TABLE.NAME)
            .select(Tables.MY_INNER_TABLE.ID)
            .select(Tables.MY_INNER_TABLE.ALIAS)
            .select(Tables.MY_INNER_TABLE.PARENT_ID)
            .select(Tables.MY_INNER_TABLE.IS_MAIN)
            .from(Tables.MY_TABLE)
            .join(Tables.MY_INNER_TABLE)
            .on(Tables.MY_TABLE.ID.eq(Tables.MY_INNER_TABLE.PARENT_ID))
            .fetch(this::createMyDtoFromRecord);
}
private MyDto createMyDtoFromRecord(Record record) {
    MyInnerDto myInnerDto = new MyInnerDto();
    myInnerDto.setId(record.field(Tables.MY_INNER_TABLE.ID).getValue(record));
    myInnerDto.setAlias(record.field(Tables.MY_INNER_TABLE.ALIAS).getValue(record));
    myInnerDto.setParentId(record.field(Tables.MY_INNER_TABLE.PARENT_ID).getValue(record));
    myInnerDto.setIsMain(record.field(Tables.MY_INNER_TABLE.IS_MAIN).getValue(record) == 1);

    MyDto myDto = new MyDto();
    myDto.setId(record.field(Tables.MY_TABLE.ID).getValue(record));
    myDto.setName(record.field(Tables.MY_TABLE.NAME).getValue(record));
    myDto.setInnerDtos(Collections.singletonList(myInnerDto));
    return myDto;
}
and here are the DTOs:
@Data
public class MyDto {
    private Long id;
    private String name;
    private List<MyInnerDto> innerDtos;
}

@Data
public class MyInnerDto {
    private Long id;
    private String alias;
    private Long parentId;
    private Boolean isMain;
}
On the first call, MyDto1 has an innerDtos list of size 1; with each call that hits the cache this number grows by 3, and I think that's because the query returns 3 parent DTOs.
I've tried adding @EqualsAndHashCode to these DTOs, but with it the query returns an empty list.
I'm sorry if this was asked before but I couldn't find it.
I found the problem: it was not related to JOOQ but to @Cacheable and the use of in-memory caches.
I was using an in-memory cache and de-duplicating in the service layer by putting the query results into a Map<Long, MyDto> to collect the MyInnerDtos under the same id. The problem is that an in-memory cache returns the cached object itself, whereas a cache like Redis returns a copy of it. So when I modified the object after retrieval, I was modifying it inside the cache as well, hence the duplication issue.
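In other words (an assumed illustration of the pitfall, not the original service code):

List<MyDto> dtos = myRepo.getAllOperatorsWithAliases(); // served from the in-memory cache
MyDto first = dtos.get(0);
// With ConcurrentMapCache and friends this mutates the cached object itself,
// so the next cache hit already contains the extra element.
first.getInnerDtos().add(new MyInnerDto());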
To get rid of this problem here's the revised version of the query:
@Cacheable(CachingConfig.OPERATORS)
public List<MyDto> getAllOperatorsWithAliases() {
    Map<MyDto, List<MyInnerDto>> result = create.select(Tables.MY_TABLE.ID)
            .select(Tables.MY_TABLE.NAME)
            .select(Tables.MY_INNER_TABLE.ID)
            .select(Tables.MY_INNER_TABLE.ALIAS)
            .select(Tables.MY_INNER_TABLE.PARENT_ID)
            .select(Tables.MY_INNER_TABLE.IS_MAIN)
            .from(Tables.MY_TABLE)
            .join(Tables.MY_INNER_TABLE)
            .on(Tables.MY_TABLE.ID.eq(Tables.MY_INNER_TABLE.PARENT_ID))
            .fetchGroups(
                    r -> r.into(Tables.MY_TABLE).into(MyDto.class),
                    r -> r.into(Tables.MY_INNER_TABLE).into(MyInnerDto.class)
            );
    result.forEach(MyDto::setInnerDtos);
    return new ArrayList<>(result.keySet());
}

Challenge Persisting Complex Entity using Spring Data JDBC

Considering the complexities involved in JPA, we are planning to use Spring Data JDBC for our entities for its simplicity. Below is the sample structure; we have up to 6 child entities, and we are able to successfully insert data into each of them with proper foreign key mappings.
Challenge: We have a workflow process outside of this application that periodically updates the requestStatus in the Request entity, and this is the only field that gets updated after the Request is created. Spring Data JDBC deletes all referenced entities and re-creates (inserts) them again during an update, which is a heavy operation considering the 6 child entities. Are there any workarounds or suggestions for handling this scenario?
#Table("Request")
public class Request {
private String requestId; // generated in the Before Save Listener .
private String requestStatus;
#Column("requestId")
private ChildEntity1 childEntity1;
public void addChildEntity1(ChildEntity1 childEntityobj) {
this.childEntity1 = childEntityobj;
}
}
#Table("Child_Entity1")
public class ChildEntity1 {
private String entity1Id; // Auto increment on DB
private String name;
private String SSN;
private String requestId;
#MappedCollection(column = "entity1Id", keyColumn = "entity2Id")
private ArrayList<ChildEntity2> childEntity2List = new ArrayList<ChildEntity2>();
#MappedCollection(column = "entity1Id", keyColumn = "entity3Id")
private ArrayList<ChildEntity3> childEntity3List = new ArrayList<ChildEntity3>();
public void addChildEntity2(ChildEntity2 childEntity2obj) {
childEntity2List.add(childEntity2obj);
}
public void addChildEntity3(ChildEntity3 childEntity3obj) {
childEntity3List.add(childEntity3obj);
}
}
#Table("Child_Entity2")
public class ChildEntity2 {
private String entity2Id; // Auto increment on DB
private String partyTypeCode;
private String requestId;
}
@Table("Child_Entity3")
public class ChildEntity3 {

    private String entity3Id; // auto increment on DB
    private String PhoneCode;
    private String requestId;
}
@Test
public void createAndSaveRequest() {
    Request newRequest = createRequest(); // using a builder to build the object
    newRequest.addChildEntity1(createChildEntity1());
    newRequest.getChildEntity1().addChildEntity2(createChildEntity2());
    newRequest.getChildEntity1().addChildEntity3(createChildEntity3());
    requestRepository.save(newRequest);
}
The approach you describe in your comment, having a dedicated method perform exactly that update statement, is the right way to do this.
You should be aware, though, that this ignores optimistic locking.
So there is a risk that the following might happen:
Thread/Session 1: reads an aggregate.
Thread/Session 2: updates a single field as per your question.
Thread/Session 1: writes the aggregate, possibly with other changes, overwriting the change made by Session 2.
To avoid this or similar problems you need to:
check that the version of the aggregate root is unchanged from when you loaded it, in order to guarantee that the method doesn't write conflicting changes;
increment the version in order to guarantee that nothing else overwrites the changes made in this method.
This might mean that you need two or more SQL statements, which probably means you have to fall back even further to a fully custom method where you implement this yourself, probably using an injected JdbcTemplate.
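A hedged sketch of such a custom method (table and column names are assumptions based on the entities above, and a version column is assumed to exist); the single statement checks the version and bumps it in one go:

import org.springframework.dao.OptimisticLockingFailureException;
import org.springframework.jdbc.core.JdbcTemplate;

public void updateRequestStatus(String requestId, String newStatus, long expectedVersion) {
    int updated = jdbcTemplate.update(
            "UPDATE request SET request_status = ?, version = version + 1 "
                    + "WHERE request_id = ? AND version = ?",
            newStatus, requestId, expectedVersion);
    if (updated == 0) {
        // another session changed the aggregate in the meantime
        throw new OptimisticLockingFailureException(
                "Request " + requestId + " was modified concurrently");
    }
}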

Same Generic commit object getting saved from different instances

I am using Javers 5.1.2 with JDK 11 in my application, where I commit a generic object T and save it into MongoDB. The generic commit objects are created from a generic REST service where the user can pass any JSON.
Everything works fine on a single instance: whenever the same request is committed again, Javers' commit.getChanges().isEmpty() returns true.
Issues:
1) Whenever the same request is sent to a different instance, commit.getChanges().isEmpty() returns false.
2) If I commit one request, restart the instance, and commit again, commit.getChanges().isEmpty() again returns false instead of true.
As a result of the above, a new version gets created whenever the request goes to a different instance or an instance is restarted.
Could you please let me know how to handle this issue?
I will extract the code from the project, create a running sample project, and share it.
For now I can share a few classes; please see if these help:
//--------------------- Entity class
import java.util.Map;
import lombok.AllArgsConstructor;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import lombok.ToString;

@AllArgsConstructor
@NoArgsConstructor
@ToString
public class ClientEntity<T> {

    @Getter
    @Setter
    private String entityId;

    @Getter
    @Setter
    private T commitObj;

    @Getter
    @Setter
    private String authorName;

    @Getter
    @Setter
    private boolean major;

    @Getter
    @Setter
    private Map<String, String> commitProperties;
}
//-------- DataIntegrator
@Service
public class DataIntegrator {

    private final Javers javers;
    private IVersionRepository versionDao;
    private IdGenerator idGenerator;

    @Inject
    public DataIntegrator(Javers javers, IVersionRepository versionDao, IdGenerator idGenerator) {
        this.javers = javers;
        this.versionDao = versionDao;
        this.idGenerator = idGenerator;
    }

    public <T> String commit(ClientEntity<T> clientObject) {
        CommitEntity commitEntity = new CommitEntity();
        commitEntity.setEntityId(clientObject.getEntityId());
        commitEntity.setEntityObject(clientObject.getCommitObj());

        Map<String, String> commitProperties = new HashMap<>();
        commitProperties.putAll(clientObject.getCommitProperties());
        commitProperties.put(commit_id_property_key, clientObject.getEntityId());
        commitProperties.putAll(idGenerator.getEntityVersions(clientObject.getEntityId(), clientObject.isMajor()));

        Commit commit = javers.commit(clientObject.getAuthorName(), commitEntity, commitProperties);
        if (commit.getChanges().isEmpty()) {
            return "No Changes Found";
        }
        versionDao.save(
                new VersionHead(clientObject.getEntityId(), Long.parseLong(commitProperties.get(major_version_id_key)),
                        Long.parseLong(commitProperties.get(minor_version_id_key))));
        return commit.getProperties().get(major_version_id_key) + ":"
                + commit.getProperties().get(minor_version_id_key);
    }
}
1) commitObj in ClientEntity is a generic object which holds the JSON coming from the REST web service. The JSON can be any valid JSON and can also have a nested structure.
2) After calling the javers.commit method, we check whether it is an existing entity or there is any change using commit.getChanges().isEmpty().
If the same request is sent a second time to the same instance, isEmpty() returns true (no changes), as expected.
If the same request goes to a different instance behind the load balancer, it is treated as a different request and commit.getChanges().isEmpty() returns false. The expected result is true, as it is the same version.
If I restart the instance after the first request and make the same request again, it returns false instead of true, which means getChanges() is not treating the same request as unchanged.

LazyInitializationException with graphql-spring

I am currently in the middle of migrating my REST server to GraphQL (at least partly). Most of the work is done, but I stumbled upon this problem which I seem to be unable to solve: OneToMany relationships in a GraphQL query, with FetchType.LAZY.
I am using:
https://github.com/graphql-java/graphql-spring-boot
and
https://github.com/graphql-java/graphql-java-tools for the integration.
Here is an example:
Entities:
@Entity
class Show {

    private Long id;
    private String name;

    @OneToMany(mappedBy = "show")
    private List<Competition> competition;
}

@Entity
class Competition {

    private Long id;
    private String name;

    @ManyToOne(fetch = FetchType.LAZY)
    private Show show;
}
Schema:
type Show {
    id: ID!
    name: String!
    competitions: [Competition]
}

type Competition {
    id: ID!
    name: String
}

extend type Query {
    shows: [Show]
}
Resolver:
@Component
public class ShowResolver implements GraphQLQueryResolver {

    @Autowired
    private ShowRepository showRepository;

    public List<Show> getShows() {
        return ((List<Show>) showRepository.findAll());
    }
}
If I now query the endpoint with this (shorthand) query:
{
    shows {
        id
        name
        competitions {
            id
        }
    }
}
I get:
org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: Show.competitions, could not initialize proxy - no Session
Now I know why this error happens and what it means, but I don't really know where to apply a fix for this. I don't want my entities to eagerly fetch all relations, because that would negate some of the advantages of GraphQL. Any ideas where I might need to look for a solution?
Thanks!
My preferred solution is to keep the transaction open until the servlet sends its response. With this small code change your lazy loads will work correctly:
import javax.servlet.Filter;
import org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    /**
     * Register the {@link OpenEntityManagerInViewFilter} so that the
     * GraphQL servlet can handle lazy loads during execution.
     *
     * @return the filter
     */
    @Bean
    public Filter openFilter() {
        return new OpenEntityManagerInViewFilter();
    }
}
I solved it; I should have read the documentation of the graphql-java-tools library more carefully, I suppose.
Besides the GraphQLQueryResolver which resolves the basic queries, I also needed a GraphQLResolver<T> for my Show class, which looks like this:
@Component
public class ShowResolver implements GraphQLResolver<Show> {

    @Autowired
    private CompetitionRepository competitionRepository;

    public List<Competition> competitions(Show show) {
        return ((List<Competition>) competitionRepository.findByShowId(show.getId()));
    }
}
This tells the library how to resolve complex objects inside my Show class, and it is only used if the initial query requests the Competition objects. Happy New Year!
EDIT 31.07.2019: I have since stepped away from the solution below. Long-running transactions are seldom a good idea, and in this case they can cause problems once you scale your application. We started to implement DataLoaders to batch queries asynchronously. The long-running transactions in combination with the async nature of the DataLoaders can lead to deadlocks: https://github.com/graphql-java-kickstart/graphql-java-tools/issues/58#issuecomment-398761715 (see above and below it for more information). I will not remove the solution below, because it might still be a good starting point for smaller applications and/or applications which will not need any batched queries, but please keep this comment in mind when using it.
EDIT: As requested here is another solution using a custom execution strategy. I am using graphql-spring-boot-starter and graphql-java-tools:
Create a Bean of type ExecutionStrategy that handles the transaction, like this:
@Service(GraphQLWebAutoConfiguration.QUERY_EXECUTION_STRATEGY)
public class AsyncTransactionalExecutionStrategy extends AsyncExecutionStrategy {

    @Override
    @Transactional
    public CompletableFuture<ExecutionResult> execute(ExecutionContext executionContext,
            ExecutionStrategyParameters parameters) throws NonNullableFieldWasNullException {
        return super.execute(executionContext, parameters);
    }
}
This puts the whole execution of the query inside the same transaction. I don't know whether this is the optimal solution, and it already has some drawbacks with regard to error handling, but you don't need to define a type resolver that way.
Notice that if this is the only ExecutionStrategy Bean present, this will also be used for mutations, contrary to what the Bean name might suggest. See https://github.com/graphql-java-kickstart/graphql-spring-boot/blob/v11.1.0/graphql-spring-boot-autoconfigure/src/main/java/graphql/kickstart/spring/web/boot/GraphQLWebAutoConfiguration.java#L161-L166 for reference. To avoid this define another ExecutionStrategy to be used for mutations:
@Bean(GraphQLWebAutoConfiguration.MUTATION_EXECUTION_STRATEGY)
public ExecutionStrategy mutationExecutionStrategy() {
    return new AsyncSerialExecutionStrategy();
}
For anyone confused about the accepted answer: you need to change the Java entities to include a bidirectional relationship, and make sure you use the helper method to add a Competition; otherwise it's easy to forget to set up the relationship correctly.
@Entity
class Show {

    private Long id;
    private String name;

    @OneToMany(cascade = CascadeType.ALL, mappedBy = "show")
    private List<Competition> competition;

    public void addCompetition(Competition c) {
        c.setShow(this);
        competition.add(c);
    }
}

@Entity
class Competition {

    private Long id;
    private String name;

    @ManyToOne(fetch = FetchType.LAZY)
    private Show show;
}
The general intuition behind the accepted answer is:
The GraphQL resolver ShowResolver will open a transaction to get the list of shows, but it will close the transaction once it's done doing that.
The nested GraphQL query for competitions will then attempt to call getCompetition() on each Show instance retrieved from the previous query, which will throw a LazyInitializationException because the transaction has already been closed.
{
    shows {
        id
        name
        competitions {
            id
        }
    }
}
The accepted answer essentially bypasses retrieving the list of competitions through the OneToMany relationship and instead issues a new query in a new transaction, which eliminates the problem.
Not sure if this is a hack, but @Transactional on resolvers doesn't work for me, although the logic of doing that does make some sense; I am clearly not understanding the root cause.
For me, AsyncTransactionalExecutionStrategy behaved incorrectly in the presence of exceptions. For example, a lazy-init or application-level exception put the transaction into rollback-only status. The Spring transaction mechanism then threw on the rollback-only transaction at the boundary of the strategy's execute, causing HttpRequestHandlerImpl to return an empty 400 response. See https://github.com/graphql-java-kickstart/graphql-java-servlet/issues/250 and https://github.com/graphql-java/graphql-java/issues/1652 for more details.
What worked for me was using Instrumentation to wrap the whole operation in a transaction: https://spectrum.chat/graphql/general/transactional-queries-with-spring~47749680-3bb7-4508-8935-1d20d04d0c6a
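A hedged sketch of that Instrumentation idea (my reading of the linked thread, not a drop-in implementation; the transaction is thread-bound, so this assumes execution stays on the calling thread, i.e. no async DataLoader dispatch):

import graphql.ExecutionResult;
import graphql.execution.instrumentation.InstrumentationContext;
import graphql.execution.instrumentation.SimpleInstrumentation;
import graphql.execution.instrumentation.SimpleInstrumentationContext;
import graphql.execution.instrumentation.parameters.InstrumentationExecutionParameters;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;

public class TransactionInstrumentation extends SimpleInstrumentation {

    private final PlatformTransactionManager txManager;

    public TransactionInstrumentation(PlatformTransactionManager txManager) {
        this.txManager = txManager;
    }

    @Override
    public InstrumentationContext<ExecutionResult> beginExecution(
            InstrumentationExecutionParameters parameters) {
        // Open a transaction when the operation starts...
        TransactionStatus tx = txManager.getTransaction(new DefaultTransactionDefinition());
        // ...and complete it once the whole execution has finished.
        return SimpleInstrumentationContext.whenCompleted((result, throwable) -> {
            if (throwable != null || tx.isRollbackOnly()) {
                txManager.rollback(tx);
            } else {
                txManager.commit(tx);
            }
        });
    }
}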
I am assuming that whenever you fetch a Show object, you want all of its associated Competition objects.
By default, the fetch type for all collection types in an entity is LAZY. You can specify the EAGER type to make sure Hibernate fetches the collection.
In your Show class, you can change the fetch type to EAGER:
@OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
private List<Competition> competition;
You just need to annotate your resolver classes with @Transactional. Then entities returned from repositories will be able to lazily fetch data.
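A minimal sketch of that suggestion (class names follow the question; note the comment above reporting that this did not work in every setup, so treat it as one option to try):

@Component
public class ShowResolver implements GraphQLQueryResolver {

    @Autowired
    private ShowRepository showRepository;

    // The persistence session stays open for the duration of this method,
    // so lazy associations touched while it runs can be initialized.
    @Transactional(readOnly = true)
    public List<Show> getShows() {
        return (List<Show>) showRepository.findAll();
    }
}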
