Spring Data Redis: JPA Repository findBy sometimes fails to fetch an existing record

I'm seeing a strange case: sometimes my findBy...() method returns null for an object that was inserted and fetched successfully before. Afterwards the same object is fetched fine. In other words, the search fails intermittently.
Spring Boot version: 1.5.2.RELEASE
spring-boot-starter-data-redis: 1.5.22.RELEASE
"maxmemory-policy" setting is set to "noeviction"
My object declaration:
@RedisHash("session")
public class Session implements Serializable {

    @Id
    private String id;

    @Indexed
    private Long internalChatId;

    @Indexed
    private boolean active;

    @Indexed
    private String chatId;
}
JPA Repository:
@Repository
public interface SessionRepository extends CrudRepository<Session, String> {
    Session findByInternalChatIdAndActive(Long internalChatId, Boolean isActive);
}
Redis config:
@Bean
public LettuceConnectionFactory redisConnectionFactory(RedisProperties redisProperties) {
    return new LettuceConnectionFactory(
            redisProperties.getRedisHost(),
            redisProperties.getRedisPort());
}

@Bean
public RedisTemplate<?, ?> redisTemplate(LettuceConnectionFactory connectionFactory) {
    RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
    template.setConnectionFactory(connectionFactory);
    return template;
}
Thanks in advance for any help.

We have recently seen similar behavior. In our scenario, multiple threads read and write to the same repository, and the null return occurs when one thread saves an object while another does a findById for that same object; the findById occasionally fails. The save implementation appears to do a delete followed by an add, and if the findById gets in during the delete, the null result is returned.
We've had good luck so far in test programs that reproduce the null return by using a Java Semaphore to gate all access (read, write, delete) to the repository: when every repository access method is gated by the same semaphore, we have not seen a null return. Our next step is to try adding the synchronized keyword to the methods of the class that accesses the repository, as an alternative to the Semaphore.
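To illustrate, here is a minimal sketch of the semaphore gating (the wrapper class and its layout are ours, not anything Spring provides; it assumes the SessionRepository from the question):

import java.util.concurrent.Semaphore;

// Hypothetical wrapper: serializes all repository access so that a read
// cannot interleave with the delete-then-add that save() performs.
public class GatedSessionRepository {

    private final SessionRepository repository;
    private final Semaphore gate = new Semaphore(1); // single permit = mutual exclusion

    public GatedSessionRepository(SessionRepository repository) {
        this.repository = repository;
    }

    public Session save(Session session) {
        gate.acquireUninterruptibly();
        try {
            return repository.save(session);
        } finally {
            gate.release();
        }
    }

    public Session findByInternalChatIdAndActive(Long internalChatId, Boolean active) {
        gate.acquireUninterruptibly();
        try {
            return repository.findByInternalChatIdAndActive(internalChatId, active);
        } finally {
            gate.release();
        }
    }
}

Putting synchronized on these two methods would give the same mutual exclusion; the semaphore just makes the gating explicit and shareable across classes.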

This should not happen, and I don't know the reason. But you can use the Optional class so that, when nothing is found, you at least avoid an exception.
Something like:
Optional<Session> findByInternalChatIdAndActive(Long internalChatId, Boolean isActive);
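Call sites then handle the empty case explicitly, along these lines (variable names are made up):

Optional<Session> session = sessionRepository.findByInternalChatIdAndActive(chatId, true);
Session active = session.orElseThrow(
        () -> new IllegalStateException("No active session for chat " + chatId));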

Related

Throw error when properties marked with @JsonIgnore are passed

I have a requirement to mark certain properties in my REST beans as ignored using @JsonIgnore (I am using Spring Boot). This helps keep these properties out of my Swagger REST documentation.
I would also like to ensure that if the client passes these properties, an error is sent back. I tried setting spring.jackson.deserialization.fail-on-unknown-properties=true, but that works only for properties that are truly unknown; properties marked with @JsonIgnore pass through this check.
Is there any way to achieve this?
I think I found a solution:
If I add @JsonProperty(access = Access.READ_ONLY) to the field that is marked as @JsonIgnore, I get back a validation error. (I have also marked the property with the @Null annotation.) Here is the complete solution:
@JsonInclude(JsonInclude.Include.NON_NULL)
public class Employee {

    @Null(message = "Id must not be passed in request")
    private String id;

    private String name;

    // getters and setters
}
@JsonInclude(JsonInclude.Include.NON_NULL)
public class EmployeeRequest extends Employee {

    @Override
    @JsonIgnore
    @JsonProperty(access = Access.READ_ONLY)
    public void setId(String id) {
        super.setId(id);
    }
}
PS: After adding @JsonProperty(access = Access.READ_ONLY), the property started showing up in the Swagger model, so I had to add @ApiModelProperty(hidden = true) to hide it again.
The create method takes EmployeeRequest as input (deserialization), and the get method returns Employee as the response (serialization). With the above solution, if I pass id in the create request, I get back a ConstraintViolation.
PPS: Bummer. None of these solutions worked end-to-end. I ended up creating separate request and response beans, with no hierarchical relationship between them.
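For anyone taking the same route, a sketch of that split (reduced to the fields from the example above):

import com.fasterxml.jackson.annotation.JsonInclude;

// Request bean: deliberately has no id field, so a client-supplied "id" is
// rejected by spring.jackson.deserialization.fail-on-unknown-properties=true.
public class EmployeeRequest {
    private String name;
    // getters and setters
}

// Response bean: carries the server-generated id.
@JsonInclude(JsonInclude.Include.NON_NULL)
public class EmployeeResponse {
    private String id;
    private String name;
    // getters and setters
}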

How to configure Spring Redis Configuration to use Hash instead of string serialization

I want to cache a Java POJO in Redis using Spring's @Cacheable, @CachePut, and @CacheEvict annotations; however, I'd prefer to use Redis' Hash capabilities instead of just serializing the POJO into a String. Essentially, I would like to use something like the ObjectHashMapper against the POJO so that its properties are automatically saved as key/value pairs in a single Redis entry. This behavior can be seen in the RedisRepository functionality, which saves POJOs with the ObjectHashMapper; however, I prefer not to define the cache as a repository, but to use the cache annotations.
I have successfully created some custom Spring/Redis configuration that got String serialization working properly. The Spring documentation for customizing the CacheManager and CacheConfiguration is very "light", and I've not found any relevant examples. I'm not sure whether I need a custom serializer, a converter, or some other aspect of the CacheConfiguration. The serializers seem to be concerned with individual keys and values; I don't see where to hook in so that the entire object is caught and turned into a Hash first.
Here is my Redis configuration. It sets up two caches, "v" and "products", and defaults to a StringRedisSerializer for other caches.
@Slf4j
@RequiredArgsConstructor
@Configuration
@EnableRedisRepositories(enableKeyspaceEvents = RedisKeyValueAdapter.EnableKeyspaceEvents.ON_STARTUP)
public class RedisConfig {

    private final RedisConnectionFactory connectionFactory;

    private static RedisCacheConfiguration createCacheConfiguration(long timeoutInSeconds,
            RedisSerializationContext.SerializationPair<?> serializationPair) {
        log.info("Creating CacheConfiguration with timeout of {} seconds", timeoutInSeconds);
        return RedisCacheConfiguration.defaultCacheConfig()
                .serializeValuesWith(serializationPair)
                .entryTtl(Duration.ofSeconds(timeoutInSeconds));
    }

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        log.info("Creating cache manager");
        Map<String, RedisCacheConfiguration> cacheConfigurations = new HashMap<>();
        cacheConfigurations.put("v", createCacheConfiguration(1200,
                RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer())));
        cacheConfigurations.put("products", createCacheConfiguration(-1,
                RedisSerializationContext.SerializationPair.fromSerializer(new JdkSerializationRedisSerializer())));
        return RedisCacheManager
                .builder(connectionFactory)
                .cacheDefaults(createCacheConfiguration(-1,
                        RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer())))
                .withInitialCacheConfigurations(cacheConfigurations)
                .build();
    }
}
Here is an example of how a POJO is serialized per the configuration above:
@Data
@Builder
@AllArgsConstructor
public class ProductSummary implements Serializable {

    @Id
    private String id;

    private String accountId;
    private String name;
    private String roles;
    private String groups;
}
and the way it is serialized:
\xac\xed\x00\x05sr\x004com.foobar.ProductSummaryh\xb3\x9d
\xd4\x0f\xac\xea\xb3\x02\x00\x05L\x00\taccountIdt\x00
\x12Ljava/lang/String;L\x00\x06groupsq\x00~\x00\x01L\x00
\x02idq\x00~\x00\x01L\x00\x04nameq\x00~\x00\x01L\x00
\x05rolesq\x00~\x00\x01xpt\x00\x19acctFv825MKt\x00
\nimwebuserst\x00\x13prod0lwJAWEYt\x00\x11ProductName/2020t\x00\x00
What I'd like is for it to be (in Redis as a Hash):
>HGETALL KEY
1) "_class"
2) "com.foobar.cache.CheckoutState"
3) "accountId"
4) "ACC000001"
5) "name"
6) "lorem ipsum whatever"
7) "roles"
8) "role1,role2,role3"
9) "groups"
10) "groupA,groupB"
with the hash key being the id.
I guess this isn't that relevant anymore, but maybe somebody else wants to do the same, like me.
It isn't that easy: deep inside the implementation, the DefaultRedisCacheWriter uses "SET", while what you basically want is connection.hSet. As I see it, the only way to achieve this is to implement your own RedisCacheWriter.
Even this might not be enough: the RedisCacheWriter interface works on raw byte arrays, defining put(String name, byte[] key, byte[] value, Duration ttl), and that is what the Redis cache manager calls with an already-serialized value.
Long story short, I assume that writing your own RedisCache class might be the easiest way to go, using a plain Spring Data Redis HASH mapping, or, perhaps a bit simpler, using the RedisTemplate in combination with Spring's ObjectHashMapper to create these hashes.
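To illustrate that second idea, here is a minimal sketch (the HashCacheAccessor class and its method names are made up; ObjectHashMapper and the hMSet/hGetAll connection commands are existing Spring Data Redis API):

import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisCallback;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.hash.ObjectHashMapper;

public class HashCacheAccessor {

    private final RedisTemplate<byte[], byte[]> template;
    // Same mapper the Spring Data Redis repository support uses to flatten POJOs.
    private final ObjectHashMapper mapper = new ObjectHashMapper();

    public HashCacheAccessor(RedisConnectionFactory connectionFactory) {
        RedisTemplate<byte[], byte[]> t = new RedisTemplate<>();
        t.setConnectionFactory(connectionFactory);
        t.afterPropertiesSet();
        this.template = t;
    }

    /** Stores the POJO as a Redis hash (one field per property) under the given key. */
    public void put(String key, Object value) {
        byte[] rawKey = key.getBytes(StandardCharsets.UTF_8);
        Map<byte[], byte[]> hash = mapper.toHash(value);
        template.execute((RedisCallback<Void>) connection -> {
            connection.hMSet(rawKey, hash); // HMSET instead of SET
            return null;
        });
    }

    /** Reads the hash back; the mapper rebuilds the POJO from its "_class" field. */
    public Object get(String key) {
        byte[] rawKey = key.getBytes(StandardCharsets.UTF_8);
        Map<byte[], byte[]> hash = template.execute(
                (RedisCallback<Map<byte[], byte[]>>) connection -> connection.hGetAll(rawKey));
        return (hash == null || hash.isEmpty()) ? null : mapper.fromHash(hash);
    }
}

A custom RedisCache/RedisCacheWriter would then delegate its put/get to something like this instead of SET/GET.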

Unable to initialize lazy-loaded relationship inside of `@Transactional` method

I have a set of simple models like this (getters and setters omitted for brevity):
@Entity
public class Customer {
    @Id
    private Integer id;
}

@Entity
public class Order {
    @Id
    private Integer id;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "customer_id")
    private Customer customer;
}
I am trying to load an Order using a Spring JPA repository with a findById method, including the customer.
First I tried this:
@Transactional
Optional<Order> findById(Integer id) {
    return repository.findById(id);
}
But when I tried to access Customer I got a LazyInitializationException: could not initialize proxy - no Session. So after referring to some other questions, I updated my method to be a bit uglier, but to explicitly call Hibernate.initialize:
@Transactional
Optional<Order> findById(Integer id) {
    return repository.findById(id)
            .map(order -> {
                Hibernate.initialize(order.getCustomer());
                return order;
            });
}
But I still get org.hibernate.LazyInitializationException: could not initialize proxy - no Session. repository is a regular CrudRepository which provides the findById method out-of-the-box.
How can I initialize this lazily loaded child entity? My understanding is that @Transactional indicates that I should still be within the transaction for the entirety of this method call. The only thing further downstream is the repository itself, which is just an interface, so I'm not sure how else to force the load of this child entity.
The Order entity and everything else in it is retrieved properly from the database; it's only when I try to get the lazy-loaded child entities that we start having issues.
The only way I managed to get this working was to write a custom HQL method in the repository using a left join fetch (see the sketch below). While that works, it clutters up my repository with a near-duplicate method that I'm pretty sure I'm not actually supposed to need, so I would rather not do it this way.
Spring-Boot 2.1.4.RELEASE, Spring 5.1.6.RELEASE, Hibernate 5.3.7.Final.
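For reference, the fetch-join workaround mentioned in the question looks roughly like this (the method name is made up):

import java.util.Optional;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface OrderRepository extends CrudRepository<Order, Integer> {

    // The fetch join loads the customer in the same query,
    // so no uninitialized lazy proxy is left behind.
    @Query("select o from Order o left join fetch o.customer where o.id = :id")
    Optional<Order> findByIdWithCustomer(@Param("id") Integer id);
}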
You have to define the method as public. See "Method visibility and @Transactional" in the Spring docs.
This should work:
@Transactional
public Optional<Order> findById(Integer id) {
    Optional<Order> order = repository.findById(id);
    order.ifPresent(o -> Hibernate.initialize(o.getCustomer()));
    return order;
}

How to explicitly state that an Entity is new (transient) in JPA?

I am using a Spring Data JpaRepository, with Hibernate as JPA provider.
Normally, when working directly with JPA, the decision between EntityManager#persist() and EntityManager#merge() is up to the programmer. With Spring Data repositories, there is only save(). I do not want to discuss the pros and cons here. Consider the following simple base class:
@MappedSuperclass
public abstract class PersistableObject {

    @Id
    private String id;

    public PersistableObject() {
        this.id = UUID.randomUUID().toString();
    }

    // hashCode() and equals() are implemented based on equality of 'id'
}
Using this base class, the Spring Data repository cannot tell which Entities are "new" (have not been saved to DB yet), as the regular check for id == null clearly does not work in this case, because the UUIDs are eagerly assigned to ensure the correctness of equals() and hashCode(). So what the repository seems to do is to always invoke EntityManager#merge() - which is clearly inefficient for transient entities.
The question is: how do I tell JPA (or Spring Data) that an Entity is new, such that it uses EntityManager#persist() instead of #merge() if possible?
I was thinking about something along these lines (using JPA lifecycle callbacks):
@MappedSuperclass
public abstract class PersistableObject {

    @Transient
    private boolean isNew = true; // by default, treat the entity as new

    @PostLoad
    private void loaded() {
        // a loaded entity is never new
        this.isNew = false;
    }

    @PostPersist
    private void saved() {
        // a saved entity is not new anymore
        this.isNew = false;
    }

    // how do I get JPA (or Spring Data) to use this method?
    public boolean isNew() {
        return this.isNew;
    }

    // all other properties, constructor, hashCode() and equals() same as above
}
I'd like to add one more remark here. Even though it only works for Spring Data and not for general JPA, I think it's worth mentioning that Spring provides the Persistable<T> interface, which has two methods:
T getId();
boolean isNew();
By implementing this interface (e.g. as sketched in the question), Spring Data JpaRepositories will ask the entity itself whether it is new, which can be pretty handy in certain cases.
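A minimal sketch of the base class from the question implementing Persistable (the callback wiring follows the idea already outlined above):

import java.util.UUID;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;
import javax.persistence.PostLoad;
import javax.persistence.PostPersist;
import javax.persistence.Transient;
import org.springframework.data.domain.Persistable;

@MappedSuperclass
public abstract class PersistableObject implements Persistable<String> {

    @Id
    private String id = UUID.randomUUID().toString();

    @Transient
    private boolean isNew = true; // new until proven persisted

    @PostLoad
    @PostPersist
    private void markNotNew() {
        this.isNew = false;
    }

    @Override
    public String getId() {
        return id;
    }

    @Override
    public boolean isNew() {
        return isNew;
    }
}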
Maybe you should add a @Version column:
@Version
private Long version;
For a new entity it will be null.

OptimisticLockException not thrown when version has changed

I've created a simple EJB application that uses JPA for persistence and have a problem whereby optimistic locking is not functioning as I would have expected.
The application contains a class named Site, which defines the model for a table named SITE in the database. The SITE table contains a column named ROW_VERSION, which is referenced in the Site class using the @Version annotation.
Whenever the record is updated, the ROW_VERSION is incremented by 1. So far, so good.
The problem arises when the row changes between the application reading it with the EntityManager find method and updating it with the EntityManager merge method. Since the row's ROW_VERSION has been incremented by 1 and is therefore no longer the value read by find, I would expect an OptimisticLockException to be thrown; instead, the changes are written to the table, overwriting the changes made by the other process.
The application is running on WebSphere 8.5 and is using OpenJPA provided by the container.
Have I mis-understood how optimistic locking is supposed to work or is there something else that I need to do to make the OptimisticLockException occur?
The Site class is as follows:
@Entity
@Table(name = "SITE")
public class Site {

    @Id
    @Column(name = "SITE_ID")
    private int id;

    @Column(name = "SITE_NAME")
    private String siteName;

    @Column(name = "SITE_ADDRESS")
    private String address;

    @Column(name = "ROW_VERSION")
    @Version
    private long rowVersion;

    // getters and setters
}
The application makes use of a generic DAO wrapper class to invoke the EntityManager methods. The contents of the class are as follows:
public abstract class GenericDAO<T> {

    private static final String UNIT_NAME = "Test4EJB";

    @PersistenceContext(unitName = UNIT_NAME)
    private EntityManager em;

    private Class<T> entityClass;

    public GenericDAO(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    public T update(T entity) {
        return em.merge(entity);
    }

    public T find(int entityID) {
        return em.find(entityClass, entityID);
    }

    // other methods
}
Update: I've done some more investigation and found this: http://pic.dhe.ibm.com/infocenter/wasinfo/v8r0/index.jsp?topic=%2Fcom.ibm.websphere.nd.multiplatform.doc%2Finfo%2Fae%2Fae%2Fcejb_genversionID.html but even after adding the @VersionColumn and @VersionStrategy annotations, I still cannot get the OptimisticLockException to be thrown.
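For what it's worth, the expected failure scenario boils down to merging a stale detached copy; a sketch using the GenericDAO from the question (the id value is made up):

// Process A reads the row; the detached copy carries ROW_VERSION = 1.
Site stale = siteDao.find(42);

// Meanwhile another process updates the same row, bumping ROW_VERSION to 2.

// Merging the stale copy should compare its rowVersion (1) with the current
// database value (2) and throw an OptimisticLockException at flush/commit time,
// instead of silently overwriting the other process's changes.
stale.setSiteName("renamed");
siteDao.update(stale);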
