Reading same cache from different microservices - spring-boot

I am using spring-data-redis (2.1.5.RELEASE) with the Jedis client (2.10.2) to connect to my Azure Redis instance from different services running as Spring Boot applications.
Both services have the same caching methods and point to the same cache through the configuration below. The problem I am facing is that when one service tries to read a cached value created by the other service, a deserialization exception occurs.
Exception:
org.springframework.data.redis.serializer.SerializationException: Cannot deserialize; nested exception is org.springframework.core.serializer.support.SerializationFailedException: Failed to deserialize payload. Is the byte array a result of corresponding serialization for DefaultDeserializer?; nested exception is org.springframework.core.NestedIOException: Failed to deserialize object type; nested exception is java.lang.ClassNotFoundException
Note: I am using Redis only to cache data read from my database.
Redis Cache Configuration of microservice 1
public RedisCacheWriter redisCacheWriter(RedisConnectionFactory connectionFactory) {
return RedisCacheWriter.nonLockingRedisCacheWriter(connectionFactory);
}
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
Map<String, RedisCacheConfiguration> cacheNamesConfigurationMap = new HashMap<>();
cacheNamesConfigurationMap.put("employers", RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(90000)));
cacheNamesConfigurationMap.put("employees", RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(90000)));
RedisCacheManager manager = new RedisCacheManager(redisCacheWriter(connectionFactory), RedisCacheConfiguration.defaultCacheConfig(), cacheNamesConfigurationMap);
manager.setTransactionAware(true);
manager.afterPropertiesSet();
return manager;
}
Redis Cache Configuration of microservice 2
public RedisCacheWriter redisCacheWriter(RedisConnectionFactory connectionFactory) {
return RedisCacheWriter.nonLockingRedisCacheWriter(connectionFactory);
}
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
Map<String, RedisCacheConfiguration> cacheNamesConfigurationMap = new HashMap<>();
cacheNamesConfigurationMap.put("employees", RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(90000)));
RedisCacheManager manager = new RedisCacheManager(redisCacheWriter(connectionFactory), RedisCacheConfiguration.defaultCacheConfig(), cacheNamesConfigurationMap);
manager.setTransactionAware(true);
manager.afterPropertiesSet();
return manager;
}
Caching methods in both services
@Cacheable(value = "employees", key = "#employeesId")
public Employee getEmployee(String employeesId) {
//methods
}
Employee class in both services
public class Employee implements Serializable {
private String id;
private String name;
}

Either make sure the (de)serialized class lives in exactly the same package in both services, or stop embedding the class name in the cached data. The exception happens because the fully qualified class name gets written into the payload (JSON in my case). Here is the configuration I used to:
Avoid being restricted to the same fully qualified DTO class name in every service
Use typed (de)serialization rather than the generic form when converting from/to JSON
Remove the class marker from the cached data
import lombok.RequiredArgsConstructor;
import org.springframework.boot.autoconfigure.cache.RedisCacheManagerBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext.SerializationPair;
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
@Configuration
@RequiredArgsConstructor
public class RedisConfig {
@Bean
RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer() {
return builder -> {
Map<String, RedisCacheConfiguration> configurationMap = new HashMap<>();
configurationMap.put("whatever-cache-name",
RedisCacheConfiguration.defaultCacheConfig()
.entryTtl(Duration.ofSeconds(3600))
.serializeValuesWith(
SerializationPair.fromSerializer(new Jackson2JsonRedisSerializer<>(Employee.class))
)
);
builder.withInitialCacheConfigurations(configurationMap);
};
}
}
To only
Remove the class marker from the cached data
just use a different serializer instead:
@Autowired
ObjectMapper objectMapper;
@Bean
RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer() {
return builder -> {
Map<String, RedisCacheConfiguration> configurationMap = new HashMap<>();
configurationMap.put("whatever-cache-name",
RedisCacheConfiguration.defaultCacheConfig()
.entryTtl(Duration.ofSeconds(3600))
.serializeValuesWith(
SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer(objectMapper))
)
);
builder.withInitialCacheConfigurations(configurationMap);
};
}
Note that GenericJackson2JsonRedisSerializer creates a new ObjectMapper if you don't pass one to the constructor, and that default mapper embeds the fully qualified class name in the data; that is why I injected the ObjectMapper provided by Spring, which was already configured appropriately for my case.
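For illustration, with that default typing enabled the cached JSON looks roughly like this (class name and field values here are made up):
{"@class":"com.example.Employee","id":"42","name":"Jane"}
With the typed or explicitly configured serializer above, only the fields are stored, so the reading service no longer needs the exact same fully qualified class on its classpath.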

I would suggest you make sure the Employee classes are identical in both services (including the package name, since JDK serialization records the fully qualified class name, which is what causes the ClassNotFoundException) and then add a serialVersionUID field to the classes. Clear the Redis cache and try again.
This is from the Javadocs in the java.io.Serializable interface:
If a serializable class does not explicitly declare a serialVersionUID, then
the serialization runtime will calculate a default serialVersionUID value
for that class based on various aspects of the class, as described in the
Java(TM) Object Serialization Specification. However, it is strongly
recommended that all serializable classes explicitly declare
serialVersionUID values, since the default serialVersionUID computation is
highly sensitive to class details that may vary depending on compiler
implementations, and can thus result in unexpected
InvalidClassExceptions during deserialization. Therefore, to
guarantee a consistent serialVersionUID value across different java compiler
implementations, a serializable class must declare an explicit
serialVersionUID value. It is also strongly advised that explicit
serialVersionUID declarations use the private modifier where
possible, since such declarations apply only to the immediately declaring
class--serialVersionUID fields are not useful as inherited members.
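For example, a minimal sketch of the shared class with an explicit serialVersionUID (the value 1L is just a placeholder):

import java.io.Serializable;

public class Employee implements Serializable {

    // A fixed value keeps both services on the same stream version;
    // change it deliberately when you make an incompatible change.
    private static final long serialVersionUID = 1L;

    private String id;
    private String name;

    // getters and setters omitted for brevity
}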

Related

SpEL KafkaListener. How can i inject custom deserializer through properties?

I am using Spring.
I have an ObjectMapper configured for the entire project and I use it to set up a Kafka deserializer.
I need that custom Kafka deserializer to be used in a @KafkaListener.
I'm configuring the @KafkaListener via auto-configuration, not via a @Configuration class.
@Component
@RequiredArgsConstructor
public class CustomMessageDeserializer implements Deserializer<MyMessage> {
private final ObjectMapper objectMapper;
@SneakyThrows
@Override
public MyMessage deserialize(String topic, byte[] data) {
return objectMapper.readValue(data, MyMessage.class);
}
}
If I do this:
@KafkaListener(
topics = {"${topics.invite-user-topic}"},
properties = {"value.deserializer=com.service.deserializer.CustomMessageDeserializer"}
)
public void receiveInviteUserMessages(MyMessage myMessage) {}
I get KafkaException: Could not find a public no-argument constructor.
With a public no-argument constructor in the CustomMessageDeserializer class I instead get an NPE, because the ObjectMapper is null: Kafka creates and uses a new instance rather than the Spring component.
@KafkaListener supports SpEL expressions.
I think this problem can be solved using SpEL.
Do you have any idea how to inject the Spring bean CustomMessageDeserializer with SpEL?
There is no easy way to do it with SpEL.
Analysis
To get started, see the JavaDoc for @KafkaListener#properties:
/**
*
* SpEL expressions must resolve to a String ...
*/
The value of value.deserializer is used to instantiate the specified deserializer class. Let's follow the call chain:
You specify this value in the @KafkaListener annotation, and you are presumably not defining a ConsumerFactory bean yourself, so Spring creates that bean for you - see KafkaAutoConfiguration#kafkaConsumerFactory.
That factory method returns a new DefaultKafkaConsumerFactory(...) as ConsumerFactory<?, ?>, built with the constructor whose default deserializer suppliers are keyDeserializer/valueDeserializer = () -> null.
This factory is used to create a Kafka consumer (The entry point is the constructor KafkaMessageListenerContainer#ListenerConsumer, then KafkaMessageListenerContainer.this.consumerFactory.createConsumer...)
In the KafkaConsumer constructor, the valueDeserializer object is being created, because it is null (for the default factory of point 2 above):
if (valueDeserializer == null) {
this.valueDeserializer = config.getConfiguredInstance(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, Deserializer.class);
The implementation of config.getConfiguredInstance instantiates your deserializer class via reflection, using its parameterless constructor and the String class name "com.service.deserializer.CustomMessageDeserializer".
Solutions
To use value.deserializer with your customized ObjectMapper, you must create the ConsumerFactory bean yourself and call its setValueDeserializer(...) method. This is also mentioned in the second Important note of the JSON Mapping Types section of the Spring for Apache Kafka documentation.
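For reference, a minimal sketch of that first option might look like the following; the bean wiring, the injected ObjectMapper and the StringDeserializer for keys are assumptions, not taken from the question:

@Bean
public ConsumerFactory<String, MyMessage> consumerFactory(KafkaProperties kafkaProperties,
                                                          ObjectMapper objectMapper) {
    DefaultKafkaConsumerFactory<String, MyMessage> factory =
            new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties());
    // Pass the Spring-managed ObjectMapper in explicitly instead of letting Kafka
    // instantiate the deserializer reflectively through a no-arg constructor.
    factory.setKeyDeserializer(new org.apache.kafka.common.serialization.StringDeserializer());
    factory.setValueDeserializer(new CustomMessageDeserializer(objectMapper));
    return factory;
}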
If you don't want to create a ConsumerFactory bean, and you don't have complicated logic in your deserializer (you only have return objectMapper.readValue(data, MyMessage.class);), then register a DefaultKafkaConsumerFactoryCustomizer instead:
@Bean
// inject your custom objectMapper
public DefaultKafkaConsumerFactoryCustomizer customizeJsonDeserializer(ObjectMapper objectMapper) {
return consumerFactory ->
consumerFactory.setValueDeserializerSupplier(() ->
new org.springframework.kafka.support.serializer.JsonDeserializer<>(objectMapper));
}
In this case you don't need your own CustomMessageDeserializer class at all (remove it) and Spring will automatically parse the message into your MyMessage.
The @KafkaListener annotation should then no longer contain the property properties = {"value.deserializer=com.my.kafka_test.component.CustomMessageDeserializer"}. The DefaultKafkaConsumerFactoryCustomizer bean is picked up automatically to configure the default ConsumerFactory<?, ?> (see the implementation of the KafkaAutoConfiguration#kafkaConsumerFactory method).
Here is how it works for me:
@KafkaListener(topics = "${solr.kafka.topic}", containerFactory = "batchFactory")
public void listen(List<SolrInputDocument> docs, @Header(KafkaHeaders.BATCH_CONVERTED_HEADERS) List<Map<String, Object>> headers, Acknowledgment ack) throws IOException {...}
And then I have two beans defined in my configuration:
@Profile("!test")
@Bean
@Autowired
public ConsumerFactory<String, SolrInputDocument> consumerFactory(KafkaProperties properties) {
Map<String, Object> props = properties.buildConsumerProperties();
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
DefaultKafkaConsumerFactory<String, SolrInputDocument> result = new DefaultKafkaConsumerFactory<>(props);
String validatedKeyDeserializerName = KafkaMessageType.valueOf(keyDeserializerName).toString();
ZiDeserializer<SolrInputDocument> deserializer = ZiDeserializerFactory.getInstance(validatedKeyDeserializerName);
result.setValueDeserializer(deserializer);
return result;
}
@Profile("!test")
@Bean
@Autowired
public ConcurrentKafkaListenerContainerFactory<String, SolrInputDocument> batchFactory(ConsumerFactory<String, SolrInputDocument> consumerFactory) {
ConcurrentKafkaListenerContainerFactory<String, SolrInputDocument> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory);
factory.setBatchListener(true);
factory.setConcurrency(2);
ExponentialBackOffWithMaxRetries backoff = new ExponentialBackOffWithMaxRetries(10);
backoff.setMultiplier(3); // Default is 1.5 but this seems more reasonable
factory.setCommonErrorHandler(new DefaultErrorHandler(null, backoff));
// Needed for manual commits
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
return factory;
}
Note that ZiDeserializer<SolrInputDocument> is my own interface and ZiDeserializerFactory.getInstance(validatedKeyDeserializerName) returns my custom implementation of it; ZiDeserializer extends org.apache.kafka.common.serialization.Deserializer. This works for me.

Avoiding code repetition on multischema database

I have a legacy application with a database that splits up the data into multiple schemas on the same physical database. The schemas are identical in structure.
I use a microservice built with Spring Boot and Spring Data JPA to work on a single schema. To avoid code repetition, I then created a router service that forwards each request to a replica of that single-schema microservice, each replica having a different database connection. That works, but I find it a bit overkill.
I am trying to reduce it back down to a single microservice. I haven't been successful yet, but I set up the tables with the schema property.
@Table(
name = "alerts",
schema = "ca"
)
However, it gets confused when I try to use inheritance and @MappedSuperclass to reduce the code duplication.
In addition, the @OneToMany mappings break because of the inheritance, producing errors like X references an unknown entity: Y.
Basically, is there a way in JPA to use inheritance over the same table structure where the only difference is the schema, without copying and pasting too much code? Ideally I'd like to just pass a "schema" parameter to a DAO and have it handled for me.
In the end, we just need a data source that routes according to the situation. To do this, use a @Component that extends AbstractRoutingDataSource together with a ThreadLocal that stores the request context.
The ThreadLocal would be something like this (the examples use Lombok):
@AllArgsConstructor
public class UserContext {
private static final ThreadLocal<UserContext> context =
new ThreadLocal<>();
private final String schema;
public static String getSchema() {
return context.get().schema;
}
public static void setFromXXX(...) {
context.set(new UserContext(
...
));
}
}
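The context then has to be populated once per request, typically in a servlet filter. A minimal sketch of that idea follows; the header name, the single-String setFromXXX call and the clear() method are assumptions on my part (the real setFromXXX signature is elided above):

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class SchemaContextFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        // Hypothetical: derive the schema from a request header (could equally be a
        // JWT claim or the authenticated tenant) and store it in the ThreadLocal.
        UserContext.setFromXXX(request.getHeader("X-Schema"));
        try {
            filterChain.doFilter(request, response);
        } finally {
            // Clear the ThreadLocal so pooled threads do not leak the schema into later
            // requests; UserContext would need a clear() method for this (an assumption).
            UserContext.clear();
        }
    }
}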
A source for the data sources would be needed:
@Configuration
public class DataSources {
@Bean
public DataSource schema1() {
return buildDataSource("schema1");
}
@Bean
public DataSource schema2() {
return buildDataSource("schema2");
}
private DataSource buildDataSource(String schema) {
...
return new DriverManagerDataSource(url, username, password);
}
}
And finally the router which is marked as the #Primary data source to make sure it is the one that gets used by JPA.
@Component
@Primary
public class RoutingDatasource extends AbstractRoutingDataSource {
@Autowired
@Qualifier("schema1")
private DataSource schema1;
@Autowired
@Qualifier("schema2")
private DataSource schema2;
@Override
public void afterPropertiesSet() {
setTargetDataSources(
Map.of(
"schema1", schema1,
"schema2", schema2
)
);
super.afterPropertiesSet();
}
@Override
protected Object determineCurrentLookupKey() {
return UserContext.getSchema();
}
}
This avoids the code duplication when all that is different is a schema or even a data source.

Injecting configuration dependency

I am creating a cache client wrapper using the Spring framework, to provide a caching layer for our application. Right now we are using Redis, and I have found that the spring-data-redis library is a good fit for building this wrapper.
My application will pass a configuration POJO to my wrapper and will then use the interface that I will provide.
spring-data-redis gives easy access to Redis through two objects:
RedisConnectionFactory
RedisTemplate<String, Object>
However, I will provide a simpler interface to my application, with methods like:
public Object getValue( final String key ) throws ConfigInvalidException;
public void setValue( final String key, final Object value ) throws ConfigInvalidException;
public void setValueWithExpiry(final String key, final Object value, final int seconds, final TimeUnit timeUnit) throws ConfigInvalidException;
I still want to provide RedisConnectionFactory and RedisTemplate beans.
My question is how to initialize my wrapper application with this configuration POJO?
Currently my configuration looks like this:
import java.util.List;
public class ClusterConfigurationProperties {
List<String> nodes;
public List<String> getNodes() {
return nodes;
}
public void setNodes(List<String> nodes) {
this.nodes = nodes;
}
}
And my AppConfig.java looks like this:
import com.ajio.Exception.ConfigInvalidException;
import com.ajio.configuration.ClusterConfigurationProperties;
import com.ajio.validator.Validator;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
@Configuration
public class AppConfig {
@Autowired
private ClusterConfigurationProperties clusterConfigurationProperties;
@Autowired
private Validator validator;
@Bean
ClusterConfigurationProperties clusterConfigurationProperties() {
return null;
}
@Bean
Validator validator() {
return new Validator();
}
@Bean
RedisConnectionFactory connectionFactory() throws ConfigInvalidException {
if (clusterConfigurationProperties == null)
throw new ConfigInvalidException("Please provide a cluster configuration POJO in context");
validator.validate(clusterConfigurationProperties);
return new JedisConnectionFactory(new RedisClusterConfiguration(clusterConfigurationProperties.getNodes()));
}
@Bean
RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) throws ConfigInvalidException {
RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
redisTemplate.setConnectionFactory(connectionFactory());
redisTemplate.setKeySerializer( new StringRedisSerializer() );
redisTemplate.setHashValueSerializer( new GenericToStringSerializer<>( Object.class ) );
redisTemplate.setValueSerializer( new GenericToStringSerializer<>( Object.class ) );
return redisTemplate;
}
}
Here I expect a ClusterConfigurationProperties POJO to be provided as a bean by the application that uses the wrapper's interface.
But just to make my wrapper compile, I have declared a bean that returns null. When the application then uses the wrapper there will be two beans, one from the application and one from the wrapper.
How should I resolve this problem?
What I actually wanted was to have the cluster config as a bean in my client application. For that I don't need to @Autowire the cluster config in my wrapper; instead the wrapper should take the cluster config as a method parameter, so the client passes the config object when creating the bean, and the bean created in the client code builds the Redis connection factory.
But the whole point was to keep my client unaware of Redis. So the better solution is a wrapper class that takes the cluster config POJO and creates the Redis connection factory and template itself, and the client simply declares this wrapper as a bean.
A weak grasp of Spring and of the relevant design patterns led me to this mistake.
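For illustration, a minimal sketch of such a wrapper; the class name, constructor wiring and serializers are my assumptions, not taken from the question:

import java.util.concurrent.TimeUnit;
import com.ajio.Exception.ConfigInvalidException;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

public class RedisCacheClient {

    private final RedisTemplate<String, Object> redisTemplate;

    public RedisCacheClient(ClusterConfigurationProperties properties) throws ConfigInvalidException {
        if (properties == null || properties.getNodes() == null) {
            throw new ConfigInvalidException("Please provide a cluster configuration POJO");
        }
        // Build the connection factory and template internally so the client
        // never has to know that Redis is the underlying cache.
        JedisConnectionFactory factory =
                new JedisConnectionFactory(new RedisClusterConfiguration(properties.getNodes()));
        factory.afterPropertiesSet();
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericToStringSerializer<>(Object.class));
        template.afterPropertiesSet();
        this.redisTemplate = template;
    }

    public Object getValue(final String key) {
        return redisTemplate.opsForValue().get(key);
    }

    public void setValue(final String key, final Object value) {
        redisTemplate.opsForValue().set(key, value);
    }

    public void setValueWithExpiry(final String key, final Object value, final int seconds, final TimeUnit timeUnit) {
        redisTemplate.opsForValue().set(key, value, seconds, timeUnit);
    }
}

The client application would then declare a single bean, for example @Bean RedisCacheClient cacheClient() { return new RedisCacheClient(myClusterConfigurationProperties); }, and never touch any Redis classes directly.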

Null pointer exception using Autowired annotation - GemFire Listener

I have moved all the Cassandra configuration into a single class. When I try to use the CassandraOperations instance in the GemFire cache listener I get a NullPointerException. Can you please assist me with this error?
I do not get any NullPointerException when using Spring and Cassandra alone, only when integrating with GemFire.
@Component
public class CacheListener<K, V> extends CacheListenerAdapter<K, V> implements Declarable {
@Autowired
private CassandraOperations cassandraOperations;
@Override
public void init(Properties props) {
}
public void afterCreate(EntryEvent e) {
cassandraOperations.insert(e.getNewValue());
}
@Override
public void close() {
}
}
public class CassandraConfig {
@Autowired
private Environment environment;
private static final Logger LOGGER = LoggerFactory.getLogger(CassandraConfig.class);
@Bean
public CassandraClusterFactoryBean cluster() {
CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
cluster.setContactPoints(environment.getProperty("cassandra.contactpoints"));
cluster.setPort(Integer.parseInt(environment.getProperty("cassandra.port")));
return cluster;
}
@Bean
public CassandraMappingContext mappingContext() {
BasicCassandraMappingContext mappingContext = new BasicCassandraMappingContext();
mappingContext.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), environment.getProperty("cassandra.keyspace")));
return mappingContext;
}
@Bean
public CassandraConverter converter() {
return new MappingCassandraConverter(mappingContext());
}
@Bean
public CassandraSessionFactoryBean session() throws Exception {
CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
session.setCluster(cluster().getObject());
session.setKeyspaceName(environment.getProperty("cassandra.keyspace"));
session.setConverter(converter());
session.setSchemaAction(SchemaAction.NONE);
return session;
}
@Bean
public CassandraOperations cassandraTemplate() throws Exception {
return new CassandraTemplate(session().getObject());
}
}
Exception
[error 2017/05/05 11:16:04.874 CDT <http-nio-7878-exec-1> tid=0x5b] Exception occurred in CacheListener
java.lang.NullPointerException
at CacheListener.afterCreate(CacheListener.java:27)
at com.gemstone.gemfire.internal.cache.EnumListenerEvent$AFTER_CREATE.dispatchEvent(EnumListenerEvent.java:97)
at com.gemstone.gemfire.internal.cache.LocalRegion.dispatchEvent(LocalRegion.java:8897)
at com.gemstone.gemfire.internal.cache.LocalRegion.dispatchListenerEvent(LocalRegion.java:7376)
at com.gemstone.gemfire.internal.cache.LocalRegion.invokePutCallbacks(LocalRegion.java:6158)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.invokeCallbacks(EntryEventImpl.java:1919)
at com.gemstone.gemfire.internal.cache.ProxyRegionMap$ProxyRegionEntry.dispatchListenerEvents(ProxyRegionMap.java:548)
at com.gemstone.gemfire.internal.cache.LocalRegion.basicPutPart2(LocalRegion.java:6012)
at com.gemstone.gemfire.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:232)
at com.gemstone.gemfire.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5824)
at com.gemstone.gemfire.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:118)
at com.gemstone.gemfire.internal.cache.LocalRegion.basicPut(LocalRegion.java:5214)
at com.gemstone.gemfire.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1597)
at com.gemstone.gemfire.internal.cache.LocalRegion.put(LocalRegion.java:1580)
at com.gemstone.gemfire.internal.cache.AbstractRegion.put(AbstractRegion.java:327)
at org.springframework.data.gemfire.GemfireTemplate.put(GemfireTemplate.java:189)
at org.springframework.data.gemfire.repository.support.SimpleGemfireRepository.save(SimpleGemfireRepository.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
What is not apparent in your code/configuration above is how you configured your application-specific GemFire CacheListener using Spring (Data GemFire).
I see you annotated your application CacheListener with Spring's @Component stereotype annotation, but this does nothing on its own.
Are you using Spring's classpath component scanning functionality, or perhaps Spring's annotation-based container configuration support? If you are using the latter, you know you still have to explicitly define your application CacheListener in config (JavaConfig or XML), right?
Whenever you encounter a NullPointerException on an @Autowired component/collaborator field, it is a good indication that you have a configuration problem, particularly since the @Autowired annotation implies that the "dependency" (e.g. CassandraOperations) is "required" (unless you explicitly set the required attribute of @Autowired to false, which you did not; required defaults to true).
Therefore, if the CacheListener component were picked up in the scan and a dependency could not be injected (auto-wired) because no (other) bean of the specified type (e.g. CassandraOperations) was defined in the Spring application context (which it is), then Spring would throw an Exception when evaluating your configuration class(es).
Also, your CassandraConfig class must be annotated with Spring's @Configuration annotation, or with the @Component annotation, when using either Spring classpath component scanning or annotation-based container config. Or, it must be explicitly defined as a bean in the Spring application context if you are using neither.
NOTE: the naming convention (i.e. CacheListener) is not very good since it clashes with GemFire's own CacheListener interface. It would be better to call your application-specific extension/implementation perhaps, "GemFireToCassandraCacheListener"
By way of example...
import ...;
@Configuration
class GemFireConfiguration {
@Bean
CacheFactoryBean gemfireCache() {
return new CacheFactoryBean();
}
#Bean("CassandraCache")
PartitionedRegionFactoryBean cassandraCacheRegion() {
PartitionedRegionFactoryBean cassandraCacheRegion =
new PartitionedRegionFactoryBean();
cassandraCacheRegion.setCache(gemfireCache());
cassandraCacheRegion.setClose(false);
cassandraCacheRegion.setCacheListeners(
new CacheListener[] { gemfireToCassandraCacheListener() });
return cassandraCacheRegion;
}
@Bean
GemFireToCassandraCacheListener gemfireToCassandraCacheListener() {
return new GemFireToCassandraCacheListener();
}
}
import ...;
@Configuration
class CassandraConfig {
// what you have above
}
I have plenty of GemFire configuration examples here, showing GemFire native config alongside Spring (Data GemFire) config, XML vs. JavaConfig vs. annotations, etc.
Finally...
Technically, it might be better to use a GemFire CacheWriter, attached to the Region, rather than a CacheListener, since what you are doing (updating Cassandra on a cache create) is the intended purpose of a CacheWriter.
Of course, the CacheListener is called "after" create vs. the CacheWriter which is "before" create. However, I would say it is always better to update the "primary" data source (or "source of truth") before updating the "cache" to reflect the data source. This is applicable especially if there are constraints in the primary data source that might cause an update to fail. You would not want the cache to be updated if the primary data source could not be.
A CacheWriter is configured similarly to a CacheListener, like so...
#Bean("CassandraCache")
PartitionedRegionFactoryBean cassandraCacheRegion() {
PartitionedRegionFactoryBean cassandraCacheRegion =
new PartitionedRegionFactoryBean();
cassandraCacheRegion.setCache(gemfireCache());
cassandraCacheRegion.setClose(false);
cassandraCacheRegion.setCacheWriter(gemfireToCassandraCacheWriter());
return cassandraCacheRegion;
}
@Bean
GemFireToCassandraCacheWriter gemfireToCassandraCacheWriter(
CassandraOperations cassandraOperations) {
return new GemFireToCassandraCacheWriter(cassandraOperations);
}
Where the GemFireToCassandraCacheWriter would be defined as...
class GemFireToCassandraCacheWriter extends CacheWriterAdapter {
private CassandraOperations cassandraOperations;
// Using constructor injection is better than field injection
GemFireToCassandraCacheWriter(CassandraOperations cassandraOperations) {
this.cassandraOperations = cassandraOperations;
}
public void beforeCreate(EntryEvent<?, ?> event) {
cassandraOperations.insert(event.getNewValue());
}
}
NOTE: a Region can only have 1 CacheWriter. FYI, functionally the CacheWriter is the counterpart to a CacheLoader. See the GemFire User Guide for more details. In particular, see here, here and here.
Additionally, if you are just using GemFire as a cache for state that is primarily managed in Cassandra, then you might also consider Spring's Cache Abstraction, for which Spring Data GemFire positions GemFire as a "provider" in the abstraction.
Not sure what your GemFire-to-Cassandra use case is all about, but food for thought.
Hope this helps!
-John

Spring Boot Data JPA: Hibernate Session issue

I'm developing a Spring Boot based web application. I rely heavily on @ComponentScan and @EnableAutoConfiguration, with no explicit XML configuration in place.
I have the following problem. I have a JPA-Annotated Entity class called UserSettings:
@Entity public class UserSettings {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private long id;
@OneToMany(cascade = CascadeType.ALL)
private Set<Preference> preferences; // 'Preference' is another #Entity class
public UserSettings() {
this.preferences = new HashSet<Preference>();
}
// some more primitive properties, Getters, Setters...
}
I followed this tutorial and created a repository interface that extends JpaRepository<UserSettings,Long>.
Furthermore, I have a SettingsManager bean:
@Component public class SettingsManager {
@Autowired
UserSettingsRepository settingsRepository;
@PostConstruct
protected void init() {
// 'findGlobalSettings' is a simple custom HQL query
UserSettings globalSettings = this.settingsRepository.findGlobalSettings();
if (globalSettings == null) {
globalSettings = new UserSettings();
this.settingsRepository.saveAndFlush(globalSettings);
}
}
}
Later in the code, I load the UserSettings object created here, again with the findGlobalSettings query.
The problem is: Every time I try to access the #OneToMany attribute of the settings object, I get the following exception:
org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role org.example.UserSettings.preferences, could not initialize proxy - no Session
I understand that each HTTP Session has its own Hibernate Session, as described in the accepted answer of this question, but that does not apply in my case (currently I'm testing this within the same HTTP Session), which is why I have no idea where this exception comes from.
What am I doing wrong and how can I circumvent this error?
If you want to be able to access mapped entities outside the transaction (which you seem to be doing), you need to flag the association as an "eager" fetch, i.e.
#OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
This question has been answered beautifully by @Steve. However, if you still want to keep your lazy loading implementation, you may want to try this:
import javax.servlet.Filter;
import org.springframework.boot.context.embedded.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.hibernate4.support.OpenSessionInViewFilter;
@Configuration
@ComponentScan
public class AppConfig {
@Bean
public FilterRegistrationBean filterRegistration() {
FilterRegistrationBean registration = new FilterRegistrationBean();
registration.setFilter(openSessionInView());
registration.addUrlPatterns("/*");
return registration;
}
@Bean
public Filter openSessionInView() {
return new OpenSessionInViewFilter();
}
}
What this configuration does is register a Filter for requests to the path "/*" that keeps your Hibernate Session open in the view.
This (Open Session in View) is an anti-pattern and must be used with care.
NOTE: As of Spring Boot 1.3.5.RELEASE, when you use the default configuration with Spring Data JPA auto-configuration, you shouldn't encounter this problem.
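As far as I know, that is because Spring Boot's JPA auto-configuration registers an open-in-view interceptor by default in web applications; the behaviour is controlled by a property you can set explicitly in application.properties:
spring.jpa.open-in-view=true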
I faced a similar issue in a Spring Boot application; after some googling I was able to fix it by adding the following code to my application.
@Bean(name = "entityManagerFactory")
public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder builder, DataSource dataSource) {
LocalContainerEntityManagerFactoryBean entityManagerFactoryBean = new LocalContainerEntityManagerFactoryBean();
entityManagerFactoryBean.setDataSource(dataSource);
entityManagerFactoryBean.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
Properties jpaProperties = new Properties();
jpaProperties.put("hibernate.enable_lazy_load_no_trans", true);
entityManagerFactoryBean.setJpaProperties(jpaProperties);
return entityManagerFactoryBean;
}
Referred here.
