Adding Compression to Spring Data Redis with LettuceConnectionFactory

I see Lettuce can do compression for Redis serialized objects: https://lettuce.io/core/release/reference/#codecs.compression
Is there any way to set this configuration within Spring Boot / Spring Data Redis's LettuceConnectionFactory, or in some other bean? I've seen this question asked here as well: https://github.com/lettuce-io/lettuce-core/issues/633
I'd like to compress all serialized objects being sent to Redis to reduce network traffic between boxes.
Thanks

I ended up solving it the following way.
Create a RedisTemplate Spring Bean. This allows us to set a custom serializer.
@Bean
public RedisTemplate<Object, Object> redisTemplate(LettuceConnectionFactory connectionFactory) {
    RedisTemplate<Object, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(connectionFactory);
    // Set a custom serializer that will compress/decompress data to/from Redis
    RedisSerializerGzip serializerGzip = new RedisSerializerGzip();
    template.setValueSerializer(serializerGzip);
    template.setHashValueSerializer(serializerGzip);
    return template;
}
Create a custom serializer. I decided to extend JdkSerializationRedisSerializer, since that is what Spring uses by default for Redis. I added compression/decompression in each respective method and rely on the superclass for the actual serialization.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

import org.apache.commons.io.IOUtils;
import org.springframework.data.redis.serializer.JdkSerializationRedisSerializer;
import org.springframework.data.redis.serializer.SerializationException;

public class RedisSerializerGzip extends JdkSerializationRedisSerializer {

    @Override
    public Object deserialize(byte[] bytes) {
        return super.deserialize(decompress(bytes));
    }

    @Override
    public byte[] serialize(Object object) {
        return compress(super.serialize(object));
    }

    ////////////////////////
    // Helpers
    ////////////////////////

    private byte[] compress(byte[] content) {
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        try (GZIPOutputStream gzipOutputStream = new GZIPOutputStream(byteArrayOutputStream)) {
            gzipOutputStream.write(content);
        } catch (IOException e) {
            throw new SerializationException("Unable to compress data", e);
        }
        return byteArrayOutputStream.toByteArray();
    }

    private byte[] decompress(byte[] contentBytes) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            IOUtils.copy(new GZIPInputStream(new ByteArrayInputStream(contentBytes)), out);
        } catch (IOException e) {
            throw new SerializationException("Unable to decompress data", e);
        }
        return out.toByteArray();
    }
}
Finally, use Spring dependency injection to obtain the RedisTemplate bean wherever compressed access to Redis is needed.
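For example, a consuming service might look like this (a minimal sketch; the CacheService class and its method names are made up for illustration):
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class CacheService {

    private final RedisTemplate<Object, Object> redisTemplate;

    public CacheService(RedisTemplate<Object, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void put(String key, Object value) {
        // Values are gzipped transparently by RedisSerializerGzip before being written to Redis
        redisTemplate.opsForValue().set(key, value);
    }

    public Object get(String key) {
        // Decompressed and deserialized on the way back
        return redisTemplate.opsForValue().get(key);
    }
}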

Here is a sample Java Spring Boot Redis Cluster data configuration.
It is an implementation with Redis Cluster and a Redis Cache Manager, featuring:
Snappy compression
Kryo serialization
TTL support per cache key
Gradle dependencies: spring-data-redis, snappy-java, kryo, commons-codec
Link to GitHub: https://github.com/cboursinos/java-spring-redis-compression-snappy-kryo

Related

Leverage Spring boot Redis Auto configure logic for RedisConnectionFactory

Spring Boot auto-configures a RedisConnectionFactory if spring-data-redis is on the classpath; the RedisConnectionFactory is initialized in LettuceConnectionConfiguration when lettuce-core is also on the classpath.
I have only one Redis store at the moment, so I am leveraging Spring Boot's auto-configuration.
Now I am adding a second Redis store: one store is used by default, and the other is used when specified with cacheManager = "secondaryCacheManager" on the @Cacheable annotation, so the application should be able to cache and read from both Redis stores.
To configure both stores, we have to configure the primary and secondary RedisConnectionFactory and cacheManager in custom configuration (because Spring does not auto-configure a RedisConnectionFactory if one already exists in any custom configuration).
This custom configuration, however, is missing a lot of the logic that happens while configuring the RedisConnectionFactory in LettuceConnectionConfiguration.
The auto-configuration logic in LettuceConnectionConfiguration is package-private, so it cannot be called directly from custom configuration.
We would like to leverage the auto-configuration logic in
LettuceConnectionConfiguration while configuring the custom
RedisConnectionFactory for both the primary and secondary Redis caches.
Is there a way to achieve this?
The reason is that we would like to keep the Redis connection configuration as close as possible to what Spring Boot's auto-configuration produces.
Currently I am using the code below to configure both the primary and secondary RedisConnectionFactory with pool configuration, with some code copy-pasted from the LettuceConnectionConfiguration class.
public static LettuceConnectionFactory buildLettuceConnectionFactory(RedisProperties properties, ClientResources clientResources) {
    RedisStandaloneConfiguration standaloneConfiguration = new RedisStandaloneConfiguration(properties.getHost(), properties.getPort());
    standaloneConfiguration.setDatabase(properties.getDatabase());
    if (properties.getPassword() != null) {
        standaloneConfiguration.setPassword(RedisPassword.of(properties.getPassword()));
    }
    if (properties.getUsername() != null) {
        standaloneConfiguration.setUsername(properties.getUsername());
    }
    LettucePoolingClientConfiguration poolingClientConfiguration = LettucePoolingClientConfiguration.builder()
            .poolConfig(buildGenericObjectPoolConfig(properties))
            .shutdownTimeout(properties.getLettuce().getShutdownTimeout())
            .clientOptions(createClientOptions(properties))
            .clientResources(clientResources)
            .build();
    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(
            standaloneConfiguration, poolingClientConfiguration);
    lettuceConnectionFactory.afterPropertiesSet();
    return lettuceConnectionFactory;
}

private static GenericObjectPoolConfig buildGenericObjectPoolConfig(RedisProperties properties) {
    RedisProperties.Pool pool = properties.getLettuce().getPool();
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    if (Objects.nonNull(pool)) {
        poolConfig.setMaxIdle(pool.getMaxIdle());
        poolConfig.setMinIdle(pool.getMinIdle());
        poolConfig.setMaxTotal(pool.getMaxActive());
        poolConfig.setMaxWaitMillis(pool.getMaxWait().toMillis());
    }
    return poolConfig;
}

private static ClientOptions createClientOptions(RedisProperties properties) {
    ClientOptions.Builder builder = initializeClientOptionsBuilder(properties);
    Duration connectTimeout = properties.getConnectTimeout();
    if (connectTimeout != null) {
        builder.socketOptions(SocketOptions.builder().connectTimeout(connectTimeout).build());
    }
    return builder.timeoutOptions(TimeoutOptions.enabled()).build();
}

private static ClientOptions.Builder initializeClientOptionsBuilder(RedisProperties properties) {
    if (properties.getCluster() != null) {
        ClusterClientOptions.Builder builder = ClusterClientOptions.builder();
        Refresh refreshProperties = properties.getLettuce().getCluster().getRefresh();
        Builder refreshBuilder = ClusterTopologyRefreshOptions.builder()
                .dynamicRefreshSources(refreshProperties.isDynamicRefreshSources());
        if (refreshProperties.getPeriod() != null) {
            refreshBuilder.enablePeriodicRefresh(refreshProperties.getPeriod());
        }
        if (refreshProperties.isAdaptive()) {
            refreshBuilder.enableAllAdaptiveRefreshTriggers();
        }
        return builder.topologyRefreshOptions(refreshBuilder.build());
    }
    return ClientOptions.builder();
}
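For reference, here is a minimal sketch of how such a helper could back the two cache managers described above (the secondary property prefix, the bean names, and the assumption that buildLettuceConnectionFactory from the snippet above is accessible, e.g. via static import, are mine, not Spring Boot defaults):
import io.lettuce.core.resource.ClientResources;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.data.redis.RedisProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.cache.CacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class DualRedisCacheConfiguration {

    // Primary connection settings, still bound from the standard spring.redis.* properties
    @Bean
    @Primary
    @ConfigurationProperties(prefix = "spring.redis")
    public RedisProperties primaryRedisProperties() {
        return new RedisProperties();
    }

    // Secondary connection settings bound from a custom prefix (assumed; pick any prefix you like)
    @Bean
    @ConfigurationProperties(prefix = "app.redis.secondary")
    public RedisProperties secondaryRedisProperties() {
        return new RedisProperties();
    }

    // Default cache manager, used by @Cacheable when no cacheManager is specified
    @Bean
    @Primary
    public CacheManager cacheManager(@Qualifier("primaryRedisProperties") RedisProperties properties,
                                     ClientResources clientResources) {
        LettuceConnectionFactory factory = buildLettuceConnectionFactory(properties, clientResources);
        return RedisCacheManager.builder(factory).build();
    }

    // Used only when @Cacheable(..., cacheManager = "secondaryCacheManager") is specified
    @Bean("secondaryCacheManager")
    public CacheManager secondaryCacheManager(@Qualifier("secondaryRedisProperties") RedisProperties properties,
                                              ClientResources clientResources) {
        LettuceConnectionFactory factory = buildLettuceConnectionFactory(properties, clientResources);
        return RedisCacheManager.builder(factory).build();
    }
}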

ListenerExecutionFailedException Nullpointer when trying to index kafka payload through new ElasticSearch Java API Client

I'm migrating from the HLRC to the new client, things were smooth but for some reason I cannot index a specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration {

    @Autowired
    private InternalProperties conf;

    public ElasticsearchClient sslClient() {
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
        HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
        RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
        try {
            SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
            restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                @Override
                public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                    return httpClientBuilder.setSSLContext(sslContext)
                            .setDefaultCredentialsProvider(credentialsProvider);
                }
            });
        } catch (Exception e) {
            e.printStackTrace();
        }
        RestClient restClient = restClientBuilder.build();
        ElasticsearchTransport transport = new RestClientTransport(
                restClient, new JacksonJsonpMapper());
        ElasticsearchClient client = new ElasticsearchClient(transport);
        return client;
    }
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    public ThisDtoIndexClass() {
    }

    // client is declared in the class this one extends from
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    @KafkaListener(topics = "esTopic")
    public void in(@Payload(required = false) customDto doc)
            throws ThisDtoIndexClassException, ElasticsearchException, IOException {
        if (doc != null && doc.getId() != null) {
            IndexRequest.Builder<customDto> indexReqBuilder = new IndexRequest.Builder<>();
            indexReqBuilder.index("index-for-this-Dto");
            indexReqBuilder.id(doc.getId());
            indexReqBuilder.document(doc);
            IndexResponse response = client.index(indexReqBuilder.build());
        } else {
            throw new ThisDtoIndexClassException("document is null");
        }
    }
}
This is all done in Spring Boot (v2.6.8) with Elasticsearch 7.17.3. According to the debugger, the payload is NOT null; it even fetches the id correctly while stepping through. For some reason it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build()?). Nothing gets indexed, yet the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers), and that one works just fine.
I suspect it has something to do with the way my client is set up and/or with Kafka. Please point me in the right direction.
I solved it by deleting the default (no-arg) constructor. With it in place, Spring used it instead of the constructor that wires up the client (or simply never acknowledged the extended constructor), so my client was always null. The error message was extremely misleading, since it actually wasn't Kafka's fault at all.
Removing the default constructor makes Spring call the correct constructor, and I was able to index again. I assume this was a Spring Boot bean-initialization quirk.
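In other words, with the no-arg constructor gone, Spring is forced to use the injecting constructor, so the client field is initialized before the listener ever runs. A minimal sketch of the fixed class, keeping the original names:
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {

    // No default constructor: Spring must use this one, so `client` is always initialized
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    // ... the @KafkaListener method stays unchanged ...
}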

ReadOnlyKeyValueStore from a KTable using spring-kafka

I am migrating a Kafka Streams implementation that uses the plain Kafka APIs to spring-kafka, as it is incorporated in a Spring Boot application.
Everything works fine: the KStream, GlobalKTable, and branching I have all work perfectly, but I am having a hard time incorporating a ReadOnlyKeyValueStore. Based on the spring-kafka documentation here: https://docs.spring.io/spring-kafka/docs/2.6.10/reference/html/#streams-spring
It says:
If you need to perform some KafkaStreams operations directly, you can
access that internal KafkaStreams instance by using
StreamsBuilderFactoryBean.getKafkaStreams(). You can autowire
StreamsBuilderFactoryBean bean by type, but you should be sure to use
the full type in the bean definition.
Based on that, I tried to incorporate it into my example as in the following fragments:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public KafkaStreamsConfiguration defaultKafkaStreamsConfig() {
    Map<String, Object> props = defaultStreamsConfigs();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "quote-stream");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "stock-quotes-stream-group");
    return new KafkaStreamsConfiguration(props);
}

@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_BUILDER_BEAN_NAME)
public StreamsBuilderFactoryBean defaultKafkaStreamsBuilder(KafkaStreamsConfiguration defaultKafkaStreamsConfig) {
    return new StreamsBuilderFactoryBean(defaultKafkaStreamsConfig);
}

...

final GlobalKTable<String, LeveragePrice> leverageBySymbolGKTable = streamsBuilder
        .globalTable(KafkaConfiguration.LEVERAGE_PRICE_TOPIC,
                Materialized.<String, LeveragePrice, KeyValueStore<Bytes, byte[]>>as("leverage-by-symbol-table")
                        .withKeySerde(Serdes.String())
                        .withValueSerde(leveragePriceSerde));

leveragePriceView = myKStreamsBuilder.getKafkaStreams().store("leverage-by-symbol-table", QueryableStoreTypes.keyValueStore());
But adding the StreamsBuilderFactoryBean definition (which seems to be needed to get a reference to KafkaStreams) causes an error:
The bean 'defaultKafkaStreamsBuilder', defined in class path resource [com/resona/springkafkastream/repository/KafkaConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [org/springframework/kafka/annotation/KafkaStreamsDefaultConfiguration.class] and overriding is disabled.
The issue is that I don't want to control the lifecycle of the stream (that's what I end up with when using the plain Kafka APIs); I would like to get a reference to the default, Spring-managed one, but whenever I try to expose the bean I get this error. Any ideas on the correct approach to this with spring-kafka?
P.S. I am not interested in solutions using spring-cloud-stream; I am looking for a spring-kafka implementation.
You don't need to define any new beans; something like this should work...
spring.application.name=quote-stream
spring.kafka.streams.properties.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.kafka.streams.properties.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
@SpringBootApplication
@EnableKafkaStreams
public class So69669791Application {

    public static void main(String[] args) {
        SpringApplication.run(So69669791Application.class, args);
    }

    @Bean
    GlobalKTable<String, String> leverageBySymbolGKTable(StreamsBuilder sb) {
        return sb.globalTable("gkTopic",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("leverage-by-symbol-table"));
    }

    private ReadOnlyKeyValueStore<String, String> leveragePriceView;

    @Bean
    StreamsBuilderFactoryBean.Listener afterStart(StreamsBuilderFactoryBean sbfb,
            GlobalKTable<String, String> leverageBySymbolGKTable) {
        StreamsBuilderFactoryBean.Listener listener = new StreamsBuilderFactoryBean.Listener() {
            @Override
            public void streamsAdded(String id, KafkaStreams streams) {
                leveragePriceView = streams.store("leverage-by-symbol-table", QueryableStoreTypes.keyValueStore());
            }
        };
        sbfb.addListener(listener);
        return listener;
    }

    @Bean
    KStream<String, String> stream(StreamsBuilder builder) {
        KStream<String, String> stream = builder.stream("someTopic");
        stream.to("otherTopic");
        return stream;
    }
}
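Once the streams are running and streamsAdded() has fired, the store can be queried like any ReadOnlyKeyValueStore. A small usage sketch (the key "AAPL" is just an example value):
// e.g. inside a REST controller or scheduled task, after startup
String value = leveragePriceView.get("AAPL"); // returns null if the key is not in the global table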

spring boot with redis

I am working with Spring Boot and Redis for caching. I can cache data fetched from the database (Oracle) using @Cacheable(key = "{#input,#page,#size}", value = "on_test").
However, when I try to fetch the data for the key "on_test::0,0,10" with RedisTemplate, the result is empty.
Why?
Redis Config:
@Configuration
public class RedisConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration("localhost", 6379);
        redisStandaloneConfiguration.setPassword(RedisPassword.of("admin#123"));
        return new JedisConnectionFactory(redisStandaloneConfiguration);
    }

    @Bean
    public RedisTemplate<String, Objects> redisTemplate() {
        RedisTemplate<String, Objects> template = new RedisTemplate<>();
        template.setStringSerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}

// Service
@Override
@Cacheable(key = "{#input,#page,#size}", value = "on_test")
public Page<?> getAllByZikaConfirmedClinicIs(Integer input, int page, int size) {
    try {
        Pageable newPage = PageRequest.of(page, size);
        String fromCache = controlledCacheService.getFromCache();
        if (fromCache == null && input != null) {
            log.info("cache is empty lets initials it!!!");
            Page<DataSet> all = dataSetRepository.getAllByZikaConfirmedClinicIs(input, newPage);
            List<DataSet> d = redisTemplate.opsForHash().values("on_test::0,0,10");
            System.out.print(d);
            return all;
        }
        return null;
The whole point of using @Cacheable is that you don't need to use RedisTemplate directly. You just need to call getAllByZikaConfirmedClinicIs() (from outside of the class it is defined in) and Spring will automatically check first whether a cached result is available and return that instead of calling the method.
If that's not working, have you annotated one of your Spring Boot configuration classes with @EnableCaching to enable caching?
You might also need to set spring.cache.type=REDIS in application.properties, or spring.cache.type: REDIS in application.yml, to ensure Spring is using Redis and not some other cache provider.
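For example, something along these lines should be enough to switch caching on (a minimal sketch; the class name is arbitrary):
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching // without this, @Cacheable annotations are silently ignored
public class CachingConfig {
}
With that in place, calling getAllByZikaConfirmedClinicIs(...) from another bean should populate the on_test cache without touching RedisTemplate directly.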

How to configure Spring cloud stream (kafka) to use protobuf as serialization

I am using Spring cloud stream (kafka) to exchange messages between producer and consumer microservices.
It currently exchanges data with native Java serialization. As per the Spring Cloud documentation, it supports JSON and Avro serialization.
Has anyone tried Protobuf serialization (a message converter) in Spring Cloud Stream?
Edit (added later):
I wrote this MessageConverter:
public class ProtobufMessageConverter<T extends AbstractMessage> extends AbstractMessageConverter
{
    private Parser<T> parser;

    public ProtobufMessageConverter(Parser<T> parser)
    {
        super(new MimeType("application", "protobuf"));
        this.parser = parser;
    }

    @Override
    protected boolean supports(Class<?> clazz)
    {
        if (clazz != null)
        {
            return EquipmentProto.Equipment.class.isAssignableFrom(clazz);
        }
        return true;
    }

    @Override
    public Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint)
    {
        if (!(message.getPayload() instanceof byte[]))
        {
            return null;
        }
        try
        {
            // return EquipmentProto.Equipment.parseFrom((byte[]) message.getPayload());
            return parser.parseFrom((byte[]) message.getPayload());
        }
        catch (Exception e)
        {
            this.logger.error(e.getMessage(), e);
        }
        return null;
    }

    @Override
    protected Object convertToInternal(Object payload, MessageHeaders headers, Object conversionHint)
    {
        return ((AbstractMessage) payload).toByteArray();
    }
}
It's really not a question of trying but rather of just doing it, since converters are a natural extension mechanism (inherited from Spring Integration) in spring-cloud-stream that exists specifically to address these concerns. So yes, you can add your own custom converter.
Also, keep in mind that Kafka also has the concept of native serdes, so you need to make sure the two do not conflict with each other.
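For example, a converter like the one above is typically exposed as a bean so the binder can pick it up. A hedged sketch (depending on the spring-cloud-stream version, the @StreamMessageConverter annotation may be required, and the binding's content type would then be set to application/protobuf to match the converter's mime type):
import org.springframework.cloud.stream.annotation.StreamMessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.converter.MessageConverter;

@Configuration
public class ProtobufConverterConfiguration {

    // Registers the custom converter with the binder; the parser comes from the generated protobuf class
    @Bean
    @StreamMessageConverter
    public MessageConverter equipmentProtobufConverter() {
        return new ProtobufMessageConverter<>(EquipmentProto.Equipment.parser());
    }
}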
