Is it possible to set Redis key expiration time when using Spring Integration RedisMessageStore - spring-boot

Dears, I'd like to auto-expire Redis keys when using org.springframework.integration.redis.store.RedisMessageStore class in Spring Integration. I see some methods related to "expiry callback" but I could not find any documentation or examples yet. Any advice will be much appreciated.
@Bean
public MessageStore redisMessageStore(LettuceConnectionFactory redisConnectionFactory) {
    RedisMessageStore store = new RedisMessageStore(redisConnectionFactory, "TEST_");
    return store;
}
Spring Boot 2.6.3, with spring-integration and spring-boot-starter-data-redis.

The RedisMessageStore does not have any expiration features, and technically it must not: the point of this kind of store is to keep data until it is consumed. Look at it as persistent storage. The RedisMetadataStore is based on the RedisProperties object, so it also cannot apply expiration to a particular entry.
You are probably thinking of the MessageGroupStoreReaper, which indeed calls MessageGroupStore.expireMessageGroups(long timeout), but that is an artificial, cross-store mechanism provided by the framework. Its logic relies on group.getTimestamp() and group.getLastModified(), so it is still not Redis's native key auto-expiration.
The MessageGroupStoreReaper is a process that needs to be run in your application: nothing Redis-specific.
See more info in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#reaper
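If periodic cleanup of stale message groups is what you are really after, a minimal sketch of wiring a MessageGroupStoreReaper against the store defined above could look like this (the timeout and schedule values are illustrative placeholders, not recommendations):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.store.MessageGroupStore;
import org.springframework.integration.store.MessageGroupStoreReaper;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@Configuration
@EnableScheduling
public class ReaperConfig {

    @Autowired
    private MessageGroupStoreReaper reaper;

    // Expires message groups that have not been modified for 30 seconds.
    @Bean
    public MessageGroupStoreReaper messageGroupStoreReaper(MessageGroupStore redisMessageStore) {
        MessageGroupStoreReaper reaper = new MessageGroupStoreReaper(redisMessageStore);
        reaper.setTimeout(30_000);
        reaper.setExpireOnDestroy(true); // also expire remaining groups on shutdown
        return reaper;
    }

    // Runs the reaper every 10 seconds; this is plain scheduling, nothing Redis-specific.
    @Scheduled(fixedRate = 10_000)
    public void reap() {
        this.reaper.run();
    }
}

Note that this removes whole message groups based on their timestamps, as described above; it does not set a TTL on the underlying Redis keys.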

Related

Spring Data Gemfire: TTL expiration annotation is not working; how to set TTL using annotations?

In Gfsh, I was able to do: create region --name=employee --type=REPLICATE --enable-statistics=true --entry-time-to-live-expiration=900.
We have a requirement to create the Region in Java using the @EnableEntityDefinedRegions annotation. When I run describe in Gfsh the Regions show up, but the entry time-to-live (TTL) expiration is not being set with any of the approaches below.
Any idea how to set TTL in Java?
Spring Boot 2.5.x and spring-gemfire-starter 1.2.13.RELEASE are used in the app.
@EnableStatistics
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
@EnableEntityDefinedRegions
public class BaseApplication {
....
@Region("Employee")
public class Employee {
or
@EnableStatistics
@EnableExpiration
public class BaseApplication {
----
@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
@Expiration(timeout = "60", action = "DESTROY")
public class Employee {
....
or
Using the bean-creation approach is also not working; I get the error "operation is not supported on a client cache".
@EnableEntityDefinedRegions
// @PeerCacheApplication for peer cache; Region is not creating, PCC gemfire
public class BaseApplication {
---
}
#Bean(name="employee")
PartitionedRegionFactoryBean<String, Employee> getEmployee
(final GemFireCache cache,
RegionAttributes<String, Employee> peopleRegionAttributes) {
PartitionedRegionFactoryBean<String, Employee> getEmployee = new PartitionedRegionFactoryBean<String, Employee>();
getEmployee.setCache(cache);
getEmployee.setAttributes(peopleRegionAttributes);
getEmployee.setCache(cache);
getEmployee.setClose(false);
getEmployee.setName("Employee");
getEmployee.setPersistent(false);
getEmployee.setDataPolicy( DataPolicy.PARTITION);
getEmployee.setStatisticsEnabled( true );
getEmployee.setEntryTimeToLive( new ExpirationAttributes(60) );
return getEmployee;
}
@Bean
@SuppressWarnings("unchecked")
RegionAttributesFactoryBean EmployeeAttributes() {
    RegionAttributesFactoryBean EmployeeAttributes = new RegionAttributesFactoryBean();
    EmployeeAttributes.setKeyConstraint(String.class);
    EmployeeAttributes.setValueConstraint(Employee.class);
    return EmployeeAttributes;
}
First, Spring Boot for Apache Geode (SBDG) 1.2.x is already EOL because Spring Boot 2.2.x is EOL (see details on support). SBDG follows Spring Boot's support lifecycle and policies.
Second, SBDG 1.2.x is based on Spring Boot 2.2.x. See the Version Compatibility Matrix for further details. We do not support mismatched dependency versions. While mismatched versions may work in certain cases (mileage varies depending on your use case), version combinations not explicitly stated in the Version Compatibility Matrix are nonetheless unsupported. Also see the documentation on this matter.
Now, regarding your problem with TTL Region entry expiration policies...
SBDG auto-configuration creates an Apache Geode ClientCache instance by default (see docs). You cannot create a PARTITION Region using a ClientCache instance.
If your Spring Boot application is intended to be a peer Cache instance in an Apache Geode cluster (server-side), then you must explicitly declare your intention by overriding SBDG's auto-configuration, like so:
@PeerCacheApplication
@SpringBootApplication
class MySpringBootApacheGeodePeerCacheApplication {
    // ...
}
TIP: See the documentation on creating peer Cache applications using SBDG.
Keep in mind that when you override SBDG's auto-configuration, you implicitly become responsible for other aspects of Apache Geode's configuration, e.g. security. Heed the warning.
On the other hand, if your intent is truly to run your Spring Boot/SBDG application as a cache "client" (i.e. a ClientCache instance, the default), then TTL Region entry expiration policies do not make sense on client PROXY Regions, which is the default DataPolicy (EMPTY) for client Regions when using the Spring Data for Apache Geode (SDG) @EnableEntityDefinedRegions annotation (see Javadoc). This is because Apache Geode client PROXY Regions do not store any data locally; all data access operations are forwarded to the server/cluster.
Even if you alter the configuration to use client CACHING_PROXY Regions, the TTL Region expiration policies will only take effect locally. You must configure your corresponding server/cluster Regions separately (e.g. using Gfsh).
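For illustration only, if local, client-side expiration on a CACHING_PROXY Region is really what you want, the configuration from the question might be adjusted roughly as follows (imports omitted; the region shortcut and timeout are assumptions for the example, and the server/cluster Region still needs its own expiration configured):

// BaseApplication.java
@SpringBootApplication
@EnableStatistics
@EnableExpiration
@EnableEntityDefinedRegions(clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
public class BaseApplication {
    // ...
}

// Employee.java
@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY") // applies to the local client Region only
public class Employee {
    // ...
}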
Also, even though you can push cluster configuration from the client using SDG's @EnableClusterConfiguration (doc, Javadoc), or alternatively and preferably, SBDG's @EnableClusterAware annotation (doc, Javadoc; which is meta-annotated with SDG's @EnableClusterConfiguration), this functionality only pushes Region and Index configuration to the cluster, not expiration policies.
See the SBDG documentation on expiration for further details. This doc also leads to SDG's documentation on expiration, and specifically Annotation-based expiration configuration.
I see that the SBDG docs are not really clear on the matter of expiration, so I have filed an issue ticket in SBDG to make this clearer.

Multitenancy support when using Apache Camel and Hibernate (in a Spring application)

I have a Spring application which uses the Hibernate schema strategy and a TenantContext class to store the tenant identifier (same design shown here: https://vladmihalcea.com/hibernate-database-schema-multitenancy/).
Everything works fine when dealing with synchronous HTTP requests handled by Spring.
Besides that, I have some Camel routes which are triggered by cron jobs. They use the JPA component to read from or write to a datasource. The Exchange object knows the tenant identifier. How do I transfer that information to Hibernate?
I was thinking about using a listener or interceptor to get the tenant id from the Exchange and set the TenantContext object at every step in the route. The TenantContext will then be used by the Hibernate CurrentTenantIdentifierResolver class to resolve the tenant.
What should the TenantContext look like? Is ThreadLocal a viable option? What about async threads?
In general, do you have any good solution to support Hibernate multitenancy when using Camel?
In general, it depends on your use case ;)
But we have done something similar, using a ThreadLocal in the CurrentTenantIdentifierResolver.
Then in the places where we need to set the tenant (e.g. at the start of the job being executed by the cronjob, in your case), we have an instance of tenantIdentifierResolver used like so:
tenantIdentifierResolver.withTenantId(tenant, () -> {
    try {
        doWhatever(param1, param2, etc);
    } catch (WhateverException we) {
        throw new MyBusinessException(we);
    }
});
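For illustration, a minimal sketch of such a resolver is shown below. Hibernate only requires the CurrentTenantIdentifierResolver contract; the withTenantId helper and the default tenant name are additions made up for this example:

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;
import org.springframework.stereotype.Component;

@Component
public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    private static final String DEFAULT_TENANT = "public"; // assumption: a fallback schema

    private final ThreadLocal<String> currentTenant = new ThreadLocal<>();

    // Runs the given action with the tenant id bound to the current thread.
    public void withTenantId(String tenantId, Runnable action) {
        currentTenant.set(tenantId);
        try {
            action.run();
        } finally {
            currentTenant.remove(); // avoid leaking the tenant to pooled/reused threads
        }
    }

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenantId = currentTenant.get();
        return tenantId != null ? tenantId : DEFAULT_TENANT;
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}

Keep in mind that a ThreadLocal only works while the JPA access stays on the thread that set the tenant; with Camel's asynchronous routing you would have to re-bind the tenant (for example from an Exchange header) on whatever thread executes the JPA step.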

Axon 4 XStream configuration

When running my Spring Boot app which includes Axon 4 I see the following in my output console:
Security framework of XStream not initialized, XStream is probably vulnerable.
How do I go about securing the XStream included in Axon 4?
For clarification, I am speaking about how to configure the XStream instance that Axon 4 uses. I am not certain whether this should be done in the YAML file or in one of the configuration classes. Everywhere I have tried the information detailed in this answer, it does not affect the XStream configuration and I still get the same warning.
Update:
Based on the answers below, this question seems to be two fold. Thanks to the answers below I managed to get this working as follows (based on information posted at this answer):
// AxonConfig.java
@Bean
XStream xstream() {
    XStream xstream = new XStream();
    // clear out existing permissions and set our own
    xstream.addPermission(NoTypePermission.NONE);
    // allow types from the listed packages
    xstream.allowTypesByWildcard(new String[] {
        "com.ourpackages.**",
        "org.axonframework.**",
        "java.**",
        "com.thoughtworks.xstream.**"
    });
    return xstream;
}

@Bean
@Primary
public Serializer serializer(XStream xStream) {
    return XStreamSerializer.builder().xStream(xStream).build();
}
I didn't want to answer my own question as I think Jan got the correct answer combined with Steven pointing to the Spring Boot config.
I am certain I will need to whittle away at the package scopes and will do so in due course. Thanks Jan and Steven for your assistance.
This is not Axon specific, check this question for background and solution: Security framework of XStream not initialized, XStream is probably vulnerable
Jan Galinski is right in that this isn't an Axon-specific issue per se, but rather a shift within the XStream package. Regardless, the link Jan shares is very valuable.
From there, you can create your own XStream object, instead of using the one the XStreamSerializer creates for you when utilizing Axon. You can then feed that object to the builder() of the XStreamSerializer.
As you are using Spring Boot too, simply having a bean creation function like so would suffice:
// The XStream should be configured in such a way that a security solution is provided
@Bean
public Serializer serializer(XStream xStream) {
    return XStreamSerializer.builder().xStream(xStream).build();
}
Hope this helps!

Couchbase modify metadata expiration using Spring

When using Spring Couchbase connector I can easily get version for optimistic locking by having this in my class:
public class MyClass {

    @Version
    private String version;

    // ... rest of class omitted ...
}
I'm now trying to find a similar way to get and be able to modify the meta data for expiration. I'm unable to find how to do this.
Can someone please give an example? Thanks!
With the Spring Data Couchbase library (up to the latest version, 3.0.8.RELEASE), document expiry can be defined by using @Document(expiry = 10) or @Document(expiryExpression = "${valid.document.expiry}") on the class. There is also an optional boolean attribute, touchOnRead, which can be added to @Document and resets the expiry timer whenever the document is read directly. Please note that the expiry of an existing document currently cannot be read or modified directly with this library. One way is to use the APIs below, exposed by Couchbase's own Java SDK (com.couchbase.client.java):
getAndTouch - allows you to retrieve a document while modifying its expiration time
touch - allows you to modify a document’s expiration time without otherwise accessing the document
You can find the method signatures of the above two here : http://docs.couchbase.com/sdk-api/couchbase-java-client-2.2.4/com/couchbase/client/java/Bucket.html
The above two APIs can be accessed via the spring data couchbase library as follows
couchbaseTemplate.getCouchbaseBucket().touch(...)
couchbaseTemplate.getCouchbaseBucket().getAndTouch(...)
The getCouchbaseBucket() method of the Spring library returns a reference to com.couchbase.client.java.Bucket, on which the touch and getAndTouch methods can be called.
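For example, something along these lines (the document id and the 600-second expiry are placeholders for the example):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.document.JsonDocument;
import org.springframework.data.couchbase.core.CouchbaseTemplate;

public JsonDocument refreshExpiry(CouchbaseTemplate couchbaseTemplate, String id) {
    Bucket bucket = couchbaseTemplate.getCouchbaseBucket();

    // Reset the expiration to 10 minutes without reading the document.
    bucket.touch(id, 600);

    // Or read the document and push its expiration out by 10 minutes in one call.
    return bucket.getAndTouch(id, 600);
}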

Spring Design By Contract: where to start?

I am trying to put a "Contract" on a method call. My web application is in Spring 3.
Is writing custom annotations the right way to go? If so, any pointers? (I didn't find anything in the Spring reference docs.)
Should I use tools like Modern Jass, JML, ...? Again, any pointers will be useful.
Thanks
Using Spring EL and Spring Security could get you most of the way. Spring Security defines the @PreAuthorize annotation, which fires before method invocation and lets you use Spring 3's new expression engine, such as:
#PreAuthorize("#customerId > 0")
public Customer getCustomer(int customerId) { .. }
or far more advanced rules, like the following, which ensures that the passed user does not have the ADMIN role:
@PreAuthorize("#user.role != T(com.company.Role).ADMIN")
public void saveUser(User user) { .. }
You can also provide default values for your contract with the @Value annotation:
public Customer getCustomer(@Value("#{434}") int customerId) { .. }
You can even reference system properties in your value expressions.
Setting up Spring Security for this purpose is not too hard, as you can just create a UserDetailsService that grants some default role to all users. Alternatively, you could write your own custom Spring aspect and have it use the SpelExpressionParser to check method values.
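To sketch that custom-aspect idea (the @Contract annotation below is made up for this example; only SpelExpressionParser and the Spring AOP support are standard):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;
import org.springframework.stereotype.Component;

// Hypothetical annotation carrying the precondition expression, e.g. @Contract("#customerId > 0").
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Contract {
    String value();
}

@Aspect
@Component
public class ContractAspect {

    private final ExpressionParser parser = new SpelExpressionParser();

    @Before("@annotation(contract)")
    public void checkPrecondition(JoinPoint joinPoint, Contract contract) {
        StandardEvaluationContext context = new StandardEvaluationContext();

        // Expose method arguments as SpEL variables (#customerId, #user, ...).
        // Parameter names are only available when compiling with debug info or -parameters.
        String[] names = ((MethodSignature) joinPoint.getSignature()).getParameterNames();
        Object[] args = joinPoint.getArgs();
        for (int i = 0; i < names.length; i++) {
            context.setVariable(names[i], args[i]);
        }

        Boolean satisfied = parser.parseExpression(contract.value()).getValue(context, Boolean.class);
        if (!Boolean.TRUE.equals(satisfied)) {
            throw new IllegalArgumentException("Contract violated: " + contract.value());
        }
    }
}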
If you don't mind writing some parts of your Java web application in Groovy (which is possible with Spring), I would suggest using GContracts.
GContracts is a Design by Contract (tm) library entirely written in Java - without any dependencies to other libraries - and has full support for class invariants, pre- and postconditions and inheritance of those assertions.
Contracts for Java which is based on Modern Jass is one way to write contracts.
http://code.google.com/p/cofoja/
As of the writing of this reply, it is pretty basic. Hopefully it will improve as we go on.
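For a feel of the style, here is a tiny example of Contracts for Java annotations (the Account class is invented for illustration; contract checking has to be enabled with the cofoja agent at build/run time):

import com.google.java.contract.Ensures;
import com.google.java.contract.Requires;

public class Account {

    private int balance;

    @Requires("amount > 0")                       // precondition
    @Ensures("balance == old(balance) + amount")  // postcondition
    public void deposit(int amount) {
        this.balance += amount;
    }
}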
I didn't find an ideal solution to this; interestingly, it is a planned feature for the Spring Framework (a patch was implemented against 2.0):
http://jira.springframework.org/browse/SPR-2698
The best thing I can suggest is to use JSR 303, which is for bean validation. AFAIK there are two implementations of it:
Agimatec Validations
Hibernate Validator
There's a guide here for integrating it with Spring; I haven't followed it through, but it looks OK:
http://blog.jteam.nl/2009/08/04/bean-validation-integrating-jsr-303-with-spring/
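With JSR 303 the "contract" is expressed as constraint annotations, roughly like this (a minimal sketch using the standard javax.validation API; the CustomerRequest class is invented for the example and an implementation such as Hibernate Validator must be on the classpath):

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;

public class CustomerRequest {

    @Min(1)    // precondition: a valid customer id is positive
    private int customerId;

    @NotNull   // precondition: a name must be supplied
    private String name;

    // Checks the declared constraints and fails fast if any are violated.
    public void validate() {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        Set<ConstraintViolation<CustomerRequest>> violations = validator.validate(this);
        if (!violations.isEmpty()) {
            throw new IllegalArgumentException(violations.iterator().next().getMessage());
        }
    }
}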
I personally recommend C4J for 2 reasons:
It has an Eclipse plugin, so you don't need to configure it manually.
The documentation is written in a clear, structured format so you can easily use it.
Here's the link to C4J.
