Is spring-session compatible with hazelcast-wm?

I am using Hazelcast 3.8.4 in my web application to store some custom data in the Hazelcast cluster.
On top of that, I use the @EnableHazelcastHttpSession annotation from spring-session version 1.3.1, which makes Hazelcast the default HttpSession storage (and allows for HTTP session replication in the cluster).
I noticed that the whole shebang works by passing a "SESSION" cookie. By default, that cookie's "path" attribute is equal to the context path of the application.
So I tried to find a way to modify that "path" attribute. All of the Hazelcast resources sent me to the hazelcast-wm project, which allows for that path attribute customization. But the more I look at hazelcast-wm, the more I start to think that it's doing the same job as spring-session. Am I right? Will I need to drop spring-session and replace it with hazelcast-wm? Is there a way to modify the "path" attribute in spring-session?
Thanks in advance.

I think I found the answer: I can change the path attribute by manually declaring a DefaultCookieSerializer bean, which will then be picked up by the spring-session pipeline:

import org.springframework.session.web.http.CookieSerializer;
import org.springframework.session.web.http.DefaultCookieSerializer;

@Bean
public CookieSerializer cookieSerializer() {
    DefaultCookieSerializer serializer = new DefaultCookieSerializer();
    serializer.setCookiePath("/");
    return serializer;
}

Related

Spring Data Gemfire: TTL expiration annotation is not working; how to set TTL using annotations?

In Gfsh, I was able to do: create region --name=employee --type=REPLICATE --enable-statistics=true --entry-time-to-live-expiration=900.
We have a requirement to create the Region in Java using the @EnableEntityDefinedRegions annotation. When I use describe in Gfsh, the Regions show up, but the entry time-to-live (TTL) expiration is not being set by any of the ways below.
Any idea how to set TTL in Java?
Spring Boot 2.5.x and spring-gemfire-starter 1.2.13.RELEASE are used in the app.
@EnableStatistics
@EnableExpiration(policies = {
    @EnableExpiration.ExpirationPolicy(regionNames = "Employee", timeout = 60, action = ExpirationActionType.DESTROY)
})
@EnableEntityDefinedRegions
public class BaseApplication {
....
@Region("Employee")
public class Employee {
or
@EnableStatistics
@EnableExpiration
public class BaseApplication {
----
@Region("Employee")
@TimeToLiveExpiration(timeout = "60", action = "DESTROY")
@Expiration(timeout = "60", action = "DESTROY")
public class Employee {
....
or
Creating the Region via a bean definition is also not working; I get the error "operation is not supported on a client cache":
@EnableEntityDefinedRegions
// @PeerCacheApplication for a peer cache; the Region is not created in PCC GemFire
public class BaseApplication {
---
}
@Bean(name = "employee")
PartitionedRegionFactoryBean<String, Employee> getEmployee(
        final GemFireCache cache,
        RegionAttributes<String, Employee> peopleRegionAttributes) {
    PartitionedRegionFactoryBean<String, Employee> getEmployee = new PartitionedRegionFactoryBean<>();
    getEmployee.setCache(cache);
    getEmployee.setAttributes(peopleRegionAttributes);
    getEmployee.setClose(false);
    getEmployee.setName("Employee");
    getEmployee.setPersistent(false);
    getEmployee.setDataPolicy(DataPolicy.PARTITION);
    getEmployee.setStatisticsEnabled(true);
    getEmployee.setEntryTimeToLive(new ExpirationAttributes(60));
    return getEmployee;
}
@Bean
@SuppressWarnings("unchecked")
RegionAttributesFactoryBean employeeAttributes() {
    RegionAttributesFactoryBean employeeAttributes = new RegionAttributesFactoryBean();
    employeeAttributes.setKeyConstraint(String.class);
    employeeAttributes.setValueConstraint(Employee.class);
    return employeeAttributes;
}
First, Spring Boot for Apache Geode (SBDG) 1.2.x is already EOL because Spring Boot 2.2.x is EOL (see details on support). SBDG follows Spring Boot's support lifecycle and policies.
Second, SBDG 1.2.x is based on Spring Boot 2.2.x. See the Version Compatibility Matrix for further details. We will not support mismatched dependency versions. While mismatched dependency versions may work in certain cases (mileage varies depending on your use case), version combinations not explicitly stated in the Version Compatibility Matrix are nonetheless unsupported. Also see the documentation on this matter.
Now, regarding your problem with TTL Region entry expiration policies...
SBDG auto-configuration creates an Apache Geode ClientCache instance by default (see docs). You cannot create a PARTITION Region using a ClientCache instance.
If your Spring Boot application is intended to be a peer Cache instance in an Apache Geode cluster (server-side), then you must explicitly declare your intention by overriding SBDG's auto-configuration, like so:
@PeerCacheApplication
@SpringBootApplication
class MySpringBootApacheGeodePeerCacheApplication {
    // ...
}
TIP: See the documentation on creating peer Cache applications using SBDG.
Keep in mind that when you override SBDG's auto-configuration, you may implicitly become responsible for other aspects of Apache Geode's configuration, e.g. Security! Heed the warning.
On the other hand, if your intent is truly to enable your Spring Boot/SBDG application as a cache "client" (i.e. a ClientCache instance, the default), then TTL Region entry expiration policies do not make sense on client PROXY Regions, which is the default DataPolicy (EMPTY) for client Regions when using the Spring Data for Apache Geode (SDG) @EnableEntityDefinedRegions annotation (see Javadoc). This is because Apache Geode client PROXY Regions do not store any data locally; all data access operations are forwarded to the server/cluster.
Even if you alter the configuration to use client CACHING_PROXY Regions, the TTL Region expiration policies will only take effect locally. You must configure the corresponding server/cluster Regions separately (e.g. using Gfsh), as sketched below.
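As a hedged illustration (the class and package names are assumptions, not from the original question), switching the client Regions from the default PROXY to CACHING_PROXY with SDG might look like this:

import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;

@SpringBootApplication
@EnableEntityDefinedRegions(
    basePackageClasses = Employee.class,
    clientRegionShortcut = ClientRegionShortcut.CACHING_PROXY)
class ClientApplication {
    // Entries cached locally in the CACHING_PROXY Region honor local expiration
    // policies; the server-side Region still needs its own TTL configuration.
}

The matching server-side Region with TTL could then be created in Gfsh, e.g.: create region --name=Employee --type=PARTITION --enable-statistics=true --entry-time-to-live-expiration=60 --entry-time-to-live-expiration-action=DESTROY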
Also, even though you can push cluster configuration from the client using SDG's @EnableClusterConfiguration (doc, Javadoc), or alternatively and preferably, SBDG's @EnableClusterAware annotation (doc, Javadoc; which is meta-annotated with SDG's @EnableClusterConfiguration), this functionality only pushes Region and Index configuration to the cluster, not expiration policies.
See the SBDG documentation on expiration for further details. This doc also leads to SDG's documentation on expiration, and specifically Annotation-based expiration configuration.
I see that the SBDG docs are not really clear on the matter of expiration, so I have filed an issue ticket in SBDG to make this clearer.

Redis cache metrics with Prometheus(Spring boot)

I am using RedisTemplate for caching in my Spring Boot service. Now I want to check cache hits/misses through the actuator/prometheus endpoint, but I cannot see any hit/miss statistics for the cache.
The code I have written is something like this:
@EnableCaching
@Configuration
public class CachingConfiguration {

    @Bean
    public RedisTemplate<String, SomeData> redisTemplate(LettuceConnectionFactory connectionFactory, ObjectMapper objectMapper) {
        RedisTemplate<String, SomeData> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        var valueSerializer = new Jackson2JsonRedisSerializer<SomeData>(SomeData.class);
        valueSerializer.setObjectMapper(objectMapper);
        template.setValueSerializer(valueSerializer);
        return template;
    }
}
I get and save cache entries like below.
To get:
redisTemplate.opsForValue().get(key);
And to save:
redisTemplate.opsForValue().set(key, obj, some_time_limit);
My cache is working properly: I am able to save into the cache and I get the proper data back.
But I don't see any cache hit/miss data at actuator/prometheus.
In my application.yml file I have added the following:
cache:
  redis:
    enable-statistics: 'true'
I would assume that for Spring Boot cache monitoring (including hits/misses) to apply, you would need to rely on auto-configuration.
In your case you are creating the RedisTemplate yourself, so enable-statistics is probably not actually applied.
Can you remove the RedisTemplate creation and use the @Cacheable annotation abstraction instead? That way any supported cache library will work out of the box, without you having to create a @Bean and configure it manually.
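A minimal sketch of that approach, assuming Spring Boot's Redis cache auto-configuration is active; SomeDataService and loadFromDatabase are hypothetical names:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class SomeDataService {

    // Executed only on a cache miss; the result is then stored in the "someData" cache.
    @Cacheable(cacheNames = "someData", key = "#id")
    public SomeData findById(String id) {
        return loadFromDatabase(id); // hypothetical data-access helper
    }
}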
Otherwise, generally, if you want to enable statistics on a cache manager manually, you need to call enableStatistics() on the RedisCacheManager.RedisCacheManagerBuilder:
https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/cache/RedisCacheManager.RedisCacheManagerBuilder.html
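For example, a minimal sketch, assuming Spring Data Redis 2.4+ (where enableStatistics() is available on the builder):

import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    // The statistics collected here are what the actuator cache metrics expose.
    return RedisCacheManager.builder(connectionFactory)
            .enableStatistics()
            .build();
}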
For Reference:
Auto-configuration enables the instrumentation of all available Cache instances on startup, with metrics prefixed with cache. Cache instrumentation is standardized for a basic set of metrics. Additional, cache-specific metrics are also available.
Metrics are tagged by the name of the cache and by the name of the CacheManager, which is derived from the bean name.
Only caches that are configured on startup are bound to the registry. For caches not defined in the cache's configuration, such as caches created on the fly or programmatically after the startup phase, an explicit registration is required. A CacheMetricsRegistrar bean is made available to make that process easier.
I had exactly the same question and spent a good number of hours trying to figure out how to enable cache metrics for my manually created RedisTemplate instance.
What I eventually realised is that it's only the RedisCache class that collects and exposes CacheStatistics through its getStatistics() method. As far as I can see, there is nothing like that for RedisTemplate, which means you either need to switch to using RedisCache through RedisCacheManager and the @Cacheable annotation, or implement your own custom metrics collection.

Session cookie custom path

I have a Spring Boot application and want to deploy it to WildFly 12. What I'm trying to achieve is to set a custom path for the JSESSIONID cookie, but so far my efforts haven't produced any results.
I have tried to use this property in my application.properties file:
server.servlet.session.cookie.path=/
When I run the application with the embedded Tomcat, everything works fine; but when I deploy my app to WildFly, regardless of the value of that property, it always sets the cookie path to the "context-path" of the application.
I have also tried this property:
server.servlet.context-path=/
but no success so far!
There is also this tag inside the standalone.xml file:
<session-cookie http-only="true" secure="true"/>
but it seems that it has nothing to do with the cookie path, as it doesn't have any attribute for that.
The configuration you are doing is for Spring Boot's embedded server only.
The embedded server settings are listed in the application properties reference (check the section # EMBEDDED SERVER CONFIGURATION and the namespace server.servlet.session.cookie.*).
To modify cookie-related configuration on external servers, you have to create a CookieSerializer bean, which can be used to customize the cookie configuration, e.g.:
@Bean
public CookieSerializer cookieSerializer() {
    DefaultCookieSerializer serializer = new DefaultCookieSerializer();
    serializer.setCookieName("JSESSIONID");
    serializer.setCookiePath("/");
    serializer.setDomainNamePattern("^.+?\\.(\\w+\\.[a-z]+)$");
    return serializer;
}
You can refer to the Spring guide for more information.

Change the Spring Boot application properties load process programmatically to improve security

I have a Spring Boot microservice with database credentials defined in the application properties.
spring.datasource.url=<<url>>
spring.datasource.username=<<username>>
spring.datasource.password=<<password>>
We do not use the Spring data source to create the connection manually; only Spring creates the database connection, via JPA (org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration).
We only provide the application properties, and Spring creates the connections automatically for use with the database connection pool.
Our requirement is to enhance security by not keeping the DB properties in clear text. There are two possible methods:
Encrypt the database credentials
Use AWS Secrets Manager (then fetch the credentials when the application loads)
For option 1, jasypt can be used. Since we are only providing the properties and do not want to create the data source manually, the problem is how to make the decrypted values understood by the Spring framework. A working sample or method would be ideal.
Regarding option 2:
first we need to define a secretName;
use the secretName to get the database credentials from AWS Secrets Manager;
update the application properties programmatically so that the Spring framework understands them (I need to know this step).
I need to use either option 1 or option 2; I have mentioned the issues with each.
What you could do is use environment variables for your properties. You can use them like this:
spring.datasource.url=${SECRET_URL}
You could then retrieve these and start your Spring process using a ProcessBuilder (or set the variables any other way).
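A minimal sketch of that launcher approach; fetchSecretUrl() is a hypothetical helper standing in for whatever retrieves the value from your secrets store:

import java.io.IOException;

public class Launcher {

    public static void main(String[] args) throws IOException {
        // Start the Spring Boot jar with the secret injected as an environment variable.
        ProcessBuilder pb = new ProcessBuilder("java", "-jar", "app.jar");
        pb.environment().put("SECRET_URL", fetchSecretUrl()); // hypothetical helper
        pb.inheritIO().start();
    }

    private static String fetchSecretUrl() {
        // Placeholder: fetch the real value from a secrets store of your choice.
        return "jdbc:postgresql://localhost:5432/db";
    }
}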
I have found the solution to my problem.
We need to define an org.springframework.context.ApplicationListener in the spring.factories file. It should declare the required application context listener, like below:
org.springframework.context.ApplicationListener=com.sample.PropsLoader
The PropsLoader class looks like this:
public class PropsLoader implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        ConfigurableEnvironment environment = event.getEnvironment();
        String appEnv = environment.getProperty("application.env");
        // Set new properties based on the application environment:
        // call other methods that, depending on the environment, fetch the required values.
        Properties props = new Properties();
        props.put("new_property", "value");
        environment.getPropertySources().addFirst(new PropertiesPropertySource("props", props));
    }
}
The spring.factories file should be defined under the resources package, in the META-INF folder.
This will set up the application context with the new properties before any other beans are loaded.
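For option 2, the same listener could fetch the credentials from AWS Secrets Manager before adding them to the environment. A hedged sketch using the AWS SDK v2; the secret name and its JSON shape are assumptions:

import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

// Inside onApplicationEvent(...), before addFirst(...):
SecretsManagerClient client = SecretsManagerClient.create();
String secretJson = client.getSecretValue(
        GetSecretValueRequest.builder().secretId("my-db-secret").build()) // assumed secret name
    .secretString();
// Parse secretJson (e.g. with Jackson) and put the username/password
// into the Properties object that is added to the environment.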

Mule connector config needs dynamic attributes

I have developed a new connector. This connector requires two configuration parameters, let's say:
default_trip_timeout_milis
default_trip_threshold
The challenge is that I want to read ${myValue_a} and ${myValue_b} from an API, using an HTTP call, not from a file or inline values.
Since this is a connector, I need to make this API call somewhere before the connectors are initialized.
FlowVars aren't an option, since they are initialized with the flows, and this happens earlier in the Mule app lifecycle.
My idea is to create a Spring bean implementing Initialisable, so it will be called before the connectors are initialized, and there, using any Java-based lib (Spring RestTemplate?), call the API, get the values, and store them somewhere (context? objectStore?) so the connector can access them.
Does that make sense? Any other ideas?
Thanks!
Hmm, you could make a class that creates the properties at startup and, inside it, obtains the API properties via an HTTP request. Example below:
public class PropertyInit implements InitializingBean, FactoryBean<Properties> {

    private final Properties props = new Properties();

    @Override
    public void afterPropertiesSet() throws Exception {
        // Fetch the property values over HTTP here (e.g. with RestTemplate)
        // and populate props before any placeholders are resolved.
    }

    @Override
    public Properties getObject() throws Exception {
        return props;
    }

    @Override
    public Class<?> getObjectType() {
        return Properties.class;
    }
}
Now you should be able to load this property class with:
<context:property-placeholder properties-ref="propertyInit"/>
Hope you like this idea. I used this approach in a previous project.
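A hedged sketch of what afterPropertiesSet() could look like in practice; the config endpoint URL and its flat JSON response are made up for illustration:

@Override
public void afterPropertiesSet() throws Exception {
    RestTemplate rest = new RestTemplate();
    // Assumed endpoint returning a flat JSON object of property names to values.
    @SuppressWarnings("unchecked")
    Map<String, String> values =
            rest.getForObject("http://config-service/connector-config", Map.class);
    if (values != null) {
        values.forEach(props::setProperty);
    }
}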
I want to give you a strong warning before doing this. If you go down this path, you risk breaking your application in very strange ways: if any other components depend on this component, having components that change dynamically at startup will break them, so you should consider whether there are other ways to achieve this behaviour instead of using properties.
That said, the way to do this would be to use a proxy pattern: a proxy for the component that is recreated whenever its properties change. So you would need to create a class which extends Circuit Breaker and encapsulates an instance of Circuit Breaker that is recreated whenever its properties change. These properties must not be used outside of the proxy class, as other components may read them at startup and then never refresh; keep in mind that anything which might directly or indirectly access these properties cannot do so in its initialisation phase, or your application will break.
It's worth taking a look at Spring Cloud Config, which allows you to have a properties server so that all your applications can hot-reload those properties at runtime when they change. Not sure if you can take that path in Mule, if Spring Cloud is supported yet, but it's a nice thing to know exists.
