Imagine you're creating a Spring library which provides a service component for some remote service. The service component wants to cache response data internally, for which Spring caching is a very good fit. Also imagine the cache needs to be slightly more advanced than any of the defaults (timeouts, maximum sizes, etc.), so the library provides a cache manager to create it. However, you don't want the third-party cache manager to suddenly be responsible for all caching used in the project where the library is included (the project might have its own caches).
The behaviour I am observing is that if the project configures caching using a simple application.properties (with, let's say, ehcache; see example below), the cache manager provided by the component gets called to create all caches, no matter how I structure the code. Is this happening because the project hasn't provided any cache manager of its own?
Is it not possible to have caching like this in a library-provided service without the project being involved? It's very important for the use case that the library can provide the cache without interfering with project caching.
Sample service cache configuration:
import java.util.logging.Logger;

import com.google.common.cache.CacheBuilder;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.guava.GuavaCache;
import org.springframework.cache.guava.GuavaCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SpringAceClientCacheConfiguration {

    @Bean
    public CacheManager serviceCacheManager() {
        return new GuavaCacheManager("service-data") {
            @Override
            public Cache getCache(String name) {
                Logger.getLogger(getClass().getName()).info("Creating new cache for " + name + "...");
                return new GuavaCache(name, CacheBuilder.newBuilder().build());
            }
        };
    }
}
Sample project application.properties:
spring.cache.jcache.config=ehcache3.xml
Sample project ehcache3.xml:
<config xmlns="http://www.ehcache.org/v3"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:jsr107="http://www.ehcache.org/v3/jsr107"
        xsi:schemaLocation="http://www.ehcache.org/v3 http://www.ehcache.org/schema/ehcache-core-3.0.xsd
                            http://www.ehcache.org/v3/jsr107 http://www.ehcache.org/schema/ehcache-107-ext-3.0.xsd">

    <cache alias="content">
        <heap unit="entries">4096</heap>
        <jsr107:mbeans enable-statistics="true"/>
    </cache>

</config>
I see the logging call for both the 'service-data' and the 'content' caches. The service cache configuration only cares about its own service data cache. Should I not be able to provide a cache manager just for this cache, without the project having to declare a separate one with @Primary (which I believe might work, but I haven't tried it yet)?
Thanks for any help!
There is no immediate notion of a "qualified" CacheManager, so when you use the annotation model, you are expected to provide one CacheManager bean that manages your complete cache infrastructure.
However, the cache abstraction has a CacheResolver SPI, so your third-party lib could require one:
@Cacheable(cacheResolver = "requiredCacheResolver")
public Foo someMethod(String id) { ... }
Then the requiredCacheResolver bean can get the cache information from whatever CacheManager you like. The third-party lib could provide an implementation that takes a CacheManager or individual caches as a parameter, as in the sketch below.
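A minimal sketch of such a resolver, assuming the library exposes its own serviceCacheManager bean (the class name and cache name are illustrative, not part of any Spring API):

import java.util.Collection;
import java.util.Collections;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.cache.interceptor.CacheResolver;

// Resolves caches against the library's own CacheManager, leaving the
// host application's CacheManager untouched.
public class RequiredCacheResolver implements CacheResolver {

    private final CacheManager serviceCacheManager;

    public RequiredCacheResolver(CacheManager serviceCacheManager) {
        this.serviceCacheManager = serviceCacheManager;
    }

    @Override
    public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> context) {
        // Always resolve the library's cache, regardless of which
        // CacheManager the host application has configured.
        return Collections.singleton(serviceCacheManager.getCache("service-data"));
    }
}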
I am not sure that I would recommend that, however. If you are using caching in a third-party library, you should define in your documentation the caches that you require (name and semantics, so that expiration can be configured accordingly). Then each user of your library should configure those caches in their infrastructure. In the end you'll have to do that anyway, and hiding it from the user does not seem like a good idea.
Related
I am using RedisTemplate for caching purposes in my Spring Boot service. Now I want to check cache hits/misses through the actuator/prometheus endpoint, but I cannot see any hit/miss data for the cache.
The code I have written is something like below
import com.fasterxml.jackson.databind.ObjectMapper;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;

@EnableCaching
@Configuration
public class CachingConfiguration {

    @Bean
    public RedisTemplate<String, SomeData> redisTemplate(LettuceConnectionFactory connectionFactory, ObjectMapper objectMapper) {
        RedisTemplate<String, SomeData> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        var valueSerializer = new Jackson2JsonRedisSerializer<SomeData>(SomeData.class);
        valueSerializer.setObjectMapper(objectMapper);
        template.setValueSerializer(valueSerializer);
        return template;
    }
}
Now I am doing the following to get from and save into the cache.
To get:
redisTemplate.opsForValue().get(key);
And to save:
redisTemplate.opsForValue().set(key, obj, some_time_limit);
My cache is working properly; I am able to save into the cache and get proper data back.
But I don't see cache hit/miss related data inside actuator/prometheus.
In my application.yml file I have added the below:
cache:
  redis:
    enable-statistics: 'true'
I would assume that for Spring Boot cache monitoring (including hits/misses) to apply, you would need to rely on auto-configuration. In your case you are creating the RedisTemplate yourself, so enable-statistics is probably not actually applied.
Can you remove the RedisTemplate creation and use the @Cacheable annotation abstraction? That way any supported cache library will work out of the box, without you having to create a @Bean and configure it manually.
Otherwise, if you generally want to enable statistics on a cache manager manually, you will need to call RedisCacheManager.RedisCacheManagerBuilder's enableStatistics():
https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/cache/RedisCacheManager.RedisCacheManagerBuilder.html
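A minimal sketch of that builder call, assuming a RedisConnectionFactory bean is already available (the bean method name is illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    return RedisCacheManager.builder(connectionFactory)
            .enableStatistics() // collect hit/miss statistics per RedisCache
            .build();
}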
For reference:
"Auto-configuration enables the instrumentation of all available Cache instances on startup, with metrics prefixed with cache. Cache instrumentation is standardized for a basic set of metrics. Additional, cache-specific metrics are also available. Metrics are tagged by the name of the cache and by the name of the CacheManager, which is derived from the bean name. Only caches that are configured on startup are bound to the registry. For caches not defined in the cache's configuration, such as caches created on the fly or programmatically after the startup phase, an explicit registration is required. A CacheMetricsRegistrar bean is made available to make that process easier."
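For a cache created after startup, explicit registration might look like this sketch, assuming the auto-configured CacheMetricsRegistrar bean is injected where needed (the method name is illustrative):

import org.springframework.boot.actuate.metrics.cache.CacheMetricsRegistrar;
import org.springframework.cache.Cache;

// Binds a late-created cache to the meter registry so its hit/miss
// statistics show up under the "cache." metrics.
public void registerLateCache(CacheMetricsRegistrar registrar, Cache cache) {
    registrar.bindCacheToRegistry(cache);
}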
I had exactly the same question and spent a good number of hours trying to figure out how to enable cache metrics for my manually created RedisTemplate instance.
What I eventually realised is that it is only the RedisCache class that collects and exposes CacheStatistics through its getStatistics() method. As far as I can see, there is nothing like that for RedisTemplate, which means you either need to switch to using RedisCache through RedisCacheManager and the @Cacheable annotation, or implement your own custom metrics collection.
My retail application has various contexts, like receive, transfer, etc. The requests to these contexts are handled by RESTful microservices developed using Spring Boot. The persistence layer is Cassandra, shared by all services, since we couldn't do vertical scaling for the microservices at the DB level because the services are conceptually tightly coupled.
We want vertical scaling at the GemFire end by creating different Regions for different contexts.
For example, a BOX table in Cassandra will be updated by Region Box-Receive (receive context) and Region Box-Transfer (transfer context) via a CacheWriter.
Our problem is: how do we maintain data sync between these two Regions?
Please also suggest any other approach for separation at the GemFire end.
GemFire version:
<dependency>
    <groupId>com.gemstone.gemfire</groupId>
    <artifactId>gemfire</artifactId>
    <version>8.2.6</version>
</dependency>
One alternative approach, since you are using Spring Boot would be to do the following:
First, annotate your @SpringBootApplication class with @EnableGemfireCacheTransactions...
Example:
@SpringBootApplication
@EnableGemfireCacheTransactions
@EnableGemfireRepositories
class YourSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(YourSpringBootApplication.class, args);
    }
    ...
}
The @EnableGemfireCacheTransactions annotation enables Spring Data GemFire's GemfireTransactionManager, which integrates GemFire's CacheTransactionManager with Spring's transaction management infrastructure, which then allows you to do this...
Now, just annotate your @Service application component's transactional service methods with core Spring's @Transactional annotation, like so...
@Service
class YourBoxReceiverTransferService {

    @Transactional
    public <return-type> update(ReceiveContext receiveContext, TransferContext transferContext) {
        ...
        receiveContextRepository.save(receiveContext);
        transferContextRepository.save(transferContext);
        ...
    }
}
As you can see here, I also used Spring Data (GemFire's) Repository infrastructure to manage the persistence operations (e.g. CRUD), which will be used appropriately in the transactionally scoped context set up by Spring; a sketch of the repository side follows.
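A minimal sketch of those repositories, assuming hypothetical ReceiveContext and TransferContext domain types mapped to the corresponding Regions (all names are illustrative; note the @Region annotation package moved in later Spring Data GemFire versions):

import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.Region;
import org.springframework.data.repository.CrudRepository;

// Hypothetical domain type stored in the Box-Receive Region.
@Region("Box-Receive")
class ReceiveContext {

    @Id
    String boxId;

    // box payload fields...
}

// Spring Data GemFire derives the Region-backed implementation at runtime;
// picked up by @EnableGemfireRepositories above.
interface ReceiveContextRepository extends CrudRepository<ReceiveContext, String> {
}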
There are two advantages of the Spring approach over using GemFire's public API, which unnecessarily couples you to GemFire (a definite code smell, particularly in a Spring context):
1. You don't have to place a bunch of boilerplate, crap code into your application components, where it does not belong!
2. Using Spring's transaction management infrastructure, it is extremely easy to change your transaction management strategy, such as switching from GemFire's local-only cache transactions to, say, global JTA-based transactions if the need ever arises (say, now I need to send a message over a JMS message queue after the GemFire Regions and the Cassandra BOX table are updated, to notify some downstream process that the receive/transfer context has been updated). With Spring's transaction management infrastructure, you do not need to change a single line of application code to change transaction management strategies (local to global, global to local, etc.).
Hope this helps!
-John
You can use transactions. Something like this should work:
CacheTransactionManager txMgr = cache.getCacheTransactionManager();
txMgr.begin();
boxReceive.put(key, receiveValue);
...
boxTransfer.put(key, transferValue);
txMgr.commit();
This will work provided you co-locate the Box-Receive and Box-Transfer Regions and use the same key, or use a PartitionResolver to co-locate the data, as in the sketch below.
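A minimal sketch of such a resolver, assuming composite keys of the form "boxId|suffix" (the key scheme and class name are illustrative, not from the question):

import com.gemstone.gemfire.cache.EntryOperation;
import com.gemstone.gemfire.cache.PartitionResolver;

// Routes entries from both Regions by box id, so Box-Receive and
// Box-Transfer entries for the same box land on the same member.
public class BoxPartitionResolver implements PartitionResolver<String, Object> {

    @Override
    public Object getRoutingObject(EntryOperation<String, Object> opDetails) {
        // Use the boxId portion of the key as the routing object.
        return opDetails.getKey().split("\\|")[0];
    }

    @Override
    public String getName() {
        return "BoxPartitionResolver";
    }

    @Override
    public void close() {
        // no resources to release
    }
}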
I have developed a new Connector. This connector requires two configuration parameters, let's say:
default_trip_timeout_milis
default_trip_threshold
The challenge is that I want to read ${myValue_a} and ${myValue_b} from an API, using an HTTP call, not from a file or inline values.
Since this is a connector, I need to make this API call somewhere before connectors are initialized.
FlowVars aren't an option, since they are initialized with the flows, and this is happening earlier in the Mule app life cycle.
My idea is to create a Spring bean implementing Initialisable, so it will be called before connectors are initialized, and there, using any Java-based lib (Spring RestTemplate?), call the API, get the values, and store them somewhere (context? Object Store?) so the connector can access them.
Make sense? Any other ideas?
Thanks!
Hmm, you could make a class that creates the properties at startup and, in that class, obtain the API properties via an HTTP request. Example below:
import java.util.Properties;
import org.springframework.beans.factory.FactoryBean;
import org.springframework.beans.factory.InitializingBean;

public class PropertyInit implements InitializingBean, FactoryBean<Properties> {

    private final Properties props = new Properties();

    @Override
    public void afterPropertiesSet() throws Exception {
        // Called once at startup: fetch the API values over HTTP here
        // (e.g. with Spring's RestTemplate) and store them in props.
    }

    @Override
    public Properties getObject() throws Exception {
        return props;
    }

    @Override
    public Class<?> getObjectType() {
        return Properties.class;
    }
}
Now you should be able to load this property class with:
<context:property-placeholder properties-ref="propertyInit"/>
Hope you like this idea. I used this approach in a previous project.
I want to give you a strong warning first on doing this. If you go down this path, you risk breaking your application in very strange ways: if any other components depend on this component, having components change dynamically at startup will break them, and you should think about whether there are other ways to achieve this behaviour instead of using properties.
That said, the way to do this would be to use a proxy pattern: a proxy for the component that is recreated whenever its properties change. So you would need to create a class which extends Circuit Breaker and encapsulates an instance of Circuit Breaker, recreating that instance whenever its properties change. These properties must not be used outside of the proxy class, as other components may read them at startup and then never refresh; keep in mind that anything which might directly or indirectly access these properties cannot do so in its initialisation phase, or your application will break. A sketch of the idea follows.
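A minimal sketch of that proxy, assuming a hypothetical CircuitBreaker type with the two parameters mentioned in the question (all names here are illustrative):

// Minimal stand-in for the hypothetical circuit-breaker component.
class CircuitBreaker {
    private final long tripTimeoutMillis;
    private final int tripThreshold;

    CircuitBreaker(long tripTimeoutMillis, int tripThreshold) {
        this.tripTimeoutMillis = tripTimeoutMillis;
        this.tripThreshold = tripThreshold;
    }

    boolean allowRequest() { return true; } // real logic lives in the actual component
}

// Callers hold the proxy, never the delegate, so the delegate can be
// swapped out when fresh property values arrive from the API.
public class CircuitBreakerProxy {

    private volatile CircuitBreaker delegate;

    public CircuitBreakerProxy(long tripTimeoutMillis, int tripThreshold) {
        this.delegate = new CircuitBreaker(tripTimeoutMillis, tripThreshold);
    }

    // Recreate the encapsulated instance whenever the properties change.
    public synchronized void refresh(long tripTimeoutMillis, int tripThreshold) {
        this.delegate = new CircuitBreaker(tripTimeoutMillis, tripThreshold);
    }

    // Delegate the calls the rest of the application uses.
    public boolean allowRequest() {
        return delegate.allowRequest();
    }
}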
It's worth taking a look at Spring Cloud Config, which allows you to have a properties server so that all your applications can hot-reload those properties at runtime when they change. I'm not sure whether you can take that path in Mule, if Spring Cloud is supported yet, but it's a nice thing to know exists.
I am working on a project that has a requirement for Pivotal GemFire.
I am unable to find a proper tutorial about how to configure GemFire with Spring Boot.
I have created a partitioned Region and I want to configure Locators as well, but I only need server-side configuration, as the client is handled by someone else.
I am totally new to Pivotal GemFire and really confused. I have tried creating a cache.xml, but then somehow a cache.out.xml gets created and there are many issues.
@Priyanka -
Best place to start is with the Guides on spring.io. Specifically, have a look at...
"Accessing Data with GemFire"
There is also...
"Cache Data with GemFire", and...
"Accessing GemFire Data with REST"
However, these guides focus mostly on "client-side" application concerns, "data access" (over REST), "caching", etc.
Still, you can use Spring Data GemFire (in a Spring Boot application even) to configure a GemFire Server. I have many examples of this. One in particular...
"Spring Boot GemFire Server Example"
This example demonstrates how to bootstrap a Spring Boot application as a GemFire Server (technically, a peer node in the cluster). Additionally, the GemFire properties are specified in Spring config and can use Spring's normal conventions (property placeholders, SpEL expressions) to configure these properties, like so...
https://github.com/jxblum/spring-boot-gemfire-server-example/blob/master/src/main/java/org/example/SpringBootGemFireServer.java#L59-L84
This particular configuration makes the GemFire Server a "GemFire Manager", possibly with an embedded "Locator" (indicated by the start-locator GemFire property, not to be confused with the "locators" GemFire property, which allows our node to join an "existing" cluster), as well as a GemFire CacheServer to serve GemFire cache clients (with a ClientCache).
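As a rough sketch of what that linked configuration does (the property values here are illustrative, not copied from the repo):

import java.util.Properties;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;

// Standard GemFire properties wired through Spring, so placeholders
// like ${gemfire.locator.port} can come from application.properties.
@Bean
Properties gemfireProperties(@Value("${gemfire.locator.port:10334}") int locatorPort) {
    Properties gemfireProperties = new Properties();
    gemfireProperties.setProperty("name", "SpringBootGemFireServer");
    gemfireProperties.setProperty("jmx-manager", "true");       // make this node a GemFire Manager
    gemfireProperties.setProperty("jmx-manager-start", "true"); // start the JMX Manager at boot
    gemfireProperties.setProperty("start-locator", String.format("localhost[%d]", locatorPort)); // embedded Locator
    return gemfireProperties;
}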
This example creates a "Factorials" Region, with a CacheLoader (definition here) to populate the "Factorials" Region on cache misses.
Since this example starts an embedded GemFire Manager in the Spring Boot GemFire Server application process, you can even connect to it using Gfsh, like so...
gfsh> connect --jmx-manager=localhost[1099]
Then you can run "gets" on the "Factorials" Region to see it compute factorials of the numeric keys you give it.
To see more advanced configuration, have a look at my other repos, in particular the Contacts Application RI (here).
Hope this helps!
-John
Well, I had the same problem. Let me share what worked for me; in this case I'm using Spring Boot and Pivotal GemFire as a cache client.
1. Install and run GemFire.
2. Read the 15-minute quick start guide.
3. Create a locator (let's call it locator1), a server (server1), and a region (region1); see the gfsh commands after this list.
4. Go to the folder where you started the 'Gee Fish' (gfsh), then go to the locator's folder and open the log file; in that file you can find the port your locator is using.
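Step 3 might look like this in gfsh (the region type is an assumption; choose whatever fits your data):
gfsh> start locator --name=locator1
gfsh> start server --name=server1
gfsh> create region --name=region1 --type=PARTITION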
Now let's see the Spring Boot side:
In your Application class with the main method, add the @EnableGemfireCaching annotation.
In the method (wherever it is) you want to cache, add the @Cacheable("region1") annotation.
Now let's create a configuration file for the caching:
// This is my working class
@Configuration
public class CacheConfiguration {

    @Bean
    ClientCacheFactoryBean gemfireCacheClient() {
        return new ClientCacheFactoryBean();
    }

    @Bean(name = GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME)
    PoolFactoryBean gemfirePool() {
        PoolFactoryBean gemfirePool = new PoolFactoryBean();
        gemfirePool.addLocators(Collections.singletonList(new ConnectionEndpoint("localhost", HERE_GOES_THE_PORT_NUMBER_FROM_STEP_4)));
        gemfirePool.setName(GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME);
        gemfirePool.setKeepAlive(false);
        gemfirePool.setPingInterval(TimeUnit.SECONDS.toMillis(5));
        gemfirePool.setRetryAttempts(1);
        gemfirePool.setSubscriptionEnabled(true);
        gemfirePool.setThreadLocalConnections(false);
        return gemfirePool;
    }

    @Bean
    ClientRegionFactoryBean<Long, Long> getRegion(ClientCache gemfireCache, Pool gemfirePool) {
        ClientRegionFactoryBean<Long, Long> region = new ClientRegionFactoryBean<>();
        region.setName("region1");
        region.setLookupEnabled(true);
        region.setCache(gemfireCache);
        region.setPool(gemfirePool);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
That's all! Also, do not forget to make the class being cached serializable (implements Serializable), that is, the class your cached method is returning.
I am currently deploying my custom controls as OSGi plugins and I wanted to do the same thing with my beans. I have tried putting them into the OSGi plugin and it works fine, but the only problem I have is the faces-config.
It seems it has to be called faces-config in the OSGi plugin to work, but that means I can't use beans in the NSF anymore, because it seems to ignore the local faces-config.
Is there a way to change the name of the faces-config in the OSGi plugin?
Something like FEATURE-faces-config.xml?
In the class in your plugin that extends AbstractXspLibrary, you can override "getFacesConfigFiles", which should return an array of strings representing paths within the plugin to additional files of any name to load as faces-config additions. For example:
@Override
public String[] getFacesConfigFiles() {
    return new String[] {
        "com/example/config/beans.xml"
    };
}
Then you can put the config file in that path within your Java source folder (or another folder that is included in build.properties) and it will be loaded in addition to your app's normal faces-config, beans and all.
The NSFs are running as separate, distinct Java applications. The OSGi plugin is running in the OSGi layer, above all those distinct Java applications, as a single code base. Consequently, the faces-config is only at that level.
It's possible to load them dynamically, by using an ImplicitObjectFactory, loaded from an XspContributor. That's what is done in OpenNTF Domino API for e.g. userScope (which is a bean stored in applicationScope of an NSF). See org.openntf.domino.xsp.helpers.OpenntfDominoImplicitObjectFactory, which is referenced in OpenntfDominoXspContributor, loaded via the extension point of type "com.ibm.xsp.library.Contributor".
A few caveats:
You have no control over what happens if you try to register your bean with a name the developer also uses for a different variable in that scope.
Unless you add code to check if the library is enabled, as we do, you'll be adding the bean to every database on the server.
You still need to add the library to the NSF. Unless you also provide a component that those databases will all use, there's no way you can programmatically add it, as far as I know.
It might be easier to skip the bean approach and just add an instance of the Java class in beforePageLoad, a page controller class, or however you're managing the backing of the relevant XPage (if viewScope) or application (if sessionScope / applicationScope).