Env: Seam 2.2, ehcache-core 2.1.0
I tried injecting the CacheProvider into my session-scoped bean as follows:
@In CacheProvider cacheProvider;
WEB-INF\components.xml contains the following line to enable the cache provider:
<cache:eh-cache-provider/>
With this configuration, the injected cache provider is null.
Using the cache provider like this:
CacheProvider cacheProvider = CacheProvider.instance();
logs the following warning:
15:29:27,586 WARN [CacheManager] Creating a new instance of CacheManager using the diskStorePath "C:\DOCUME~1\user5\LOCALS~1\Temp\" which is already used by an existing CacheManager.
The source of the configuration was net.sf.ehcache.config.generator.ConfigurationSource$DefaultConfigurationSource#15ed0f9.
The diskStore path for this CacheManager will be set to C:\DOCUME~1\user5\LOCALS~1\Temp\\ehcache_auto_created_1276682367586.
To avoid this warning consider using the CacheManager factory methods to create a singleton CacheManager or specifying a separate ehcache configuration (ehcache.xml) for each CacheManager instance.
What am I missing here?
Keep in mind that net.sf.ehcache.Cache needs to be on the classpath if you want to use EhCacheProvider (I am not sure, but I think ehcache-core.jar contains this class). Here is its declaration:
@Name("org.jboss.seam.cache.cacheProvider")
@Scope(APPLICATION)
@BypassInterceptors
@Install(value = false, precedence = BUILT_IN, classDependencies = "net.sf.ehcache.Cache")
@AutoCreate
public class EhCacheProvider extends CacheProvider<CacheManager> {
Notice the classDependencies attribute. Its documentation is clear:
Indicates that the component should not be installed unless the given class definitions are available on the classpath.
So if your classpath contains net.sf.ehcache.Cache, you do not need to declare
<cache:eh-cache-provider/>
And as it is application-scoped, you can also retrieve it, besides @In-jection, by using
ApplicationContext.getContext().get("cacheProvider");
UPDATE
First of all
remove the <cache:eh-cache-provider/> declaration. I explained why above.
Second of all
Although I am fairly sure the CacheProvider cannot be null (the @In annotation's required attribute defaults to true, so a null injection would fail), make sure inside your business method that your CacheProvider is not null:
assert cacheProvider != null;
Third of all
I think you do not need to call the CacheProvider.instance() method. Its default scope is application, so why would you want to retrieve another CacheProvider? It does not make sense.
Fourth of all
It is not an exception. It is just a warning message, logged because you are trying to use more than one CacheManager where both use the same disk store path.
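For completeness, here is a minimal sketch of the injection approach, modeled on the Seam caching docs; the component name, cache region, and key are made up for illustration:
import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.In;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;
import org.jboss.seam.cache.CacheProvider;

@Name("greetingService") // hypothetical component name
@Scope(ScopeType.SESSION)
public class GreetingService {

    // Injected by Seam; resolves to the application-scoped EhCacheProvider
    // as long as net.sf.ehcache.Cache is on the classpath.
    @In
    private CacheProvider cacheProvider;

    public String greet(String user) {
        // CacheProvider's get/put take a cache region name and a key
        String greeting = (String) cacheProvider.get("greetings", user);
        if (greeting == null) {
            greeting = "Hello, " + user;
            cacheProvider.put("greetings", user, greeting);
        }
        return greeting;
    }
}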
Related
Currently I am evicting the cache with the @CacheEvict annotation from a cron job and then reloading it with new calls, but I am not happy with this approach as I have multiple nodes.
My requirements are:
Reload the cache with new data after a certain time limit.
It should work across multiple nodes.
Looking for an elegant design.
Did you try using a cache manager?
https://www.baeldung.com/spring-multiple-cache-managers
The Spring @CacheEvict annotation is used to evict a cache, and it is applied at method level. The @Cacheable annotation puts a value into the cache; @CacheEvict, on the contrary, removes it. On one method we can use @Cacheable to cache a result, and on another we can use @CacheEvict to evict the cache. @CacheEvict was introduced in Spring 3.1 and has the following attributes:
String[] cacheNames: the cache names to evict.
String[] value: alias for cacheNames.
String key: SpEL expression for computing the key dynamically.
String keyGenerator: the bean name of the custom KeyGenerator to use.
String cacheManager: the bean name of the custom CacheManager; used to create a default CacheResolver if none is set already.
String cacheResolver: the bean name of the custom CacheResolver to use.
String condition: SpEL expression that makes the cache eviction conditional.
boolean allEntries: if true, all entries inside the cache are removed.
boolean beforeInvocation: if true, the eviction occurs before the method is invoked.
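For the "reload after a certain time limit" requirement, a common per-node pattern is to pair @Cacheable with a scheduled @CacheEvict. A minimal sketch, assuming a cache named "products", an illustrative interval, and @EnableCaching plus @EnableScheduling on a configuration class:
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    // Results are cached under the "products" cache (name is illustrative).
    @Cacheable(cacheNames = "products")
    public Product findProduct(long id) {
        return loadFromDatabase(id); // expensive call, cached after the first hit
    }

    // Clear the whole cache every 10 minutes; subsequent calls repopulate it.
    // The scheduled call goes through the Spring proxy, so the eviction advice
    // applies. Note: this only evicts the local node's cache.
    @Scheduled(fixedRate = 600000)
    @CacheEvict(cacheNames = "products", allEntries = true)
    public void evictProducts() {
    }

    private Product loadFromDatabase(long id) {
        return new Product(); // placeholder for the real lookup
    }

    static class Product {
    }
}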
I think you risk never being satisfied by a solution crafted inside your application code, because this is an architectural problem of the application's deployment.
If you need to control the behavior of a variable set of nodes, you would do better to design a small independent tool (e.g. in the form of a micro-service) that receives "meta" requests and does the right thing: calling the @CacheEvict and then the @Cacheable entry points of all nodes in the right order. The list of nodes and the list of entry points to call could easily be defined in parameter files or in a datasource.
You can also have your pods listen to a stream of events or an AMQP broadcast.
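To illustrate the broadcast idea, each node could run a listener that clears its local cache when an eviction message arrives. A sketch using Spring AMQP; the queue name and the String payload convention are assumptions:
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

@Component
public class CacheEvictionListener {

    private final CacheManager cacheManager;

    public CacheEvictionListener(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    // Each node binds its own queue to a fanout exchange, so every instance
    // receives the broadcast and clears its local cache.
    @RabbitListener(queues = "cache-eviction") // queue name is illustrative
    public void onEvictionMessage(String cacheName) {
        Cache cache = cacheManager.getCache(cacheName);
        if (cache != null) {
            cache.clear();
        }
    }
}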
This is the code that I have:
@Component
@Configuration
@PropertySource("application.properties")
public class Program {

    @Value("${app.title}")
    private String appTitle;

    public Program() {
        System.out.println(appTitle);
    }
}
The application.properties file contains:
app.title=The Program
The output is null instead of The Program.
So, what am I missing? I have tried several examples; none worked.
Since appTitle is an autowired field, it is not set until after the object is constructed. This is why the value is still null in your example. The bean construction process in this scenario is as follows:
The Program constructor is called, creating a new Program instance.
The appTitle field is set on the newly constructed bean to the value of ${app.title}.
The ideal fix for this depends on your goals. If you truly need the value within the constructor, you can pass it in as an autowired constructor parameter. The value will then be available within the constructor:
@Component
@Configuration
@PropertySource("application.properties")
public class Program {

    public Program(@Value("${app.title}") String appTitle) {
        System.out.println(appTitle);
    }
}
If you don't need it in the constructor itself, but need it for the proper initialization of the bean, you could alternatively use the @javax.annotation.PostConstruct annotation to make use of it after the object's construction but before it is made available for use elsewhere:
@Component
@Configuration
@PropertySource("application.properties")
public class Program {

    @Value("${app.title}")
    private String appTitle;

    @PostConstruct
    public void printAppTitle() {
        System.out.println(appTitle);
    }
}
Finally, if you don't need the value at construction time, but need it during the life of the bean, what you have will work; it just won't be available within the body of the constructor itself:
@Component
@Configuration
@PropertySource("application.properties")
public class Program {

    @Value("${app.title}")
    private String appTitle;
}
Nothing wrong, just don't do it in a constructor...
Other answers on this question are written assuming the goal is creating a Spring-managed bean that uses the given property in its creation. However, based on your comments in another answer, it looks like the question you want answered is how to access an externalized property (one provided by @Value) within a no-argument constructor. This is based on your expectation that a Java inversion of control (IoC) container such as Spring should allow accessing externalized properties (and presumably other dependencies) within a no-argument constructor. That being the case, this answer will address the specific question of accessing the property within a no-argument constructor.
While there are certainly ways this goal could be achieved, none of them would be idiomatic usage of the Spring framework. As you discovered, autowired fields (i.e. fields populated by the container after construction, akin to setter injection) cannot be accessed within the constructor.
There are two parts to explaining why this is. First, why does it work the way it does, programmatically? Second, why was it designed the way it was?
The setter-based dependency injection section of the Spring docs addresses the first question:
Setter-based DI is accomplished by the container calling setter methods on your beans after invoking a no-argument constructor or a no-argument static factory method to instantiate your bean.
In this case, it means that first the object is created using the no-argument constructor. Second, once the object is constructed, the appTitle is initialized on the constructed bean. Since the field isn't initialized until after the object is constructed, it will have its default value of null within the constructor.
The second question is why Spring is designed this way, rather than somehow having access to the property within the constructor. The constructor-based or setter-based DI? sidebar within the Spring documentation makes it clear that constructor arguments are in fact the idiomatic approach when dealing with mandatory dependencies in general.
Since you can mix constructor-based and setter-based DI, it is a good rule of thumb to use constructors for mandatory dependencies and setter methods or configuration methods for optional dependencies. [...]
The Spring team generally advocates constructor injection, as it lets you implement application components as immutable objects and ensures that required dependencies are not null. Furthermore, constructor-injected components are always returned to the client (calling) code in a fully initialized state. [...]
Setter injection should primarily only be used for optional dependencies that can be assigned reasonable default values within the class. [...]
A property needed to construct the object certainly would be categorized as a mandatory dependency. Therefore, idiomatic Spring usage would be to pass in this required value in the constructor.
So in summary, trying to access an application property within a no-argument constructor is not supported by the Spring framework, and in fact runs contrary to the recommended use of the framework.
I've faced an issue that on rare occasions (it might take dozens of restarts) Spring doesn't initialize all properties correctly.
I define a bean of type CbKafkaConsumerConfig (my custom class) and check its state in a thread created by a method marked @EventListener(ApplicationReadyEvent.class), so I expect it to be completely initialized by that point. However, this is what I see:
Values that I expected to be filled are left with placeholders.
Here's how they are defined in the application.properties file. (And I've checked the spelling: it's correct, otherwise it would fail every time, not just occasionally.)
config-bean-prefix.msg-topics=${cb.kafka.tc-topic}
config-bean-prefix.unexpected-error-topic=${cb.kafka.unexpected-errors-topic}
These properties are defined in Vault, and I expected them to be fetched and set by Spring Cloud Vault. Vault is indeed present as a property source AND these properties are populated there.
At the same time, the context contains other beans of the same type, CbKafkaConsumerConfig, that refer to these properties, and for them the placeholders resolved just fine.
Here's how the bean is defined:
@Bean({"myBean"})
@ConfigurationProperties(prefix = "config-bean-prefix")
public CbKafkaConsumerConfig myBeanConsumer() {
    return new CbKafkaConsumerConfig();
}
And the bean itself:
@Data
public class CbKafkaConsumerConfig extends CbKafkaBaseConfig {

    @NotNull
    @Size(min = 1)
    private Collection<String> msgTopics;

    @NotNull
    private String unExpectedErrorTopic;
}
We're using Spring Boot 2.2.x, but the issue is also present on Spring Boot 2.1.x. It is not specific to this type of bean; others may fail as well despite being correctly set in Vault. What could be the reason for such unpredictable behavior, and what should I look into?
It turns out that by default Spring Cloud Vault does not simply fetch properties on startup; it also refreshes them periodically. While refreshing, there is a short time window when the old properties have already been removed from the property source in the context but the new ones have not yet been filled in, and this can even happen during context initialization (super questionable behavior in my opinion), corrupting some beans.
If you don't want properties to be updated at runtime, just set spring.cloud.vault.config.lifecycle.enabled to false.
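That is, assuming a standard application.properties:
spring.cloud.vault.config.lifecycle.enabled=false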
What is the difference between net.sf.ehcache and org.ehcache?
The current version of net.sf.ehcache is 2.10.5, whereas for org.ehcache it is 3.5.2.
Spring uses net.sf.ehcache's CacheManager, and org.ehcache's CacheManager is not compatible with it.
Is there any specific reason for this? Please explain.
As you can verify on the page http://www.ehcache.org/downloads/, Ehcache 3 uses the package prefix org.ehcache and Ehcache 2 uses the package prefix net.sf.ehcache. That's it.
They differ on many levels. In Ehcache 3.x, Element is gone; you put the key and value directly into the Cache, so you can provide types when you create a cache:
Cache<Long, String> myCache = cacheManager.getCache("myCache", Long.class, String.class);
Consequently, when retrieving a value you avoid the hassle of getObjectValue(); you treat the Cache like a ConcurrentMap. get simply returns null when the key doesn't exist, so you no longer risk a NullPointerException from unwrapping a missing Element and don't need to guard with cache.get(cacheKey) != null:
cache.get(cacheKey);
The way to instantiate a CacheManager has also changed. There is no getInstance(), so it is not a singleton anymore. Instead you get a builder, which is much nicer, especially since you can supply configuration parameters inline:
CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
.withCache("preConfigured",
CacheConfigurationBuilder.newCacheConfigurationBuilder(Long.class, String.class,
ResourcePoolsBuilder.heap(100))
.build())
.build(true);
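For completeness, a short usage sketch continuing from the builder above:
// Retrieve the pre-configured cache with its key/value types...
Cache<Long, String> preConfigured =
        cacheManager.getCache("preConfigured", Long.class, String.class);

// ...and use it like a typed map: no Element wrapper involved.
preConfigured.put(1L, "one");
String value = preConfigured.get(1L); // "one", or null if the key is absent

cacheManager.close(); // release resources when done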
We are migrating from Gemfire 8.2.7 to 9.2.1.
As part of Gemfire startup, we leverage SpringContextBootstrappingInitializer to initialize the Spring beans, which @Autowire the Cache.
The same code, migrated to Gemfire 9.2.1 (along with the rest of the stack), fails on server startup with the error below.
Gemfire 8.2.7 --> Gemfire 9.2.1
Spring-data-Gemfire 1.8.4 --> 2.0.2
Spring-Boot 1.4.7 --> 2.0.0.M7
Spring --> 5.0.2
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.apache.geode.cache.Cache' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
Any pointers / changes required for GemfireConfig? Below is our JavaConfig.
@Bean
public CacheFactoryBean gemfireCache() {
    return new CacheFactoryBean();
}
It looks like the component scan is kicking in prior to the configuration processor. Any idea how to control this behavior? This was last tested to work in Spring Boot 1.4.6 (Spring 4.3.8) and gets resolved with a @DependsOn annotation, but I wanted to understand whether there are any fundamental changes to the ordering of bean initialization in the newer Spring version.
@Configuration
@EnableAutoConfiguration(exclude = { HibernateJpaAutoConfiguration.class, BatchAutoConfiguration.class })
@Import(value = { GemfireServerConfig.class, JpaConfiguration.class, JpaConfigurableProperties.class })
@ComponentScan(basePackages = "com.test.gemfire",
    excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class))
To begin, let me give you some tips since there are 3 issues with your problem statement above...
1) First, you have not made it clear why or how you are using the o.s.d.g.support.SpringContextBootstrappingInitializer (docs here).
I can only assume it is because you are launching your GemFire servers with Gfsh
using the following command...
gfsh> start server --name=MyServer --cache-xml-file=/path/to/cache.xml ...
Where your cache.xml is defined similarly to this. After all, this was the original intent for using the SpringContextBootstrappingInitializer.
If this is the case, why not use the Gfsh start server command's --spring-xml-location option instead? For example:
gfsh> start server --name=MyServer --spring-xml-location=/by/default/a/classpath/to/applicationContext.xml --classpath=/path/to/spring-data-gemfire-2.0.2.RELEASE.jar:...
By doing so, you no longer need to provide cache.xml just to declare the SpringContextBootstrappingInitializer in order to bootstrap a Spring container inside the GemFire JVM process. You can simply use the --spring-xml-location option and put SDG on the server's classpath when starting the server.
2) Second, it is not apparent what type of application component/bean you are injecting a GemFire Cache reference into (e.g. a Region or another application component class, like a DAO, etc.). Providing a snippet of code showing how you injected the Cache reference, i.e. the injection point using the @Autowired annotation, would have been helpful. For example:
@Service
class MyService {

    @Autowired
    private Cache gemfireCache;
    ...
}
3) #2 would have been more apparent if you included the full stack trace rather than just the NoSuchBeanDefinitionException message.
Despite the issues with your problem statement, I can infer the following:
Clearly, you are using "classpath component scanning" (with the @ComponentScan annotation) and are auto-wiring "by type"; this may actually be key, and I will come back to it below.
You are using Spring's @Autowired annotation on a bean class field (field injection) or property (setter injection), maybe even a constructor.
The type of this field/property (or constructor parameter) is definitely org.apache.geode.cache.Cache.
Moving on...
In general, Spring will follow dependency order first and foremost. That is, if A depends on B, then B must be created before and destroyed after A. Typically, Spring will and can honor this without incident.
Beyond "dependency order" bean creation and satisfying dependencies between beans (including with the #DependsOn annotation), the order of bean creation is pretty loosely defined.
There are several factors that can influence it, such as "registration order" (i.e. the order in which bean definitions are declared, which is particularly true for beans defined in XML), "import order" (when using the @Import annotation on @Configuration classes), Java reflection (which includes @Bean definitions declared in @Configuration classes), etc. Configuration organization is definitely important and should not be taken lightly.
This is 1 reason why I am not a big proponent of "classpath component scanning". While it may be convenient, it is always better, IMO, to be more "explicit" in your configuration, and in the organization of your configuration, for the reasons outlined here in addition to other non-apparent limitations. At worst, you should definitely limit the scope of the scan.
Ironically, you excluded/filtered the 1 thing that could actually help your organizational concerns... components of type @Configuration:
... excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class)
NOTE: given the exclusion, are you certain you did not exclude the 1 @Configuration class containing your CacheFactoryBean definition? I suppose not, since you say this worked after adding the @DependsOn annotation.
Clearly there is a dependency defined between some application component of yours (??) and a bean of type o.a.g.cache.Cache (using @Autowired), yet Spring is failing to resolve it.
My thinking is that Spring cannot resolve the Cache dependency because either 1) the GemFire Cache bean has not been created yet and 2) Spring cannot find an appropriate bean definition of the desired type (i.e. o.a.g.cache.Cache) in your configuration that would resolve the dependency and force the GemFire Cache to be created first, or 3) the GemFire Cache bean has been created first but Spring is unable to resolve the type as o.a.g.cache.Cache.
I have encountered both scenarios before and it is not exactly clear to me when each scenario happens because I simply have not traced this through yet. I have simply corrected it and moved on. I have noticed that it is version related though.
There are several ways to solve this problem.
If the problem is the latter, 3), then simply declaring your dependency as type o.a.g.cache.GemFireCache should resolve the problem. So, for example:
@Repository
class MyDataAccessObject {

    @Autowired
    private GemFireCache gemfireCache;
    ...
}
The reason for this is because the o.s.d.g.CacheFactoryBean class's getObjectType() method returns a Class type generically extending o.a.g.cache.GemFireCache. This was by design since o.s.d.g.client.ClientCacheFactoryBean extends o.s.d.g.CacheFactoryBean, though I probably would not have done it that way if I had created these classes. However, it is consistent with the fact that the actual cache type in GemFire is o.a.g.internal.cache.GemFireCacheImpl which indirectly implements both the o.a.g.cache.Cache interface as well as the o.a.g.cache.client.ClientCache interface.
If your problem is the former (1 + 2, which is a bit trickier), then I would suggest you employ a smarter organization of your configuration, separated by concern. For example, you can encapsulate your GemFire configuration with:
@Configuration
class GemFireConfiguration {

    // define GemFire components (e.g. CacheFactoryBean) here

}
Then, your application components, where some are dependent on GemFire components, can be defined with:
@Configuration
@Import(GemFireConfiguration.class)
class ApplicationConfiguration {

    // define application beans, including beans dependent on GemFire components

}
By importing the GemFireConfiguration you are ensuring the GemFire components/beans are created (instantiated, configured and initialized) first.
You can even employ more targeted, limited "classpath component scanning" at the ApplicationConfiguration class-level in cases where you have a large number of application components (services, DAO, etc).
Then, you can have your main, Spring Boot application class drive all this:
@Configuration
@Import(ApplicationConfiguration.class)
class MySpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(MySpringBootApplication.class, args);
    }
}
The point is, you can be as granular as you choose. I like to encapsulate configuration by concern and clearly organize the configuration (using imports) to reflect the order in which I want my components created (constructed, configured and initialized).
Honestly, I basically organize my configuration in the order of dependencies. If my application ultimately depends on a data store and cannot function without it, then it makes sense to ensure that it is initialized first; otherwise, what is the point of starting the application?
Finally, you can always rely on the @DependsOn annotation, as you have appropriately done, to ensure that Spring creates the depended-on component before the component that expects it.
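For illustration, a minimal sketch of that last option; the bean name matches the gemfireCache @Bean method shown in the question:
@Service
@DependsOn("gemfireCache") // force the cache bean to be created first
class MyService {

    @Autowired
    private GemFireCache gemfireCache;

}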
Based on the fact that the @DependsOn annotation solved your problem, I would say this is an organizational problem that falls under the 1) / 2) category outlined above.
I am going to dig into this a bit deeper and respond to my answer in comments with what I find.
Hope this helps!
-John