ehcache-clustered not working in OSGi when project is installed several times

I'm having trouble with clustered Ehcache in OSGi/AEM. With the first project build/installation it works fine, but with the second build/installation it stops working and generates a lot of errors. It looks like the Terracotta connection, the cache manager, or something else is not being closed properly.
Even after deleting the bundles it keeps trying to connect to Terracotta.
ok log, errors in log
I'm installing ehcache and ehcache-clustered as standalone bundles in OSGi. I have also tried embedding them into my bundle. Ehcache and ehcache-clustered are declared as dependencies; I also tried org.apache.servicemix.bundles.javax-cache-api (embedded, though I'm not sure it's needed).
The first time, all ehcache and ehcache-clustered services are active; the second time they are only satisfied.
Ehcache bundle, ehcache-clustered bundle, javax-cache-api bundle, my project bundle
pom.xml
I have tried the same code as a standalone Java app and it works perfectly fine (https://github.com/ehcache/ehcache3-samples/blob/master/clustered/src/main/java/org/ehcache/sample/ClusteredXML.java).
So I'm not sure what I have missed (dependencies, import packages, ...)?
ehcache config, terracotta config
@Activate
private void activate() {
    LOGGER.info("Creating clustered cache manager from XML");
    URL myUrl = ClusteredService.class.getResource("/com/myco/services/ehcache-clustered-local2.xml");
    Configuration xmlConfig = new XmlConfiguration(myUrl);
    try (CacheManager cacheManager = CacheManagerBuilder.newCacheManager(xmlConfig)) {
        cacheManager.init();
        org.ehcache.Cache<String, String> basicCache = cacheManager.getCache("basicCache4", String.class, String.class);
        LOGGER.info("1. Putting to cache");
        basicCache.put("1", "abc");
        LOGGER.info("1. Getting from cache");
        String value = basicCache.get("1");
        LOGGER.info("1. Retrieved '{}'", value);
        LOGGER.info("cache manager status2, " + cacheManager.getStatus().toString());
    }
}

You also have to create a @Deactivate method where you close the cache manager (cacheManager.close()).
I guess if you ran your code twice in a non-OSGi project you would also experience the same error.
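For illustration, a minimal sketch of what that could look like as a DS component, keeping the cache manager in a field so it can be closed on deactivation (the class name and XML path are taken from the question; the component annotations and everything else are assumptions):

import java.net.URL;

import org.ehcache.CacheManager;
import org.ehcache.config.Configuration;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.xml.XmlConfiguration;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;

@Component(immediate = true)
public class ClusteredService {

    // kept as a field so it can be closed when the bundle is stopped or reinstalled
    private CacheManager cacheManager;

    @Activate
    protected void activate() {
        URL myUrl = ClusteredService.class.getResource("/com/myco/services/ehcache-clustered-local2.xml");
        Configuration xmlConfig = new XmlConfiguration(myUrl);
        cacheManager = CacheManagerBuilder.newCacheManager(xmlConfig);
        cacheManager.init();
    }

    @Deactivate
    protected void deactivate() {
        // releases the Terracotta connection so the next installation can reconnect cleanly
        if (cacheManager != null) {
            cacheManager.close();
        }
    }
}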

Related

Hazelcast Cache Manager: Cannot overwrite a Cache's CacheManager

On an application I am working on, I am trying to upgrade from Hazelcast 3.6 to 3.12.4, and I am encountering some problems which reproduce easily when two or more tests are run together. The tests are all annotated with @WebAppConfiguration and include Spring's application configuration using @ContextConfiguration(classes = {AppConfig.class}).
As part of the configuration, I have a @Bean called CacheAwareStorage that initializes the CacheManager. The initialization is quite basic:
public <T, V> Cache<T, V> initCache(String cacheName, Class<T> keyType, Class<V> valueType) {
    Cache<T, V> cache = manager.getCache(cacheName, keyType, valueType);
    if (cache != null) {
        return cache;
    }
    // config is a javax.cache.configuration.Configuration prepared elsewhere in the class
    cache = manager.createCache(cacheName, config);
    return cache;
}
The problem occurs when the context is refreshed as part of the test suite, which I think is done in AbstractTestNGSpringContextTests since I don't explicitly refresh the context. The following error occurs, which results in only the first class of tests passing:
GenericWebApplicationContext: Refreshing org.springframework.web.context.support.GenericWebApplicationContext@6170989a
....
WARN GenericWebApplicationContext: Exception encountered during context initialization - cancelling refresh attempt
....
Factory method 'tokenStore' threw exception
nested exception is java.lang.IllegalStateException: Cannot overwrite a Cache's CacheManager.
Looking over what has changed, I see that AbstractHazelcastCacheManager throws an IllegalStateException which comes from the Hazelcast CacheProxy. To be more precise, manager.getCache() -> getCacheUnchecked() -> creates a cache proxy in createCacheProxy() -> and sets the proxy's manager to the current manager in cacheProxy.setCacheManager().
Starting with Hazelcast v3.9, this is no longer allowed once the manager has already been set.
What would be a solution for this? It may be that there is a bug in Hazelcast (there is no check whether the manager being set is actually different from the one already there), however I am looking for something that I can do on my side. Why getCache() tries to re-create the proxy is another thing that I do not understand.
I assume that I must do something so that the Context is not refreshed, however I don't know how (if at all) I can do that.
The problem was due to the way the cache manager bean was created. I used the internal Hazelcast cache manager, and a new instance was created each time. Using the JCache API as below solved the problem:
@Bean
public CacheManager cacheManager() {
    // pass the class name of the Hazelcast server caching provider to Caching.getCachingProvider() to disambiguate if needed
    CachingProvider provider = Caching.getCachingProvider();
    return provider.getCacheManager(null, null, HazelcastCachingProvider.propertiesByInstanceItself(HAZELCAST_INSTANCE));
}
Help received from Hazelcast team on this: https://github.com/hazelcast/hazelcast/issues/16212

Environment Configuration Spring Boot

Created a Spring Boot application that will need to migrate from "Local Dev" to "Test", "QA" and "Prod" environments.
Application currently uses a "application.properties" for database connectivity and Kafka configuration.
I am wanting to deploy to "Test" and realized that the properties will not work for that environment. After reading the reference docs, it looks like I can simply copy the application.properties file, add a new application-test.properties (and so on), and then run the standalone jar with -Dspring.profiles.active=test, and that seems to work.
But by the time I am done, that means I have 4 different application-XXXXX.properties files in the jar, which may or may not be bad. I know the ultimate configuration would be to use Spring Config Server, but right now we are not there with regards to this.
Can anyone validate that using multiple properties files is viable and will work for a bit, or whether I am looking at this all wrong? I do not want to have configuration on the servers in each environment, as I am thinking these mini-services should be self-contained.
Any input would be appreciated.
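For what it's worth, a sketch of the layout described above (the jar name is hypothetical):

src/main/resources/
    application.properties        # shared defaults
    application-test.properties   # Test overrides (DB, Kafka, ...)
    application-qa.properties     # QA overrides
    application-prod.properties   # Prod overrides

# pick the environment at launch time
java -Dspring.profiles.active=test -jar my-service.jar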
In a word, your configuration file should be outside your source code.
@Configuration
@PropertySource(value = {"classpath:system.properties"})
public class EnvironmentConfig {

    @Bean
    public static PropertySourcesPlaceholderConfigurer properties() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}
Let's say it's named "system.properties"; it will be uploaded to the server at deployment time, under your application classpath.
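Values from that file can then be injected in the usual way; for example (the property key and class here are hypothetical):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

@Service
public class DatabaseSettings {

    // resolved from system.properties by the PropertySourcesPlaceholderConfigurer above
    @Value("${db.url}")
    private String dbUrl;
}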

Reload property value when external property file changes, Spring Boot

I am using Spring Boot, and I have two external properties files so that I can easily change their values.
But I would like the Spring app to reload the changed value when the file is updated, just like reading from the files directly. Since a property file is easy enough to meet my need, I hope I don't necessarily need a database or anything else.
I use two different ways to load property values; the code samples look like this:
@RestController
public class Prop1Controller {
    @Value("${prop1}")
    private String prop1;

    @RequestMapping(value = "/prop1", method = RequestMethod.GET)
    public String getProp() {
        return prop1;
    }
}

@RestController
public class Prop2Controller {
    @Autowired
    private Environment env;

    @RequestMapping(value = "/prop2/{sysId}", method = RequestMethod.GET)
    public String prop2(@PathVariable String sysId) {
        return env.getProperty("prop2." + sysId);
    }
}
I will boot my application with
-Dspring.config.location=conf/my.properties
I'm afraid you will need to restart Spring context.
I think the only way to achieve your need is to enable Spring Cloud. There is a refresh endpoint, /refresh, which refreshes the context and beans.
I'm not quite sure if you need a spring-cloud-config-server (it's a microservice and very easy to build) where your config is stored (Git or SVN), or if it is also usable just with the application.properties file in the application.
Here you can find the doc to the refresh scope and spring cloud.
You should be able to use Spring Cloud for that
Add this as a dependency
compile group: 'org.springframework.cloud', name: 'spring-cloud-starter', version: '1.1.2.RELEASE'
And then use the @RefreshScope annotation:
A Spring @Bean that is marked as @RefreshScope will get special treatment when there is a configuration change. This addresses the problem of stateful beans that only get their configuration injected when they are initialized. For instance, if a DataSource has open connections when the database URL is changed via the Environment, we probably want the holders of those connections to be able to complete what they are doing. Then the next time someone borrows a connection from the pool he gets one with the new URL.
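Applied to Prop1Controller from the question, a refresh-scoped version might look roughly like this (a sketch; the bean is rebuilt after a refresh, so the new value of prop1 is picked up):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope // re-created on refresh, so @Value fields are re-injected
public class Prop1Controller {

    @Value("${prop1}")
    private String prop1;

    @RequestMapping(value = "/prop1", method = RequestMethod.GET)
    public String getProp() {
        return prop1;
    }
}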
Also relevant if you have Spring Actuator
For a Spring Boot Actuator application there are some additional management endpoints:
POST to
/env to update the Environment and rebind @ConfigurationProperties and log levels
/refresh for re-loading the bootstrap context and refreshing the @RefreshScope beans
Spring Cloud Doc
(1) Spring Cloud's RestartEndpoint
You may use the RestartEndpoint: Programatically restart Spring Boot application / Refresh Spring Context
RestartEndpoint is an Actuator endpoint, bundled with spring-cloud-context.
However, RestartEndpoint will not monitor for file changes; you'll have to handle that yourself.
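As a rough sketch of handling that yourself, assuming spring-cloud-context is on the classpath and scheduling is enabled with @EnableScheduling (the polling approach, class name and file path are just examples):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.context.restart.RestartEndpoint;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PropertiesFileWatcher {

    @Autowired
    private RestartEndpoint restartEndpoint;

    // external file from the question; a java.nio WatchService would also work instead of polling
    private final Path propertiesFile = Paths.get("conf/my.properties");
    private long lastModified = -1;

    @Scheduled(fixedDelay = 10000)
    public void checkForChanges() throws Exception {
        long current = Files.getLastModifiedTime(propertiesFile).toMillis();
        if (lastModified > 0 && current != lastModified) {
            restartEndpoint.restart(); // restarts the whole application context
        }
        lastModified = current;
    }
}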
(2) devtools
I don't know if this is for a production application or not. You may hack devtools a little to do what you want.
Take a look at this other answer I wrote for another question: Force enable spring-boot DevTools when running Jar
Devtools monitors for file changes:
Applications that use spring-boot-devtools will automatically restart
whenever files on the classpath change.
Technically, devtools is built to only work within an IDE. With the hack, it also works when launched from a jar. However, I would not do that for a real production application; you decide if it fits your needs.
I know this is an old thread, but it may help someone in the future.
You can use a scheduler to periodically refresh properties.
// MyApplication.java
@EnableScheduling

// application.properties
management.endpoint.refresh.enabled = true

// ContextRefreshConfig.java
@Autowired
private RefreshEndpoint refreshEndpoint;

@Scheduled(fixedDelay = 60000, initialDelay = 10000)
public Collection<String> refreshContext() {
    final Collection<String> properties = refreshEndpoint.refresh();
    LOGGER.log(Level.INFO, "Refreshed Properties {0}", properties);
    return properties;
}

// add spring-cloud-starter to the pom file.
Attributes annotated with @Value are refreshed if the bean is annotated with @RefreshScope.
Configurations annotated with @ConfigurationProperties are refreshed without @RefreshScope.
Hope this helps.
You can follow the ContextRefresher.refresh() implementation:
public synchronized Set<String> refresh() {
    Map<String, Object> before = extract(
            this.context.getEnvironment().getPropertySources());
    addConfigFilesToEnvironment();
    Set<String> keys = changes(before,
            extract(this.context.getEnvironment().getPropertySources())).keySet();
    this.context.publishEvent(new EnvironmentChangeEvent(context, keys));
    this.scope.refreshAll();
    return keys;
}
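For reference, ContextRefresher is itself registered as a bean by spring-cloud-context, so the same refresh can be triggered from your own code; a minimal sketch (the class name is hypothetical):

import java.util.Set;

import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.stereotype.Component;

@Component
public class ManualRefresher {

    private final ContextRefresher contextRefresher;

    public ManualRefresher(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    public Set<String> refreshNow() {
        // equivalent to hitting the /refresh endpoint: re-reads the config files
        // and refreshes @RefreshScope beans
        return contextRefresher.refresh();
    }
}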

Develop programmatically a JGroups Channel for Infinispan in a Cluster

I'm working with Infinispan 8.1.0.Final and WildFly 10 in a cluster setup.
Each server is started running
C:\wildfly-10\bin\standalone.bat --server-config=standalone-ha.xml -b 10.09.139.215 -u 230.0.0.4 -Djboss.node.name=MyNode
I want to use Infinispan in distributed mode in order to have a distributed cache. But due to mandatory requirements I need to build a JGroups channel that dynamically reads some properties from a file.
This channel is necessary for me to build a cluster group based on TYPE and NAME (for example Type1-MyCluster). Each server that wants to join a cluster has to use the related channel.
Searching the net, I have found some code like the one below:
public class JGroupsChannelServiceActivator implements ServiceActivator {

    // fields inferred from their usage below; the channel name matches the one in the error message
    private static final String CHANNEL_NAME = "clusterWatchdog";
    private static final Logger log = Logger.getLogger(JGroupsChannelServiceActivator.class.getName());

    private String stackName;
    private ServiceName channelServiceName;

    @Override
    public void activate(ServiceActivatorContext context) {
        stackName = "udp";
        try {
            channelServiceName = ChannelService.getServiceName(CHANNEL_NAME);
            createChannel(context.getServiceTarget());
        } catch (IllegalStateException e) {
            log.log(Level.INFO, "channel seems to already exist, skipping creation and binding.");
        }
    }

    void createChannel(ServiceTarget target) {
        InjectedValue<ChannelFactory> channelFactory = new InjectedValue<>();
        ServiceName serviceName = ChannelFactoryService.getServiceName(stackName);
        ChannelService channelService = new ChannelService(CHANNEL_NAME, channelFactory);
        target.addService(channelServiceName, channelService)
              .addDependency(serviceName, ChannelFactory.class, channelFactory)
              .install();
    }
}
I have created the META-INF/services/....JGroupsChannelServiceActivator file.
When I deploy my war into the server, the operation fails with this error:
"{\"WFLYCTL0180: Services with missing/unavailable dependencies\" => [\"jboss.jgroups.channel.clusterWatchdog is missing [jboss.jgroups.stack.udp]\"]}"
What am I doing wrong?
How can I build a channel the way I need?
How can I tell Infinispan to use that channel for distributed caching?
The proposal you found is implementation-dependent and might cause a lot of problems during an upgrade. I wouldn't recommend it.
Let me check if I understand your problem correctly - you need to be able to create a JGroups channel manually because you use some custom properties for it.
If that is the case - you could obtain a JGroups channel as suggested here. But then you obtain a JChannel instance which is already connected (so this might be too late for your case).
Unfortunately, since WildFly manages the JChannel (it is required for clustering sessions, EJBs etc.), the only way to get full control of the JChannel creation process is to use Infinispan in embedded (library) mode. This would require adding infinispan-embedded to your WAR dependencies. After that you can initialize it similarly to this test.
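For illustration, a sketch of embedded mode with a custom JGroups stack read from a file (the file name, cluster name and cache name are assumptions; pointing the transport at a JGroups XML via the configurationFile property is one option, building a JChannel yourself and handing it to the transport is another):

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class EmbeddedCacheBootstrap {

    public DefaultCacheManager start() {
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.transport()
              .clusterName("Type1-MyCluster")                          // cluster group from the question
              .addProperty("configurationFile", "my-jgroups-udp.xml"); // custom JGroups stack definition

        ConfigurationBuilder cache = new ConfigurationBuilder();
        cache.clustering().cacheMode(CacheMode.DIST_SYNC);             // distributed cache mode

        DefaultCacheManager manager = new DefaultCacheManager(global.build(), cache.build());
        Cache<String, String> distributed = manager.getCache("myDistributedCache");
        distributed.put("ping", "pong");
        return manager;
    }
}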

Spring Cache is not cleared on production

I am using the Spring cache mechanism with SimpleCacheManager/ConcurrentMapCache.
And I am using a web service to clear the cache; the following is the code:
for (String cacheName : cacheManager.getCacheNames()) {
    Cache cache = cacheManager.getCache(cacheName);
    if (cache != null) {
        cache.clear();
    }
}
When I call this code from a REST web service on a local VM, I can see it clearing the cache and can see the changes made in the database by the other service. However, on the production environment the web service returns a 200 status in the logs, but it still shows the old data.
On production we have 2 servers.
We have to restart our application to refresh the cache and get the latest data from the database.
I used to do this by creating a void method annotated with @CacheEvict(allEntries=true); this annotation is similar to @CacheRemoveAll from JSR-107.
Something like that:
@CacheEvict(allEntries = true)
public void evictAll() {
    // Do nothing
}
I know, it's ugly, but works to me.
My two cents: avoid using the default Spring cache manager in production; use a more sophisticated cache manager instead, like Guava or EhCache.
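For example, an Ehcache 2.x-backed cache manager can be wired in roughly like this (a sketch; the ehcache.xml location is an assumption):

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public EhCacheManagerFactoryBean ehCacheManagerFactory() {
        EhCacheManagerFactoryBean factory = new EhCacheManagerFactoryBean();
        factory.setConfigLocation(new ClassPathResource("ehcache.xml")); // cache definitions live here
        factory.setShared(true);
        return factory;
    }

    @Bean
    public CacheManager cacheManager() {
        // replaces SimpleCacheManager/ConcurrentMapCache with an Ehcache-backed manager
        return new EhCacheCacheManager(ehCacheManagerFactory().getObject());
    }
}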
Cheers.

Resources