I implemented WebSockets in my application. I copied the configuration and dependencies from a JHipster-generated app, but I am getting the following errors:
java.lang.IllegalArgumentException: No 'javax.websocket.server.ServerContainer' ServletContext attribute. Are you running in a Servlet container that supports JSR-356?
and
org.apache.catalina.connector.ClientAbortException: java.io.IOException: An established connection was aborted by the software in your host machine
I believe these errors are the reason the socket connection is inconsistent, and therefore the client is not able to send or receive any messages.
I searched for a solution, but other posts didn't help (e.g. adding the Glassfish dependencies).
These are my WebSocket dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-websocket</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-messaging</artifactId>
</dependency>
Do I need to include some other dependencies or is the problem elsewhere?
I found a solution here.
I added these 2 beans:
@Bean
public TomcatServletWebServerFactory tomcatContainerFactory() {
    TomcatServletWebServerFactory factory = new TomcatServletWebServerFactory();
    factory.setTomcatContextCustomizers(Collections.singletonList(tomcatContextCustomizer()));
    return factory;
}

@Bean
public TomcatContextCustomizer tomcatContextCustomizer() {
    return new TomcatContextCustomizer() {
        @Override
        public void customize(Context context) {
            context.addServletContainerInitializer(new WsSci(), null);
        }
    };
}
I'm getting a JTA exception while running an example from the Pro Spring 5 book (12. Using Spring Remote, the boot-jms project). Here is the entirety of the code (excluding the imports); it only has one file, Application.java:
@SpringBootApplication
public class Application {

    private static Logger logger = LoggerFactory.getLogger(Application.class);

    @Bean
    public JmsListenerContainerFactory<DefaultMessageListenerContainer> connectionFactory(ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        // This provides all boot's default to this factory, including the message converter
        configurer.configure(factory, connectionFactory);
        // You could still override some of Boot's default if necessary.
        return factory;
    }

    public static void main(String... args) throws Exception {
        ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args);
        JmsTemplate jmsTemplate = ctx.getBean(JmsTemplate.class);
        jmsTemplate.setDeliveryDelay(5000L);
        for (int i = 0; i < 10; ++i) {
            logger.info(">>> Sending: Test message: " + i);
            jmsTemplate.convertAndSend("prospring5", "Test message: " + i);
        }
        System.in.read();
        ctx.close();
    }

    @JmsListener(destination = "prospring5", containerFactory = "connectionFactory")
    public void onMessage(Message message) {
        TextMessage textMessage = (TextMessage) message;
        try {
            logger.info(">>> Received: " + textMessage.getText());
        } catch (JMSException ex) {
            logger.error("JMS error", ex);
        }
    }
}
First, I got the following error:
Exception in thread "main" java.lang.AbstractMethodError: com.atomikos.jms.AtomikosJmsMessageProducerProxy.setDeliveryDelay(J)V
    at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:628)
    at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:608)
    at org.springframework.jms.core.JmsTemplate.lambda$send$3(JmsTemplate.java:586)
    at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:504)
    at org.springframework.jms.core.JmsTemplate.send(JmsTemplate.java:584)
    at org.springframework.jms.core.JmsTemplate.convertAndSend(JmsTemplate.java:661)
    at com.apress.prospring5.ch12.Application.main(Application.java:54)
Then I commented out jmsTemplate.setDeliveryDelay(5000L);. That let me get past this error, but I only ended up with the JTA exception.
Exception in thread "main" org.springframework.jms.UncategorizedJmsException: Uncategorized exception occurred during JMS processing; nested exception is com.atomikos.jms.AtomikosTransactionRequiredJMSException: The JMS session you are using requires a JTA transaction context for the calling thread and none was found. Please correct your code to do one of the following:
start a JTA transaction if you want your JMS operations to be subject to JTA commit/rollback, or
increase the maxPoolSize of the AtomikosConnectionFactoryBean to avoid transaction timeout while waiting for a connection, or
create a non-transacted session and do session acknowledgment yourself, or
set localTransactionMode to true so connection-level commit/rollback are enabled.
    at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:311)
    at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:185)
    at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:507)
    at org.springframework.jms.core.JmsTemplate.send(JmsTemplate.java:584)
    at org.springframework.jms.core.JmsTemplate.convertAndSend(JmsTemplate.java:661)
    at com.apress.prospring5.ch12.Application.main(Application.java:54)
Caused by: com.atomikos.jms.AtomikosTransactionRequiredJMSException: The JMS session you are using requires a JTA transaction context for the calling thread and none was found. Please correct your code to do one of the following:
start a JTA transaction if you want your JMS operations to be subject to JTA commit/rollback, or
increase the maxPoolSize of the AtomikosConnectionFactoryBean to avoid transaction timeout while waiting for a connection, or
create a non-transacted session and do session acknowledgment yourself, or
set localTransactionMode to true so connection-level commit/rollback are enabled.
    at com.atomikos.jms.AtomikosTransactionRequiredJMSException.throwAtomikosTransactionRequiredJMSException(AtomikosTransactionRequiredJMSException.java:23)
    at com.atomikos.jms.ConsumerProducerSupport.enlist(ConsumerProducerSupport.java:90)
    at com.atomikos.jms.AtomikosJmsMessageProducerProxy.send(AtomikosJmsMessageProducerProxy.java:34)
    at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:634)
    at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:608)
    at org.springframework.jms.core.JmsTemplate.lambda$send$3(JmsTemplate.java:586)
    at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:504)
    ... 3 more
My project includes the spring-boot-starter-artemis and spring-boot-starter-jta-atomikos dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-artemis</artifactId>
    <version>2.0.6.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>artemis-jms-server</artifactId>
    <version>2.4.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jta-atomikos</artifactId>
    <version>2.0.6.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.atomikos</groupId>
    <artifactId>transactions-hibernate4</artifactId>
    <version>4.0.4</version>
</dependency>
The error message seems to suggest a transaction is missing, so I added the @Transactional annotation on the listener and the JmsTemplate, but the same problem persists.
The problem seems to be that I had included spring-boot-starter-jta-atomikos and transactions-hibernate4 in the dependencies. Once I removed these two dependencies, the code works. The intention of this example is not to demonstrate transactions but to show Spring Boot doing JMS over Artemis. But since these two transaction-related libraries were on the classpath, Spring Boot automatically required a transaction.
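If you did want to keep the Atomikos starter instead of removing it, one rough alternative (my assumption, following the error message's first suggestion, not something from the book) would be to run the sends inside a JTA transaction, for example via a TransactionTemplate backed by the auto-configured transaction manager:

import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

// Hypothetical helper, untested: wraps the sends from main() in a transaction so the
// Atomikos-managed JMS session finds an active JTA transaction context.
public final class TransactionalSender {

    public static void sendAll(ConfigurableApplicationContext ctx) {
        JmsTemplate jmsTemplate = ctx.getBean(JmsTemplate.class);
        PlatformTransactionManager txManager = ctx.getBean(PlatformTransactionManager.class);
        TransactionTemplate tx = new TransactionTemplate(txManager);
        tx.execute(status -> {
            for (int i = 0; i < 10; ++i) {
                jmsTemplate.convertAndSend("prospring5", "Test message: " + i);
            }
            return null;
        });
    }

    private TransactionalSender() {
    }
}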
I have a Spring Boot project using JDK 11, PrimeFaces 8.0, and Spring Boot 2.3.0, deployed on Tomcat 9.0.35. In some deployments my file upload component triggers the listener method fine. In others, it doesn't trigger it at all, leaving no error message or log.
I have tried several restarts, and the same build produces the same result every time (the upload fails). But without touching the source, another build can make it work.
In another test, I built and deployed the project 4-5 times with exactly the same source code, and the upload worked in all of them. For a last test, I just added a space character after a Java statement's ';' to change the binary, rebuilt, redeployed, and noticed the file upload no longer working.
I can't figure out why the behaviour is not stable.
I am stuck and have no idea how to debug it or identify the problem. Any suggestion is welcome.
In pom.xml I have:
<dependency>
    <groupId>commons-fileupload</groupId>
    <artifactId>commons-fileupload</artifactId>
    <version>1.4</version>
</dependency>
<dependency>
    <groupId>com.sun.faces</groupId>
    <artifactId>jsf-api</artifactId>
    <version>2.2.20</version>
</dependency>
<dependency>
    <groupId>com.sun.faces</groupId>
    <artifactId>jsf-impl</artifactId>
    <version>2.2.20</version>
</dependency>
<dependency>
    <groupId>org.primefaces</groupId>
    <artifactId>primefaces</artifactId>
    <version>8.0</version>
</dependency>
<dependency>
    <groupId>com.google.code</groupId>
    <artifactId>kaptcha</artifactId>
    <version>2.3.0</version>
</dependency>
FileUpload component on page:
<h:form id="bulkDataInsertForm" enctype="multipart/form-data">
    .
    .
    <p:fileUpload id="datafileuploader"
                  listener="#{bulkDataInsertBean.handleFileUpload}"
                  uploadLabel="upload file"
                  cancelLabel="cancel"
                  label="choose file"
                  update=":bulkDataInsertForm:bulkDataInsertgrowl :bulkDataInsertForm:listFileUploadPanel :bulkDataInsertForm:errorText"
                  allowTypes="/(\.|\/)(xlsx)$/"
                  sizeLimit="10485760"
                  multiple="false"
                  invalidFileMessage="file type error"
                  mode="advanced" dragDropSupport="true"
                  ajax="true">
    </p:fileUpload>
    .
    .
</h:form>
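For reference, the listener side that the page points at looks roughly like this sketch. How the bean is registered and scoped is my assumption; only the FileUploadEvent signature matters for the listener being invoked.

import java.io.Serializable;

import org.primefaces.event.FileUploadEvent;
import org.primefaces.model.file.UploadedFile; // older PrimeFaces versions use org.primefaces.model.UploadedFile
import org.springframework.stereotype.Component;

// Sketch of the backing bean the page refers to as bulkDataInsertBean.
@Component("bulkDataInsertBean")
public class BulkDataInsertBean implements Serializable {

    public void handleFileUpload(FileUploadEvent event) {
        UploadedFile file = event.getFile();
        // Process the uploaded .xlsx file here, e.g. starting from file.getFileName()
        // and the file contents.
    }
}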
I have <h:head> in the parent page, as advised here: How to use PrimeFaces p:fileUpload? Listener method is never invoked or UploadedFile is null / throws an error / not usable.
And the ServletInitializer:
@EnableEncryptableProperties
@SpringBootApplication
@ComponentScan({ "com.myapp" })
public class WebApplication extends SpringBootServletInitializer {

    @Bean
    public ServletRegistrationBean kaptchaServletRegistration() {
        ServletRegistrationBean bean = new ServletRegistrationBean(new KaptchaServlet(), "/kaptcha.jpg");
        return bean;
    }

    @Bean
    public ServletRegistrationBean facesServletRegistration() {
        ServletRegistrationBean registration = new ServletRegistrationBean<>(new FacesServlet(), "*.xhtml");
        registration.setLoadOnStartup(1);
        return registration;
    }

    @Bean
    public ServletContextInitializer servletContextInitializer() {
        return servletContext -> {
            servletContext.setInitParameter("com.sun.faces.forceLoadConfiguration", Boolean.TRUE.toString());
            servletContext.setInitParameter("primefaces.THEME", "blitzer");
            servletContext.setInitParameter("primefaces.CLIENT_SIDE_VALIDATION", Boolean.TRUE.toString());
            servletContext.setInitParameter("javax.faces.FACELETS_SKIP_COMMENTS", Boolean.TRUE.toString());
            servletContext.setInitParameter("primefaces.FONT_AWESOME", Boolean.TRUE.toString());
            servletContext.setInitParameter("javax.faces.ENABLE_CDI_RESOLVER_CHAIN", Boolean.TRUE.toString());
        };
    }

    @Bean
    public ServletListenerRegistrationBean<ConfigureListener> jsfConfigureListener() {
        return new ServletListenerRegistrationBean<>(new ConfigureListener());
    }

    // Registers the FileUploadFilter at the front of the filter chain, so the uploaded
    // multipart stream is not consumed by another filter first.
    @Bean
    public FilterRegistrationBean primeFacesFileUploadFilter() {
        FilterRegistrationBean registration = new FilterRegistrationBean(new org.primefaces.webapp.filter.FileUploadFilter(), facesServletRegistration());
        registration.addUrlPatterns("/*");
        registration.setDispatcherTypes(DispatcherType.REQUEST, DispatcherType.FORWARD);
        registration.setName("primeFacesFileUploadFilter");
        registration.setOrder(1);
        return registration;
    }
}
Note: on some forums I have read that the file upload filter's order can change, so some other filter may consume the file stream being uploaded, leaving the upload filter with no input. It must also accept forwarded requests. So I added the primeFacesFileUploadFilter shown above, but it did not help.
This is the order of the filter chain during ServletContextInitializer after adding that code:
Filter names at FilterChain by order: [requestContextFilter, Tomcat WebSocket (JSR356) Filter, errorPageFilter, primeFacesFileUploadFilter, characterEncodingFilter, springSecurityFilterChain, formContentFilter]
Specifying
servletContext.setInitParameter("primefaces.UPLOADER", "native");
in the servletContextInitializer resulted in sometimes successful and sometimes failing (listener not triggered) file uploads.
But after specifying:
servletContext.setInitParameter("primefaces.UPLOADER", "commons");
instead of "native", I did nearly 10 builds, deploys, and tests, and in all of them the file uploads triggered properly. Of course I still can't guarantee it's the absolute solution, but it's highly likely.
I learned from here: What's the difference between Jetty and Netty?
Netty is not a server at all.
but from here: https://stackoverflow.com/a/57297055/10894456 I see that it is a server.
Anyway, I guess that to run a web application there has to be a server. So does Netty fill that role, or do I still need some kind of server (Tomcat, Jetty, or whatever)?
But from here: Don't spring-boot-starter-web and spring-boot-starter-webflux work together? I learned that Netty is not compatible with Tomcat...
Please clarify: what is the easiest way to create a reactive WebFlux CRUD application? How does Netty help? What would the server be if Netty is used? How is Netty compatible with it?
Edit:
OK, I see Netty is not a server by itself; you need to write something like:
public class NettyServer {

    private int port;

    // constructor

    public static void main(String[] args) throws Exception {
        int port = args.length > 0
                ? Integer.parseInt(args[0])
                : 8080;
        new NettyServer(port).run();
    }

    public void run() throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    public void initChannel(SocketChannel ch) throws Exception {
                        ch.pipeline().addLast(new RequestDecoder(),
                                new ResponseDataEncoder(),
                                new ProcessingHandler());
                    }
                }).option(ChannelOption.SO_BACKLOG, 128)
                .childOption(ChannelOption.SO_KEEPALIVE, true);

            ChannelFuture f = b.bind(port).sync();
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
}
But I also believe literally no one does this when creating a simple WebFlux CRUD service. So the questions remain: how does Netty help? What would the server be if Netty is used? How is Netty compatible with it?
Edit 2: after hours of browsing I realized that Netty is not a server by itself; it's just a framework using channels/NIO2, but spring-boot-starter-reactor-netty offers non-blocking and backpressure-ready TCP/HTTP/UDP clients and servers based on the Netty framework.
But you can use spring-boot-starter-tomcat, spring-boot-starter-jetty, or spring-boot-starter-undertow instead, in this manner:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
    <exclusions>
        <!-- Exclude the Netty dependency -->
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-reactor-netty</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- Use Jetty instead -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jetty</artifactId>
</dependency>
So it's just two concepts that are confusing and easy to mix up: Netty and spring-boot-starter-reactor-netty.
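To make that concrete, here is a minimal sketch of a reactive endpoint (the class and path names are illustrative): with spring-boot-starter-webflux on the classpath and spring-boot-starter-reactor-netty left in (the default), Spring Boot starts the embedded Reactor Netty server itself, so no hand-written NettyServer class is needed.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Mono;

// Minimal WebFlux application; the embedded server (Reactor Netty by default, or
// Jetty/Tomcat/Undertow if swapped as in the pom above) is started by Spring Boot.
@SpringBootApplication
public class ReactiveCrudApplication {

    public static void main(String[] args) {
        SpringApplication.run(ReactiveCrudApplication.class, args);
    }

    @RestController
    static class ItemController {

        @GetMapping("/items/{id}")
        Mono<String> getItem(@PathVariable String id) {
            // Illustrative handler; a real CRUD app would call a reactive repository here.
            return Mono.just("item " + id);
        }
    }
}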
After adding cache2k to my project, some @SpringBootTest tests stopped working with an error:
java.lang.IllegalStateException: Cache already created: 'cache'
Below I provide a minimal example to reproduce it.
Go to start.spring.io and create the simplest Maven project with the Cache starter, then add the cache2k dependencies:
<properties>
    <java.version>1.8</java.version>
    <cache2k-version>1.2.2.Final</cache2k-version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.cache2k</groupId>
        <artifactId>cache2k-api</artifactId>
        <version>${cache2k-version}</version>
    </dependency>
    <dependency>
        <groupId>org.cache2k</groupId>
        <artifactId>cache2k-core</artifactId>
        <version>${cache2k-version}</version>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.cache2k</groupId>
        <artifactId>cache2k-spring</artifactId>
        <version>${cache2k-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-cache</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
Now configure the simplest cache:
@SpringBootApplication
@EnableCaching
public class CachingDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(CachingDemoApplication.class, args);
    }

    @Bean
    public CacheManager springCacheManager() {
        SpringCache2kCacheManager cacheManager = new SpringCache2kCacheManager();
        cacheManager.addCaches(b -> b.name("cache"));
        return cacheManager;
    }
}
And add any service (which we will mock with @MockBean in one of our tests):
@Service
public class SomeService {

    public String getString() {
        System.out.println("Executing service method");
        return "foo";
    }
}
Now two @SpringBootTest tests are required to reproduce the issue:

@SpringBootTest
@RunWith(SpringRunner.class)
public class SpringBootAppTest {

    @Test
    public void getString() {
        System.out.println("Empty test");
    }
}

@RunWith(SpringRunner.class)
@SpringBootTest
public class WithMockedBeanTest {

    @MockBean
    SomeService service;

    @Test
    public void contextLoads() {
    }
}
Notice that the 2nd test has a mocked bean (@MockBean). This causes an error (stack trace below).
Caused by: java.lang.IllegalStateException: Cache already created: 'cache'
at org.cache2k.core.CacheManagerImpl.newCache(CacheManagerImpl.java:174)
at org.cache2k.core.InternalCache2kBuilder.buildAsIs(InternalCache2kBuilder.java:239)
at org.cache2k.core.InternalCache2kBuilder.build(InternalCache2kBuilder.java:182)
at org.cache2k.core.Cache2kCoreProviderImpl.createCache(Cache2kCoreProviderImpl.java:215)
at org.cache2k.Cache2kBuilder.build(Cache2kBuilder.java:837)
at org.cache2k.extra.spring.SpringCache2kCacheManager.buildAndWrap(SpringCache2kCacheManager.java:205)
at org.cache2k.extra.spring.SpringCache2kCacheManager.lambda$addCache$2(SpringCache2kCacheManager.java:143)
at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
at org.cache2k.extra.spring.SpringCache2kCacheManager.addCache(SpringCache2kCacheManager.java:141)
at org.cache2k.extra.spring.SpringCache2kCacheManager.addCaches(SpringCache2kCacheManager.java:132)
at com.example.cachingdemo.CachingDemoApplication.springCacheManager(CachingDemoApplication.java:23)
at com.example.cachingdemo.CachingDemoApplication$$EnhancerBySpringCGLIB$$2dce99ca.CGLIB$springCacheManager$0(<generated>)
at com.example.cachingdemo.CachingDemoApplication$$EnhancerBySpringCGLIB$$2dce99ca$$FastClassBySpringCGLIB$$bbd240c0.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:363)
at com.example.cachingdemo.CachingDemoApplication$$EnhancerBySpringCGLIB$$2dce99ca.springCacheManager(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154)
... 52 more
If you remove #MockBean, both tests will pass.
How can I avoid this error in my test suite?
Your second test represents a different ApplicationContext altogether, so the test framework will create a dedicated one for it. If cache2k is stateful (for instance, sharing the CacheManager for a given classloader if it already exists), the second context will attempt to create a new CacheManager while the first one is still active.
You either need to flag one of the tests as dirty (see @DirtiesContext), which will close the context and shut down the CacheManager, or you can replace the cache infrastructure with an option that does not require all that; see @AutoConfigureCache.
If cache2k works in such a way that it requires you to dirty the context, I'd highly recommend swapping it out using the latter option.
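A sketch of the two options applied to the failing test from the question (each class would live in its own file; the @AutoConfigureCache variant assumes it is allowed to replace your CacheManager bean in tests, so verify it against your setup):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.autoconfigure.core.AutoConfigureCache;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;

// Option 1: dirty the context so it (and the cache2k CacheManager) is closed after the test class.
@RunWith(SpringRunner.class)
@SpringBootTest
@DirtiesContext
public class WithMockedBeanTest {

    @MockBean
    SomeService service;

    @Test
    public void contextLoads() {
    }
}

// Option 2: swap the caching infrastructure for a simple test one instead of dirtying the context.
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureCache
public class WithMockedBeanCacheTest {

    @MockBean
    SomeService service;

    @Test
    public void contextLoads() {
    }
}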
Since I do not want any custom behavior in the tests, but just want to get rid of this error, the solution is to create the CacheManager with a unique name, like this:
@Bean
public CacheManager springCacheManager() {
    SpringCache2kCacheManager cacheManager = new SpringCache2kCacheManager("spring-" + hashCode());
    cacheManager.addCaches(b -> b.name("cache"));
    return cacheManager;
}
I encountered the same error when using cache2k with Spring Dev Tools, and ended up with the following code as the solution:
@Bean
public CacheManager cacheManager() {
    SpringCache2kCacheManager cacheManager = new SpringCache2kCacheManager();
    // To avoid the "Caused by: java.lang.IllegalStateException: Cache already created:"
    // error when Spring DevTools is enabled and code is reloaded
    if (cacheManager.getCacheNames().stream()
            .filter(name -> name.equals("cache"))
            .count() == 0) {
        cacheManager.addCaches(
                b -> b.name("cache")
        );
    }
    return cacheManager;
}
I am looking for a solution for distributed Spring configuration. I am thinking of storing it in Zookeeper. https://github.com/spring-cloud/spring-cloud-zookeeper does have that functionality, but apparently it requires Spring Boot.
Is there any similar library that I can use outside of Spring Boot?
Consul by HashiCorp
Consul is a popular option because it:
is open source,
includes service discovery and configuration,
supports multi-datacenter deployments out of the box,
etc.
It doesn't require you to use Spring Boot; it just provides the auto-configurations in case you do decide to go with Spring Boot. In other words, if you're not using Spring Boot, none of the configurations will apply automatically; you'll have to provide the configuration yourself.
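As an illustration of "providing the configuration yourself" outside of Spring Boot, a minimal sketch of reading a value from Consul's KV store over its HTTP API (the key path and host are assumptions; the "Value" field in the returned JSON is base64-encoded and would be decoded with whatever JSON library you use):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: fetch a configuration key from a local Consul agent's KV endpoint.
public class ConsulKvExample {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8500/v1/kv/config/myapp/db.url")) // illustrative key
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // Prints the raw JSON; the actual value is base64-encoded in the "Value" field.
        System.out.println(response.body());
    }
}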
Zookeeper is a good option, go for it.
EDIT:
To use Zookeeper without Spring Boot, you'd need to register the appropriate beans either manually or by importing the auto-configuration classes that Spring Boot would import for you implicitly. This rule of thumb generally applies to all Spring Boot-enabled modules.
In your case, you'd most likely need to import just ZookeeperConfigBootstrapConfiguration and ZookeeperConfigAutoConfiguration. The classes can be found in the spring-cloud-zookeeper-config module, so no Spring Boot dependencies are needed.
Alternatively, you should look at those classes and their @Imports and declare the beans manually.
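A sketch of that import-based approach (class names as mentioned above; the exact packages and whether both classes are needed may vary by spring-cloud-zookeeper version):

import org.springframework.cloud.zookeeper.config.ZookeeperConfigAutoConfiguration;
import org.springframework.cloud.zookeeper.config.ZookeeperConfigBootstrapConfiguration;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// Pulls in the Zookeeper config support that Spring Boot would otherwise auto-configure.
@Configuration
@Import({ ZookeeperConfigBootstrapConfiguration.class, ZookeeperConfigAutoConfiguration.class })
public class ZookeeperConfigImportConfiguration {
}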
I found a solution for using spring-cloud-zookeeper without Spring Boot, based on the idea provided here https://wenku.baidu.com/view/493cf9eba300a6c30d229f49.html
First, create a CloudEnvironment class that will create a PropertySource from Zookeeper:
CloudEnvironment.java
public class CloudEnvironment extends StandardServletEnvironment {

    @Override
    protected void customizePropertySources(MutablePropertySources propertySources) {
        super.customizePropertySources(propertySources);
        try {
            propertySources.addLast(initConfigServicePropertySourceLocator(this));
        } catch (Exception ex) {
            logger.warn("failed to initialize cloud config environment", ex);
        }
    }

    private PropertySource<?> initConfigServicePropertySourceLocator(Environment environment) {
        ZookeeperConfigProperties configProp = new ZookeeperConfigProperties();
        ZookeeperProperties props = new ZookeeperProperties();
        props.setConnectString("myzookeeper:2181");
        CuratorFramework fwk = curatorFramework(exponentialBackoffRetry(props), props);
        ZookeeperPropertySourceLocator propertySourceLocator = new ZookeeperPropertySourceLocator(fwk, configProp);
        PropertySource<?> source = propertySourceLocator.locate(environment);
        return source;
    }

    private CuratorFramework curatorFramework(RetryPolicy retryPolicy, ZookeeperProperties properties) {
        CuratorFrameworkFactory.Builder builder = CuratorFrameworkFactory.builder();
        builder.connectString(properties.getConnectString());
        CuratorFramework curator = builder.retryPolicy(retryPolicy).build();
        curator.start();
        try {
            curator.blockUntilConnected(properties.getBlockUntilConnectedWait(), properties.getBlockUntilConnectedUnit());
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return curator;
    }

    private RetryPolicy exponentialBackoffRetry(ZookeeperProperties properties) {
        return new ExponentialBackoffRetry(properties.getBaseSleepTimeMs(),
                properties.getMaxRetries(),
                properties.getMaxSleepMs());
    }
}
Then create a custom XmlWebApplicationContext class: it loads the PropertySource from Zookeeper when your web application starts and replaces the bootstrap magic of Spring Boot:
MyConfigurableWebApplicationContext.java
public class MyConfigurableWebApplicationContext extends XmlWebApplicationContext {

    @Override
    protected ConfigurableEnvironment createEnvironment() {
        return new CloudEnvironment();
    }
}
Last, in your web.xml file, add the following context-param to use your MyConfigurableWebApplicationContext class and bootstrap your CloudEnvironment:
<context-param>
    <param-name>contextClass</param-name>
    <param-value>com.kiabi.config.MyConfigurableWebApplicationContext</param-value>
</context-param>
If you use a standard property file configurer, it should still be loaded, so you can have properties in both a local file and Zookeeper.
For all this to work you need the spring-cloud-starter-zookeeper-config and curator-framework JARs on your classpath, along with their dependencies. If you use Maven, you can add the following to your pom.xml:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-zookeeper-dependencies</artifactId>
            <version>1.1.1.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zookeeper-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.curator</groupId>
        <artifactId>curator-framework</artifactId>
    </dependency>
</dependencies>