I am storing an application.properties file in my config server, and my client applications refer to the config server to download the property files.
Scenario 1:
When I change the value of the property server.port in my config server, can I reflect the change in my client application without restarting the application?
You can use @RefreshScope beans for this purpose. This is not ideal, but it is as close as you can get with the config server; it is a fairly advanced feature after all.
Beans marked with this annotation will cause Spring to clear its internal cache of the bean / configuration class upon an EnvironmentChangeEvent; a new instance of the bean will then be created the next time the bean is called.
To trigger such an event when the config server changes, you can either explicitly call the actuator's refresh endpoint or develop your own solution, for example based on a messaging system, so that the config server is a producer of a "change" message and the consumer is your application.
Now I can't say for sure whether it will work with server.port in particular, as I've personally never seen a need to change this property at runtime, but for your custom beans this method will do the job.
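As a hedged illustration, here is a minimal sketch of a @RefreshScope bean; the property name app.message and the controller are made up for the example:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Re-created lazily after an EnvironmentChangeEvent, so it picks up the new value.
@RefreshScope
@RestController
public class MessageController {

    // Hypothetical property served from the config server.
    @Value("${app.message:default}")
    private String message;

    @GetMapping("/message")
    public String getMessage() {
        return message;
    }
}

After the property changes in the config server's backing repository, a POST to the client's /actuator/refresh endpoint (exposed via management.endpoints.web.exposure.include=refresh on Spring Boot 2.x) publishes the EnvironmentChangeEvent that clears the cached bean.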
Here is a good tutorial about this topic
I wanted to run my Grails application (version 4.0.4) in a cluster. I tried to use Hazelcast to replicate the HTTP session across the nodes/instances, but somehow I couldn't override/replace the SessionRepository bean that Grails uses with the Hazelcast implementation.
My working configuration in Spring Boot is: I declare the Hazelcast bean and annotate the application with @EnableHazelcastHttpSession, which in turn introduces the new SessionRepository from Hazelcast.
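For reference, a minimal sketch of that working Spring Boot setup, assuming Spring Session Hazelcast is on the classpath (class and bean names are illustrative):

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.hazelcast.config.annotation.web.http.EnableHazelcastHttpSession;

@Configuration
@EnableHazelcastHttpSession
public class HttpSessionConfig {

    // Embedded Hazelcast instance that Spring Session's SessionRepository will use.
    @Bean
    public HazelcastInstance hazelcastInstance() {
        return Hazelcast.newHazelcastInstance(new Config());
    }
}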
But I couldn't make this configuration work in Grails and override the SessionRepository. (Although the app starts, it behaves very strangely.)
Any ideas?
Or do you suggest an alternative approach to implementing a distributed session in Grails? How have you replicated sessions in your past experience?
(P.S. The reason I chose Hazelcast is that, being a distributed cache which can be embedded in the application itself, it lets me avoid a dependency on an external service such as Redis to run the app. That is part of the requirement.)
Thank you.
I am new to Spring-Integration.
My use case is:
Listen to a RabbitMQ queue/topic, get the message, process it, and send it to another message broker (mostly it will be another RabbitMQ instance).
Expected load: 5000 messages/sec
In application.properties we can set configurations for one host.
How to use Spring Integration between two message brokers?
All the examples that I see are for one message broker. Any pointers on getting started with two message brokers and Spring Integration?
Regards,
Mahesh
Since you mention an application.properties, it sounds like you use Spring Boot with its auto-configuration feature. That is a very important detail in your question, because Spring Boot is opinionated about auto-configuration and you can really only have one broker connection configuration auto-configured. If you would like to have another, similar connection in the same application, then you have to forget that auto-configuration feature. You can still use the mentioned application.properties, but you have to manage the configuration manually.
Since you are talking about a RabbitMQ connection, you need to exclude RabbitAutoConfiguration and manage all the required beans manually:
@SpringBootApplication(exclude = RabbitAutoConfiguration.class)
You can still use @EnableConfigurationProperties(RabbitProperties.class) on one of your @Configuration classes to be able to inject that RabbitProperties and populate the respective CachingConnectionFactory. For the second broker you can introduce your own @ConfigurationProperties, or just configure everything manually, reading properties via @Value. See more about manual connection factory configuration in the Spring AMQP reference manual: https://docs.spring.io/spring-amqp/docs/2.2.1.RELEASE/reference/html/#connections
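A minimal sketch under those assumptions; the secondary.rabbitmq.* property names and the bean names are made up for the example:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration;
import org.springframework.boot.autoconfigure.amqp.RabbitProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Primary;

@SpringBootApplication(exclude = RabbitAutoConfiguration.class)
@EnableConfigurationProperties(RabbitProperties.class)
public class TwoBrokersApplication {

    // Broker 1: built manually from the standard spring.rabbitmq.* properties.
    @Bean
    @Primary
    public CachingConnectionFactory primaryConnectionFactory(RabbitProperties props) {
        CachingConnectionFactory cf = new CachingConnectionFactory(props.getHost(), props.getPort());
        cf.setUsername(props.getUsername());
        cf.setPassword(props.getPassword());
        return cf;
    }

    // Broker 2: configured via custom properties read with @Value.
    @Bean
    public CachingConnectionFactory secondaryConnectionFactory(
            @Value("${secondary.rabbitmq.host}") String host,
            @Value("${secondary.rabbitmq.port:5672}") int port) {
        return new CachingConnectionFactory(host, port);
    }

    // Template bound to the second broker, e.g. as the output side of an integration flow.
    @Bean
    public RabbitTemplate secondaryRabbitTemplate(
            @Qualifier("secondaryConnectionFactory") ConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }

    public static void main(String[] args) {
        SpringApplication.run(TwoBrokersApplication.class, args);
    }
}

A Spring Integration flow can then use Amqp.inboundAdapter(...) against the primary connection factory on the consuming side and Amqp.outboundAdapter(secondaryRabbitTemplate) on the producing side, so each end of the flow talks to its own broker.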
Our Java Spring Boot application creates/declares queues (if they do not exist) after successfully connecting to a certain exchange/topic.
Is it possible (from the RabbitMQ admin panel) to prohibit certain users (in this case the one used by this Spring Boot app) from creating/declaring a queue if it does not exist?
Thanks!
You can configure the permissions of the user the Spring Boot app uses to connect to the broker.
This is achieved by supplying three regular expressions (configure, write, read); if you leave the first one empty ("^$"), the user will not be able to declare any queue, as mentioned in the complete documentation.
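For example, via rabbitmqctl (the user name and vhost below are placeholders), an empty configure regex blocks declarations while still allowing publishing and consuming:

rabbitmqctl set_permissions -p / spring_app_user "^$" ".*" ".*"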
You can also disable the RabbitAdmin bean by adding the property spring.rabbitmq.dynamic=false to the app configuration file, so Spring will not try to declare anything.
I am using Spring Portlet MVC as my front end and connecting to a remote EJB running on WAS. In my configuration file for the portlet, where I specify the remote EJB lookup URL, I have specified the URLs as a cluster since the EJB is deployed on a clustered WAS. So the URL looks like: iiop://server1:port,iiop://server2:port.
Now, to save resources, Spring MVC caches the initial context. I am noticing that Spring is always able to make a connection to the remote EJB as long as one of the servers is up.
This is confusing to me, since the cluster is resolved (for lack of a better word) at the time the initial context is looked up, and after that, if a cluster member goes down, there should be a connection exception. So how does Spring know when it should automatically refresh its initial context because the old context has become stale?
I found that in the applicationContext.xml file, there is the declaration:
<jee:remote-slsb id="remoteService"
        jndi-name="com.business.ejb.ServiceSLRemote"
        business-interface="com.business.ejb.ServiceSLRemote"
        cache-home="true"
        lookup-home-on-startup="false"
        resource-ref="false"
        refresh-home-on-connect-failure="true">
    <jee:environment>
        java.naming.factory.initial=${JAVA.NAMING.FACTORY.INITIAL}
        java.naming.provider.url=${JAVA.NAMING.PROVIDER.URL}
    </jee:environment>
</jee:remote-slsb>
It is the refresh-home-on-connect-failure="true" attribute that tells the Spring container to refresh the connection if the initial context has become stale. That is how it keeps working as long as one cluster member is active.
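The corresponding placeholder values for a clustered WAS would then look something like this (the WebSphere initial context factory is the usual class; hosts and ports are placeholders taken from the question):

JAVA.NAMING.FACTORY.INITIAL=com.ibm.websphere.naming.WsnInitialContextFactory
JAVA.NAMING.PROVIDER.URL=iiop://server1:port,iiop://server2:port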
We have a Spring application that publishes to and listens on queues on a remote application server. My publisher and listener, which are Spring-based, run within our own application server.
One of the problems in our test environments is that the other app server is not up, so when our test application starts and tries to inject a JmsTemplate with its connectionFactory, it blows up because the connection is not valid and our entire application fails to load. This is causing grief for the other developers in our group who have nothing to do with JMS: all they want to do is run and test their code, but the JmsTemplate connectionFactory is down.
Does anyone have a suggestion for making Spring ignore some bad injections so that our application can still start properly?
Thanks
I believe this can be achieved by defining separate Spring profiles and then passing one as a parameter in your test environments while starting your application. You can then mock or ignore any beans there.
Example
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Profile("test")
@Configuration
public class AppConfigTest {
    // define mock or no-op beans for the "test" profile here
}
JVM param / System property
-Dspring.profiles.active=test
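A fuller sketch of that idea, assuming the JMS beans are gated by profiles (ActiveMQ and the broker URL are used purely for illustration):

import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class JmsConfig {

    // Real remote broker, only created when the "test" profile is NOT active.
    @Bean
    @Profile("!test")
    public ConnectionFactory connectionFactory() {
        return new ActiveMQConnectionFactory("tcp://remote-broker:61616"); // placeholder URL
    }

    // Lightweight embedded broker for tests, so the context starts even when the remote server is down.
    @Bean
    @Profile("test")
    public ConnectionFactory testConnectionFactory() {
        return new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
    }

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        return new JmsTemplate(connectionFactory);
    }
}

With this in place, starting the app with -Dspring.profiles.active=test swaps the remote connection factory for the embedded one, and developers who have nothing to do with JMS can run and test their code without the remote app server.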