Spring AMQP with two ConnectionFactories

I have an application with two ConnectionFactory instances (different brokers). They are configured in Java config classes:
       
       
@Bean
public ConnectionFactory ...
@Bean
public Queue ...
...
In the RabbitTemplate you can indicate which connection factory to use, but not on the queues or the exchanges, so they are being created on both connections.
Do I have to use RabbitAdmin to create queues on only one of the two connections? Is there any other way?

See the documentation: Conditional Declaration.
Starting with the 1.2 release, it is possible to conditionally declare these elements. This is particularly useful when an application connects to multiple brokers and needs to specify with which broker(s) a particular element should be declared.
You need a RabbitAdmin for each connection factory and use declared-by to indicate which admin(s) should declare each queue/exchange/binding.
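For example, a minimal Java-config sketch of that arrangement (the broker addresses and bean names here are assumptions, not from the question):

import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Bean
    public CachingConnectionFactory connectionFactory1() {
        return new CachingConnectionFactory("broker1.example.com");
    }

    @Bean
    public CachingConnectionFactory connectionFactory2() {
        return new CachingConnectionFactory("broker2.example.com");
    }

    @Bean
    public RabbitAdmin admin1() {
        return new RabbitAdmin(connectionFactory1());
    }

    @Bean
    public RabbitAdmin admin2() {
        return new RabbitAdmin(connectionFactory2());
    }

    @Bean
    public Queue myQueue() {
        Queue queue = new Queue("myQueue");
        // Java-config equivalent of the XML 'declared-by' attribute:
        // only admin1 (and therefore only broker 1) declares this queue
        queue.setAdminsThatShouldDeclare(admin1());
        return queue;
    }
}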

Related

Issue with Spring Boot Gemfire Integration

I am currently working on a project which uses Spring Boot, Apache Kafka and GemFire integration. In this project I have to subscribe to a Kafka topic and delete some matching keys from the GemFire Region.
I am able to successfully subscribe to the topic, but while deleting the keys from the GemFire Region it throws a "no such bean" exception when I try to delete from that region. In the GemFire configuration I am using @EnableClusterDefinedRegions. The issue is that Spring has a weird behavior that it loads the GemFire Regions after the Spring application context is loaded. To overcome this I made a custom repository implementing ApplicationContextAware, overrode setApplicationContext and wrote a method getRegion where I get the region via context.getBean("Region Name")... but still I am not able to load the required region bean. Can someone suggest something?
Regarding...
The issue is that Spring has a weird behavior that it loads the GemFire Regions after the Spring ApplicationContext is loaded.
Technically (from here, to here, and finally, here), this happens after the ClientCache bean is initialized, not necessarily after the Spring ApplicationContext is (fully) loaded, or rather after the ContextRefreshEvent. It is an issue in your Spring application configuration.
The feature to which you are referring is from Spring Data for Apache Geode, or alternatively VMware Tanzu GemFire (SDG).
The feature is used by declaring the SDG @EnableClusterDefinedRegions annotation (Javadoc) in your Spring application configuration.
The behavior might seem "weird", but is in fact quite necessary.
PREREQUISITE KNOWLEDGE
With Spring configuration, regardless of source: [XML, JavaConfig, Groovy, Annotations or otherwise], there are 2 primary phases: parsing and initialization.
Spring uses a generic, common representation to model the configuration (i.e. BeanDefinition) for each bean defined, declared and managed by the Spring container when parsing the bean definition(s) from any configuration source. This model is then used to create the resolved beans during initialization.
Parsing allows the Spring container to determine (for one) the necessary dependencies between beans and the proper order of initialization on startup.
When using SDG's #EnableClusterDefinedRegions annotation, the GemFire/Geode client Spring application (a GemFire/Geode ClientCache application) must be connected to an existing GemFire/Geode cluster, where the Regions have already been defined, to create matching client-side Regions.
In order to connect to a cluster from the client, you would have to have defined (explicitly or implicitly) a connection (or connections) to the cluster using a GemFire/Geode Pool (Javadoc). This Pool (or Pools) is also registered as a bean in the Spring container by SDG.
The ClientCache or client Pool beans contain the metadata used to create connections to the cluster. The connections are necessary to perform Region data access operations, or even determine the Regions that need to be created on the client-side to be able to perform Region data access operations and persist data on the server-side in the first place.
All of this cannot happen until the client Pools are "initialized", thereby forming connections to the cluster where the necessary request can then be made to determine the Regions in the cluster. This is not unlike how the Gfsh list regions command works, in fact. Gfsh must be connected to execute the list regions command.
The main purpose of using SDG's #EnableClusterDefinedRegions annotation is so you do not have to explicitly define client-side ([CACHING_]PROXY) Regions that have already been determined by an (existing) cluster. It is for convenience. But, it doesn't mean there are no (implied) dependencies on the resulting (client) Region imposed by your Spring application that must be carefully considered and ordered.
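For context, a minimal client configuration using the annotation might look like the following sketch (the region shortcut shown is an assumption; the client still needs a Locator/Pool pointing at the existing cluster):

import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.EnableClusterDefinedRegions;

// the client must be able to connect to the existing cluster (e.g. via a
// configured Locator/Pool) before the cluster-defined Regions can be discovered
@ClientCacheApplication
@EnableClusterDefinedRegions(clientRegionShortcut = ClientRegionShortcut.PROXY)
class GemFireClientConfiguration {
}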
Now...
I suspect your Spring application is using Spring for Apache Kafka (??) to define Kafka Topic subscriptions/listeners to receive messages? Somehow you loosely coupled the Kafka Topic listener receiving messages from the Kafka queue to the GemFire/Geode client Region.
The real question then is, how did you initially get a reference to the client Region from which you delete keys when an event is received from the Kafka topic?
You say that, "I am able to successfully subscribe to the topic, but while deleting the keys from the GemFire Region it throws a 'no such bean' exception when I try to delete from that region."
Do you mean the NoSuchBeanDefinitionException? This Exception is typically thrown on startup when using Spring container dependency injection, such as when defining a @KafkaListener (as described here), like so:
@Component
class MyApplicationListeners {

    @Autowired
    @Qualifier("myRegion")
    private Region<String, Object> clientRegion;

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listener(String key) {
        clientRegion.remove(key);
    }
}
However, when you specifically say, "...while deleting the keys from the GemFire Region...", that would imply you were initially doing some sort of lookup (e.g. clientCache.getRegion(..)):
@Component
class MyApplicationListeners {

    @Autowired
    private ApplicationContext applicationContext;

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listener(String key) {
        applicationContext.getBean("myRegion", Region.class).remove(key);
    }
}
Not unlike your attempted workaround using an ApplicationContextAware implementation.
At any rate, you definitely have a bean initialization ordering problem, and I am nearly certain it is caused by a loose coupling between the bean dependencies (not to be confused with "tight coupling in code").
Without knowing all of your Spring application configuration details, you can solve this in one of several ways.
First, the easiest and most explicit (and recommended) way to solve this is with an explicit client-side Region bean definition matching the server-side Region:
@Configuration
@EnableClusterDefinedRegions
class MyApplicationConfiguration {

    @Bean("myRegion")
    ClientRegionFactoryBean<String, Object> myRegion(ClientCache cache) {
        ClientRegionFactoryBean<String, Object> myRegion = new ClientRegionFactoryBean<>();
        myRegion.setCache(cache);
        myRegion.setName("myRegion");
        myRegion.setShortcut(ClientRegionShortcut.PROXY);
        return myRegion;
    }

    // other declared application bean definitions
}
Then when the Region is injected by the Spring container in:
@Autowired
@Qualifier("myRegion")
private Region<String, Object> clientRegion;

@KafkaListener(id = "foo", topics = "myTopic")
public void listener(String key) {
    clientRegion.remove(key);
}
It will definitely exist!
SDG's @EnableClusterDefinedRegions is also careful not to stomp on explicit Region bean definitions if a Region bean is already defined (explicitly) in your Spring application configuration, as demonstrated above. Just be careful that the client Region (bean name) matches the server-side Region by "name".
Otherwise, you can play on the fact that the SDG framework attempts to eagerly initialize the client Regions from the cluster in a BeanPostProcessor that defines an "order"; see https://github.com/spring-projects/spring-data-geode/blob/2.7.1/spring-data-geode/src/main/java/org/springframework/data/gemfire/config/annotation/ClusterDefinedRegionsConfiguration.java#L90.
Then, you could simply do:
@Component
@Order(1)
class MyApplicationListeners {

    @Autowired
    @Qualifier("myRegion")
    private Region<String, Object> clientRegion;

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listener(String key) {
        clientRegion.remove(key);
    }
}
Using the Spring Framework @Order annotation on the MyApplicationListeners class containing your Kafka Listener used to delete keys from the cluster/server Region using the client Region.
In this case, no explicit client-side Region bean definition is necessary.
Of course, other, possibly non-obvious, dependencies on your MyApplicationListeners class in your Spring application configuration could force an eager initialization of the MyApplicationListeners class, and you could potentially still hit a NoSuchBeanDefinitionException on startup during DI. In this case, the Spring container must respect dependency order and therefore overrides the @Order definition on the MyApplicationListeners class (bean).
Still, you could also delay the reception of events from the Kafka topic subscriptions by setting autoStartup to false; see here. Then, you could subsequently listen for the Spring container's ContextRefreshedEvent to start the Kafka listener containers, so your @KafkaListeners only begin receiving events once the Spring application is properly initialized. Remember, all automatic client Region bean creation using the SDG @EnableClusterDefinedRegions annotation happens inside a BeanPostProcessor, and all BeanPostProcessors are called by the Spring container before the context is completely refreshed (i.e. before the ContextRefreshedEvent). See the Spring Framework documentation for more details on BPPs.
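A rough sketch of that last approach (the listener id and the use of the KafkaListenerEndpointRegistry are assumptions):

import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
class DelayedStartKafkaListeners {

    // autoStartup = "false" keeps the container from polling until it is started
    @KafkaListener(id = "foo", topics = "myTopic", autoStartup = "false")
    public void listener(String key) {
        // ... remove the key from the (now initialized) client Region ...
    }

    // start the container only after the ApplicationContext has been refreshed,
    // i.e. after the cluster-defined client Region beans have been registered
    @EventListener
    public void onContextRefreshed(ContextRefreshedEvent event) {
        KafkaListenerEndpointRegistry registry =
                event.getApplicationContext().getBean(KafkaListenerEndpointRegistry.class);
        registry.getListenerContainer("foo").start();
    }
}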
Anyway, you have a lot of options.

Spring Boot Kafka Multiple Consumers with different properties configuration using application.yml/properties

I have seen examples where we have a Java configuration class, define multiple listener container factories, and pass the required containerFactory to @KafkaListener. But I am exploring whether there are any ways to achieve the same using Spring Boot's Kafka auto-configuration via application.yml/properties.
No; Boot will only auto-configure one set of infrastructure; if you need multiple, you need to define them as beans.
However, with recent versions (since 2.3.4), you can add a listener container customizer to the factory so you can customize each listener container, even though they are created by the same factory; some properties can also be overridden on the #KafkaListener annotation itself.
Example:
@Component
class Customizer {

    public Customizer(ConcurrentKafkaListenerContainerFactory<?, ?> factory) {
        factory.setContainerCustomizer(container -> {
            if (container.getContainerProperties().getGroupId().equals("slowGroup")) {
                container.getContainerProperties().setIdleBetweenPolls(60_000);
            }
        });
    }
}
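If you really do need two fully separate sets of infrastructure (for example, different deserializers or bootstrap servers), a minimal sketch of defining the second factory as beans (the property values and bean names are assumptions):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
class SecondKafkaConfig {

    @Bean
    ConsumerFactory<String, String> secondConsumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "other-broker:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> secondFactory(
            ConsumerFactory<String, String> secondConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(secondConsumerFactory);
        return factory;
    }
}

A @KafkaListener can then select it with containerFactory = "secondFactory".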

Spring - When a Destination JndiObjectFactoryBean is cached, does it keep any connection open to the JMS broker?

We configure our JMS destinations via JNDI lookup as follows:
@Bean
JndiObjectFactoryBean myTopic(@Value("${topic}") String topic,
        JndiTemplate jndiTemplate) {
    JndiObjectFactoryBean jndiObjectFactoryBean = new JndiObjectFactoryBean();
    jndiObjectFactoryBean.setJndiTemplate(jndiTemplate);
    jndiObjectFactoryBean.setJndiName(topic);
    return jndiObjectFactoryBean;
}
On initialisation of this bean, Spring confirms the object exists and caches it for use later. Does the caching of this Destination involve a persistent connection being created to our broker as well? Or is the connection only physically created when our CachingConnectionFactory is instantiated?
The (only and shared) connection is created when you call createConnection() for the first time on your CachingConnectionFactory instance, and released on the call to destroy() or resetConnection(), as stated in the contract (CachingConnectionFactory inherits from SingleConnectionFactory):
A JMS ConnectionFactory adapter that returns the same Connection from all createConnection() calls, and ignores calls to Connection.close()
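In other words, the JNDI lookup of the Destination only resolves and caches the JMS object reference; it does not open a broker connection by itself. A small sketch of where the physical connection actually originates (the JNDI name here is an assumption):

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jndi.JndiObjectFactoryBean;
import org.springframework.jndi.JndiTemplate;

@Bean
JndiObjectFactoryBean targetConnectionFactory(JndiTemplate jndiTemplate) {
    // resolves the ConnectionFactory from JNDI; still no physical connection
    JndiObjectFactoryBean factoryBean = new JndiObjectFactoryBean();
    factoryBean.setJndiTemplate(jndiTemplate);
    factoryBean.setJndiName("jms/myConnectionFactory");
    return factoryBean;
}

@Bean
CachingConnectionFactory cachingConnectionFactory(ConnectionFactory targetConnectionFactory) {
    // the single shared connection is only established on the first
    // createConnection() call, e.g. when a JmsTemplate sends a message
    // or a listener container starts
    return new CachingConnectionFactory(targetConnectionFactory);
}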

how to add several activemq NetworkConnectors in spring boot with java config, not configured with XML file

Usually, we add the networkConnectors configuration in activemq.xml before we start the ActiveMQ service, as below:
<networkConnectors>
    <networkConnector uri="static:(tcp://localhost:62001)"/>
</networkConnectors>
But this time, I am using Spring Boot with ActiveMQ embedded, and I want to configure more networkConnectors dynamically while the broker is running, so I cannot add these in activemq.xml and need to configure them with Java code in Spring Boot. I don't know how to implement this.
You define the broker bean and add what you want, as you would do in XML:
@Bean
public BrokerService broker() throws Exception {
    BrokerService broker = new BrokerService();
    broker.addConnector("tcp://localhost:5671");
    broker.addNetworkConnector("static:(tcp://localhost:62001)");
    return broker;
}
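If you need to add a network connector dynamically while the broker is already running, something along these lines should work (the URI and the way you obtain the BrokerService are assumptions), since BrokerService also exposes addNetworkConnector at runtime:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

// call this with the BrokerService bean, e.g. injected where you need it
public void addConnectorAtRuntime(BrokerService broker, String uri) throws Exception {
    NetworkConnector connector = broker.addNetworkConnector(uri);
    // a connector added after the broker has started must be started explicitly
    connector.start();
}

For example: addConnectorAtRuntime(broker, "static:(tcp://otherhost:62001)");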

Spring boot activemq overriding the connection factory

I am new to Spring Boot and I am trying to look up my own connection factory instead of using the default ConnectionFactory which Spring Boot provides, and also trying to look up an already defined queue without using dynamic queues.
How can I do that?
Should I add a jndi.properties file and define it there so I can look it up?
Can someone suggest?
The Spring Integration configuration by default is looking for a Spring bean called ‘connectionFactory’. Spring Boot, by default, creates the JMS connection factory using the name ‘jmsConnectionFactory’.
@Bean
public ConnectionFactory jmsConnectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost");
    return connectionFactory;
}
https://github.com/spring-projects/spring-boot/blob/v1.5.9.RELEASE/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/jms/activemq/ActiveMQConnectionFactoryConfiguration.java
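If you do want to resolve your own connection factory and an already defined queue from JNDI (e.g. driven by a jndi.properties file on the classpath), a rough sketch (the JNDI names are assumptions):

import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.NamingException;

import org.springframework.context.annotation.Bean;
import org.springframework.jndi.JndiTemplate;

@Bean
public JndiTemplate jndiTemplate() {
    // InitialContext settings (provider URL, initial context factory)
    // can be supplied via a jndi.properties file on the classpath
    return new JndiTemplate();
}

@Bean
public ConnectionFactory jmsConnectionFactory(JndiTemplate jndiTemplate) throws NamingException {
    // Boot's auto-configured connection factory backs off when this bean is present
    return jndiTemplate.lookup("jms/myConnectionFactory", ConnectionFactory.class);
}

@Bean
public Queue myQueue(JndiTemplate jndiTemplate) throws NamingException {
    // look up the already defined queue instead of using dynamic queues
    return jndiTemplate.lookup("jms/myQueue", Queue.class);
}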
