@RetryableTopic without @KafkaListener - spring-boot

I would like to use the functionality offered by the @RetryableTopic annotation, but
I do not have any @KafkaListener annotation.
I use the createContainer(final String... topics) method from ConcurrentKafkaListenerContainerFactory<String, Serializable> and set up a listener using
setupMessageListener on the obtained container.
Do you know if it is possible to use @RetryableTopic in the case of dynamic container creation?

The feature doesn't currently support creating retryable topics outside of the @KafkaListener scope.
Feel free to raise a feature request in the project's GitHub repository so this can be kept in mind for Spring Kafka's next major version, due later this year.
Thanks
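In the meantime, a possible workaround is blocking retries with a dead-letter topic, which does work with dynamically created containers. A minimal sketch, assuming spring-kafka 2.8+, an existing ConcurrentKafkaListenerContainerFactory named factory, a KafkaTemplate named kafkaTemplate, and a process(...) method (all names illustrative); note this is blocking retry, not the non-blocking retryable-topics feature:

ConcurrentMessageListenerContainer<String, Serializable> container =
        factory.createContainer("my-topic");
container.setupMessageListener(
        (MessageListener<String, Serializable>) msg -> process(msg));
// Retry each failed record 3 times, 1 second apart, then publish to "my-topic.DLT"
container.setCommonErrorHandler(new DefaultErrorHandler(
        new DeadLetterPublishingRecoverer(kafkaTemplate),
        new FixedBackOff(1000L, 3)));
container.start();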

Related

Issue with Spring Boot GemFire Integration

I am currently working on a project which uses Spring Boot, Apache Kafka, and GemFire integration. In this project I have to subscribe to a topic from Kafka and delete some matching keys from the GemFire Region.
I am able to successfully subscribe to the topic, but while deleting the keys from the GemFire Region it throws a "no such bean" exception when I try to delete from that region. In the GemFire configuration I am using @EnableClusterDefinedRegions. The issue is that Spring has a weird behavior in that it loads the GemFire Regions after the Spring application context is loaded. To overcome this I made a custom repository implementing ApplicationContextAware, overrode setApplicationContext, and wrote a getRegion method where I get the region via context.getBean("Region Name")... but I am still not able to load the required region bean. Can someone suggest something?
Regarding...
The issue is that Spring has a weird behavior that it loads the GemFire Regions after the Spring ApplicationContext is loaded.
Technically (from here, to here, and finally, here), this happens after the ClientCache bean is initialized, not necessarily after the Spring ApplicationContext is (fully) loaded, or rather after the ContextRefreshEvent. It is an issue in your Spring application configuration.
The feature to which you are referring is from Spring Data for Apache Geode, or alternatively VMware Tanzu GemFire (SDG).
The feature is used by declaring the SDG @EnableClusterDefinedRegions annotation (Javadoc) in your Spring application configuration.
The behavior might seem "weird", but is in fact quite necessary.
PREREQUISITE KNOWLEDGE
With Spring configuration, regardless of the source (XML, JavaConfig, Groovy, annotations, or otherwise), there are 2 primary phases: parsing and initialization.
Spring uses a generic, common representation to model the configuration (i.e. BeanDefinition) for each bean defined, declared and managed by the Spring container when parsing the bean definition(s) from any configuration source. This model is then used to create the resolved beans during initialization.
Parsing allows the Spring container to determine (for one) the necessary dependencies between beans and the proper order of initialization on startup.
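As a concrete illustration of the two phases, a minimal sketch (assuming an applicationContext.xml on the classpath) that parses bean definitions without creating any beans, then initializes them:

// Parsing: read bean definitions into the registry; no beans are created yet.
DefaultListableBeanFactory beanFactory = new DefaultListableBeanFactory();
new XmlBeanDefinitionReader(beanFactory)
        .loadBeanDefinitions(new ClassPathResource("applicationContext.xml"));

// The generic BeanDefinition model now exists for every declared bean.
for (String name : beanFactory.getBeanDefinitionNames()) {
    BeanDefinition definition = beanFactory.getBeanDefinition(name);
    System.out.println(name + " -> " + definition.getBeanClassName());
}

// Initialization: eagerly create all non-lazy singletons in dependency order.
beanFactory.preInstantiateSingletons();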
When using SDG's @EnableClusterDefinedRegions annotation, the GemFire/Geode client Spring application (a GemFire/Geode ClientCache application) must be connected to an existing GemFire/Geode cluster, where the Regions have already been defined, to create matching client-side Regions.
In order to connect to a cluster from the client, you would have to have defined (explicitly or implicitly) a connection (or connections) to the cluster using a GemFire/Geode Pool (Javadoc). This Pool (or Pools) is also registered as a bean in the Spring container by SDG.
The ClientCache or client Pool beans contain the metadata used to create connections to the cluster. The connections are necessary to perform Region data access operations, or even determine the Regions that need to be created on the client-side to be able to perform Region data access operations and persist data on the server-side in the first place.
All of this cannot happen until the client Pools are "initialized", thereby forming connections to the cluster where the necessary request can then be made to determine the Regions in the cluster. This is not unlike how the Gfsh list regions command works, in fact. Gfsh must be connected to execute the list regions command.
The main purpose of using SDG's @EnableClusterDefinedRegions annotation is so you do not have to explicitly define client-side ([CACHING_]PROXY) Regions that have already been determined by an (existing) cluster. It is for convenience. But, it doesn't mean there are no (implied) dependencies on the resulting (client) Region imposed by your Spring application that must be carefully considered and ordered.
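For reference, a minimal sketch of such a client configuration (the class name is illustrative; the Locator endpoint shown is an assumption and would typically be set via the spring.data.gemfire.pool.locators property):

@ClientCacheApplication
@EnableClusterDefinedRegions(clientRegionShortcut = ClientRegionShortcut.PROXY)
class GemFireClientConfiguration {

    // Connection to the cluster, e.g. in application.properties:
    // spring.data.gemfire.pool.locators=localhost[10334]
}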
Now...
I suspect your Spring application is using Spring for Apache Kafka (??) to define Kafka Topic subscriptions/listeners to receive messages? Somehow you loosely coupled the Kafka Topic listener receiving messages from the Kafka queue to the GemFire/Geode client Region.
The real question then is, how did you initially get a reference to the client Region from which you delete keys when an event is received from the Kafka topic?
You say that, "I am able to successfully subscribe to the topic, but while deleting the keys from the GemFire Region it throws a 'no such bean' exception when I try to delete from that region."
Do you mean the NoSuchBeanDefinitionException? This Exception is typically thrown on startup when using Spring container dependency injection, such as when defining a @KafkaListener as described here, like so:
@Component
class MyApplicationListeners {

    @Autowired
    @Qualifier("myRegion")
    private Region<String, Object> clientRegion;

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listener(String key) {
        clientRegion.remove(key);
    }
}
However, when you specifically say, "..while deleting the keys from the GemFire Region..", that would imply you were initially doing some sort of lookup (e.g. clientCache.getRegion(..)):
@Component
class MyApplicationListeners {

    @Autowired
    private ApplicationContext applicationContext;

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listener(String key) {
        applicationContext.getBean("myRegion", Region.class).remove(key);
    }
}
Not unlike your attempted workaround using an ApplicationContextAware implementation.
At any rate, you definitely have a bean initialization ordering problem, and I am nearly certain it is caused by a loose coupling between the bean dependencies (not to be confused with "tight coupling in code").
Not knowing all your Spring application configuration details for sure, you can solve this in one of several ways.
First, the easiest and most explicit (obvious and recommended) way to solve this is with an explicit Region bean definition on the client matching the server-side Region:
@Configuration
@EnableClusterDefinedRegions
class MyApplicationConfiguration {

    @Bean("myRegion")
    ClientRegionFactoryBean<String, Object> myRegion(ClientCache cache) {
        ClientRegionFactoryBean<String, Object> myRegion = new ClientRegionFactoryBean<>();
        myRegion.setCache(cache);
        myRegion.setName("myRegion");
        myRegion.setShortcut(ClientRegionShortcut.PROXY);
        return myRegion;
    }

    // other declared application bean definitions
}
Then when the Region is injected by the Spring container in:
@Autowired
@Qualifier("myRegion")
private Region<String, Object> clientRegion;

@KafkaListener(id = "foo", topics = "myTopic")
public void listener(String key) {
    clientRegion.remove(key);
}
It will definitely exist!
SDG's @EnableClusterDefinedRegions annotation is also careful not to stomp on explicit Region bean definitions if a Region bean is already defined (explicitly) in your Spring application configuration, as demonstrated above. Just be careful that the client Region (bean name) matches the server-side Region by "name".
Otherwise, you can play on the fact that the SDG framework attempts to eagerly initialize client Regions from the cluster in a BeanPostProcessor that defines an "order": https://github.com/spring-projects/spring-data-geode/blob/2.7.1/spring-data-geode/src/main/java/org/springframework/data/gemfire/config/annotation/ClusterDefinedRegionsConfiguration.java#L90.
Then, you could simply do:
@Component
@Order(1)
class MyApplicationListeners {

    @Autowired
    @Qualifier("myRegion")
    private Region<String, Object> clientRegion;

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listener(String key) {
        clientRegion.remove(key);
    }
}
This uses the Spring Framework @Order annotation on the MyApplicationListeners class containing the Kafka listener that deletes keys from the cluster/server Region via the client Region.
In this case, no explicit client-side Region bean definition is necessary.
Of course, some other, perhaps non-obvious, dependency on your MyApplicationListeners class in your Spring application configuration could force eager initialization of the MyApplicationListeners class, and you could potentially still hit a NoSuchBeanDefinitionException on startup during DI. In that case, the Spring container must respect dependency order and therefore overrides the @Order definition on the MyApplicationListeners class (bean).
Still, you could also delay the reception of events from the Kafka subscriptions for all topics by setting autoStartup to false; see here. Then, you could subsequently listen for the Spring container's ContextRefreshedEvent to start the Kafka listener container and begin receiving events in your @KafkaListeners once the Spring application is properly initialized. Remember, all automatic client Region bean creation using the SDG @EnableClusterDefinedRegions annotation happens inside a BeanPostProcessor, and all BeanPostProcessors are called by the Spring container before the context is completely refreshed (i.e. the ContextRefreshedEvent). See the Spring Framework documentation for more details on BPPs.
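A minimal sketch of that approach (reusing the listener id "foo" from the snippets above; the coordinator class is illustrative):

@Component
class KafkaStartupCoordinator {

    // The @KafkaListener was declared with autoStartup = "false", e.g.:
    // @KafkaListener(id = "foo", topics = "myTopic", autoStartup = "false")

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @EventListener
    public void onContextRefreshed(ContextRefreshedEvent event) {
        // All BeanPostProcessors (including SDG's Region creation) have run by now.
        registry.getListenerContainer("foo").start();
    }
}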
Anyway, you have a lot of options.

Spring Integration: does the Spring ApplicationContext call @ServiceActivator in config?

I am reading the Spring Integration source code, and I have some questions about the workflow:
When the @SpringBootApplication class calls application.run(), will it directly call beans annotated with @ServiceActivator? For example, in my config file I have:
@Bean
@ServiceActivator(inputChannel = "test")
public MessageHandler myHandler() {
    return new SomeHandler();
}
When application.run() is fired, will the handleRequestMessage() method of SomeHandler be called? Am I understanding it right?
Well, you need to understand that there are two parts to this matter:
Configuration phase, when Spring parses all the annotations to register beans in the ApplicationContext. This is when that @ServiceActivator is consulted and an event-driven endpoint is registered as a bean as well.
Runtime part of the Spring Integration environment. Here the mentioned endpoint is subscribed to the inputChannel, and when a message arrives, the handleRequestMessage() is triggered on that SomeHandler. That's why it is called a "service activator".
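A minimal sketch demonstrating both parts (the "test" channel and SomeHandler come from the question; nothing is called at application.run() time, only when a message is actually sent):

@Configuration
@EnableIntegration
class MyIntegrationConfig {

    @Bean
    public MessageChannel test() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "test")
    public MessageHandler myHandler() {
        // handleRequestMessage() runs only when a message arrives on "test"
        return new SomeHandler();
    }
}

// Elsewhere, sending a message is what activates the service:
// test.send(MessageBuilder.withPayload("hello").build());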
You probably need to make yourself familiar with EIP first of all: https://www.enterpriseintegrationpatterns.com/ to understand what messaging is and why there are endpoints and channels in between. Then go to the Spring Integration docs: https://docs.spring.io/spring-integration/docs/current/reference/html/index.html and realize for yourself that this framework provides a bunch of out-of-the-box components for this or that EIP, which may be registered automatically in the application context by just declaring some annotation.

Scenario when we may need @Configurable in Spring?

I have a question about the need for @Configurable. I have gone through the blog that explains how to use @Configurable. But the question that comes to my mind is: what are the scenarios where we need to use @Configurable? I can think of two where it can be useful:
In a legacy project, when we are already creating a bean with the new operator and we want to make it Spring-managed.
In a new project, when we want to ensure that even if a developer creates a bean with the new operator, it is still Spring-managed.
Otherwise, for new beans we can always declare them in applicationContext.xml, and I do not see any need to declare them @Configurable.
Please let me know if the above understanding is correct or if I am missing something.
UPDATE: Basically, as per my understanding, @Configurable is generally used to inject dependencies when creating an object with the new operator. But why would I be creating an object with the new operator when I am using Spring?
The @Configurable annotation is meant for injecting dependencies in domain-driven applications. That means, in such applications, the domain objects interact with each other to perform a certain operation.
Take the following example:
In an invoicing application, the Invoice class provides a constructor to create it, then it has methods to validate it, and finally to persist it. Now, to persist the invoice, you need a DAO implementation available within the invoice. This is a dependency you would like to be injected or located. With Spring's @Configurable, whenever an invoice is created using the new operator, the appropriate DAO implementation will get injected and can be used for all persist operations.
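A minimal sketch of that example (class and DAO names are illustrative; note that @Configurable also requires AspectJ weaving, e.g. @EnableSpringConfigured with load-time or compile-time weaving):

@Configurable
public class Invoice {

    @Autowired
    private transient InvoiceDao invoiceDao; // injected even for 'new Invoice(...)'

    private final BigDecimal amount;

    public Invoice(BigDecimal amount) {
        this.amount = amount;
    }

    public void validate() {
        if (amount == null || amount.signum() <= 0) {
            throw new IllegalStateException("Invoice amount must be positive");
        }
    }

    public void persist() {
        invoiceDao.save(this); // DAO was injected by AspectJ weaving, not by a factory
    }
}

// Usage: new Invoice(new BigDecimal("99.95")).persist(); // invoiceDao is non-null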
I had a more real-world scenario where I used the @Configurable annotation, as described here.

spring using CGLIB proxy even when class implements interface

I'm trying to use Spring AOP to intercept methods of my GWT-RPC application (using the GWT-Server library, so the RPC service doesn't extend RemoteServiceServlet). When I deploy my WAR to Tomcat and start the application, CGLIB fails for some reason. But I don't understand why CGLIB is being used for proxying in the first place. Since my RPC class implements an interface, shouldn't it be using JDK dynamic proxies?
Is there anything I need to do to debug this issue? Kindly advise.
Note: FYI, Spring encounters the following exception, but I believe that's a different problem; I'm unable to understand why a CGLIB proxy is in the picture at all.
Caused by: net.sf.cglib.core.CodeGenerationException: net.sf.ehcache.CacheException-->Another unnamed CacheManager already exists
in the same VM. Please provide unique names for each CacheManager in the config
or do one of following:
1. Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary
2. Shutdown the earlier cacheManager before creating new one with same name.
Answering for the sake of other (rare) folks who might make the same mistake.
The aspect setup for Spring AOP wasn't correct and was in fact targeting almost all the classes in the context, which is why EhCache started causing problems: there was more than one CacheManager instance (CGLIB proxies were created because CacheManager doesn't implement an interface).
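A hypothetical sketch of the fix, narrowing the pointcut so only the intended RPC beans are proxied (the package name is illustrative, not from the original application):

@Aspect
@Component
public class RpcTimingAspect {

    // Match only public methods of beans in the RPC package; beans outside it
    // (e.g. the EhCache CacheManager) are no longer proxied at all.
    @Around("within(com.example.rpc..*) && execution(public * *(..))")
    public Object timeRpcCall(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return pjp.proceed();
        } finally {
            System.out.println(pjp.getSignature() + " took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }
}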

Using Spring AOP with Quartz scheduler

I am using the Quartz scheduler for scheduling purposes in my project. I need to gather statistics like when, for how long, and how many times a job was run. I want to use Spring AOP for this. To that end, I am making the Job classes Spring-managed beans. Spring creates a proxy class for each of the Job classes. But now when Quartz tries to execute a Spring-managed Job, I get an InstantiationException for the proxy class created for the Job by Spring.
org.quartz.SchedulerException: Problem instantiating class '$Proxy6'
[See nested exception: java.lang.InstantiationException: $Proxy6]
Can anybody please suggest a solution for this problem?
If you use Quartz directly (not via Spring's @Scheduled annotation), you can ask Quartz directly for the statistics. Many of them are already implemented in Quartz.
Because the Quartz Job class is managed by the Quartz container, not the Spring container, Spring AOP cannot achieve your goal. For your purpose, there are 2 ways you can work on this (sketches of both follow below):
Quartz has a built-in listener mechanism; you can use a global listener to do what you want, as the AOP would. For more information about listeners, refer to the Quartz documentation.
If you insist on Spring AOP, you have to customize the job class instantiation process so that the job class is managed by the Spring container. One approach is to write your own JobFactory that extends SpringBeanJobFactory and overrides the createJobInstance() method. If you want to know more about this, please comment and I will write more detail.
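A minimal sketch of the listener approach (assuming Quartz 2.x; the listener name and timing key are illustrative):

public class TimingJobListener implements JobListener {

    @Override
    public String getName() {
        return "timingJobListener";
    }

    @Override
    public void jobToBeExecuted(JobExecutionContext context) {
        context.put("startTime", System.currentTimeMillis()); // record start time
    }

    @Override
    public void jobExecutionVetoed(JobExecutionContext context) {
    }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException e) {
        long elapsed = System.currentTimeMillis() - (Long) context.get("startTime");
        System.out.println(context.getJobDetail().getKey() + " ran for " + elapsed + " ms");
    }
}

// Register it globally so it observes every job:
// scheduler.getListenerManager().addJobListener(new TimingJobListener());

And a sketch of the second approach: a custom JobFactory that hands each Quartz-created job to Spring for dependency injection (a widely used pattern, shown here under the assumption that autowiring the job's fields is sufficient for your purposes):

public class AutowiringSpringBeanJobFactory extends SpringBeanJobFactory
        implements ApplicationContextAware {

    private AutowireCapableBeanFactory beanFactory;

    @Override
    public void setApplicationContext(ApplicationContext context) {
        this.beanFactory = context.getAutowireCapableBeanFactory();
    }

    @Override
    protected Object createJobInstance(TriggerFiredBundle bundle) throws Exception {
        Object job = super.createJobInstance(bundle); // Quartz-style instantiation
        beanFactory.autowireBean(job);                // then Spring injects dependencies
        return job;
    }
}

// Wire it in via Spring's SchedulerFactoryBean:
// schedulerFactoryBean.setJobFactory(new AutowiringSpringBeanJobFactory());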
