Issue with Spring Boot GemFire Integration

I am currently working on a project that uses Spring Boot, Apache Kafka, and GemFire integration. In this project I have to subscribe to a topic from Kafka and delete some matching keys from the GemFire Region.
I am able to successfully subscribe to the topic, but when I try to delete the keys from the GemFire Region it throws a "no such bean" exception. In the GemFire configuration I am using @EnableClusterDefinedRegions. The issue is that Spring has a weird behavior that it loads the GemFire Regions after the Spring ApplicationContext is loaded. To overcome this I made a custom repository implementing ApplicationContextAware, overrode setApplicationContext, and wrote a getRegion method where I get the Region via context.getBean("Region Name")... but I am still not able to load the required Region bean. Can someone suggest something?

Regarding...
The issue is that Spring has a weird behavior that it loads the GemFire Regions after the Spring ApplicationContext is loaded.
Technically (from here, to here, and finally, here), this happens after the ClientCache bean is initialized, not necessarily after the Spring ApplicationContext is (fully) loaded, or rather after the ContextRefreshedEvent. It is an issue in your Spring application configuration.
The feature to which you are referring is from Spring Data for Apache Geode, or alternatively VMware Tanzu GemFire (SDG).
The feature is used by declaring the SDG @EnableClusterDefinedRegions annotation (Javadoc) in your Spring application configuration.
The behavior might seem "weird", but is in fact quite necessary.
PREREQUISITE KNOWLEDGE
With Spring configuration, regardless of source (XML, JavaConfig, Groovy, annotations, or otherwise), there are two primary phases: parsing and initialization.
Spring uses a generic, common representation to model the configuration (i.e. BeanDefinition) for each bean defined, declared and managed by the Spring container when parsing the bean definition(s) from any configuration source. This model is then used to create the resolved beans during initialization.
Parsing allows the Spring container to determine (for one) the necessary dependencies between beans and the proper order of initialization on startup.
When using SDG's @EnableClusterDefinedRegions annotation, the GemFire/Geode client Spring application (a GemFire/Geode ClientCache application) must be connected to an existing GemFire/Geode cluster, where the Regions have already been defined, to create matching client-side Regions.
In order to connect to a cluster from the client, you would have to have defined (explicitly or implicitly) a connection (or connections) to the cluster using a GemFire/Geode Pool (Javadoc). This Pool (or Pools) is also registered as a bean in the Spring container by SDG.
The ClientCache or client Pool beans contain the metadata used to create connections to the cluster. The connections are necessary to perform Region data access operations, or even determine the Regions that need to be created on the client-side to be able to perform Region data access operations and persist data on the server-side in the first place.
All of this cannot happen until the client Pools are "initialized", thereby forming connections to the cluster where the necessary request can then be made to determine the Regions in the cluster. This is not unlike how the Gfsh list regions command works, in fact. Gfsh must be connected to execute the list regions command.
The main purpose of using SDG's @EnableClusterDefinedRegions annotation is so you do not have to explicitly define client-side ([CACHING_]PROXY) Regions that have already been determined by an (existing) cluster. It is for convenience. But it doesn't mean there are no (implied) dependencies on the resulting (client) Regions imposed by your Spring application that must be carefully considered and ordered.
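For context, a minimal sketch of such a client configuration, assuming a locator on localhost:10334 (the host, port, and class name here are illustrative, not from the original question):
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.EnableClusterDefinedRegions;

@Configuration
@ClientCacheApplication(locators = @ClientCacheApplication.Locator(host = "localhost", port = 10334))
@EnableClusterDefinedRegions(clientRegionShortcut = ClientRegionShortcut.PROXY)
class GemFireClientConfiguration {
    // SDG creates client-side PROXY Regions matching the cluster's Regions
    // after the ClientCache (and its Pool) bean has been initialized.
}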
Now...
I suspect your Spring application is using Spring for Apache Kafka (??) to define Kafka Topic subscriptions/listeners to receive messages? Somehow you loosely coupled the Kafka Topic listener receiving messages from the Kafka queue to the GemFire/Geode client Region.
The real question then is, how did you initially get a reference to the client Region from which you delete keys when an event is received from the Kafka topic?
You say that, "I am able to successfully subscribe to the topic, but when I try to delete the keys from the GemFire Region it throws a 'no such bean' exception."
Do you mean the NoSuchBeanDefinitionException? This Exception is typically thrown on startup when using Spring container dependency injection, such as when defining a @KafkaListener as described here, like so:
@Component
class MyApplicationListeners {

    @Autowired
    @Qualifier("myRegion")
    private Region<String, Object> clientRegion;

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listener(String key) {
        clientRegion.remove(key);
    }
}
However, when you specifically say, "..while deleting the keys from the GemFire Region..", that would imply you were initially doing some sort of lookup (e.g. clientCache.getRegion(..)):
@Component
class MyApplicationListeners {

    @Autowired
    private ApplicationContext applicationContext;

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listener(String key) {
        applicationContext.getBean("myRegion", Region.class).remove(key);
    }
}
Not unlike your attempted workaround using an ApplicationContextAware implementation.
At any rate, you definitely have a bean initialization ordering problem, and I am nearly certain it is caused by a loose coupling between the bean dependencies (not to be confused with "tight coupling in code").
Not knowing all your Spring application configuration details for sure, you can solve this in one of several ways.
First, the easiest and most explicit (obvious and recommended) way to solve this is with an explicit Region bean definition on the client matching the server-side Region:
@Configuration
@EnableClusterDefinedRegions
class MyApplicationConfiguration {

    @Bean("myRegion")
    ClientRegionFactoryBean<String, Object> myRegion(ClientCache cache) {

        ClientRegionFactoryBean<String, Object> myRegion = new ClientRegionFactoryBean<>();

        myRegion.setCache(cache);
        myRegion.setName("myRegion");
        myRegion.setShortcut(ClientRegionShortcut.PROXY);

        return myRegion;
    }

    // other declared application bean definitions
}
Then when the Region is injected by the Spring container in:
@Autowired
@Qualifier("myRegion")
private Region<String, Object> clientRegion;

@KafkaListener(id = "foo", topics = "myTopic")
public void listener(String key) {
    clientRegion.remove(key);
}
It will definitely exist!
SDG's @EnableClusterDefinedRegions is also careful not to stomp on explicit Region bean definitions if a Region bean is already defined (explicitly) in your Spring application configuration, as demonstrated above. Just be careful that the client Region (bean name) matches the server-side Region by "name".
Otherwise, you can play on the fact that the SDG framework eagerly initializes the client Regions from the cluster in a BeanPostProcessor that defines an "order"; see https://github.com/spring-projects/spring-data-geode/blob/2.7.1/spring-data-geode/src/main/java/org/springframework/data/gemfire/config/annotation/ClusterDefinedRegionsConfiguration.java#L90.
Then, you could simply do:
@Component
@Order(1)
class MyApplicationListeners {

    @Autowired
    @Qualifier("myRegion")
    private Region<String, Object> clientRegion;

    @KafkaListener(id = "foo", topics = "myTopic")
    public void listener(String key) {
        clientRegion.remove(key);
    }
}
Using the Spring Framework @Order annotation on the MyApplicationListeners class containing your Kafka listener used to delete keys from the cluster/server Region using the client Region.
In this case, no explicit client-side Region bean definition is necessary.
Of course, some other, perhaps non-obvious dependency on your MyApplicationListeners class in your Spring application configuration could force an eager initialization of the MyApplicationListeners class, and you could potentially still hit a NoSuchBeanDefinitionException on startup during DI. In that case, the Spring container must respect the dependency order and therefore overrides the @Order definition on the MyApplicationListeners class (bean).
Still, you could also delay the reception of events from the Kafka subscriptions for all topics by setting autoStartup to false; see here. Then, you could subsequently listen for the Spring container's ContextRefreshedEvent to start the Kafka listener container, so your @KafkaListeners only start receiving events once the Spring application is properly initialized. Remember, all automatic client Region bean creation using the SDG @EnableClusterDefinedRegions annotation happens inside a BeanPostProcessor, and all BeanPostProcessors are called by the Spring container before the context is completely refreshed (i.e. before the ContextRefreshedEvent). See the Spring Framework documentation for more details on BPPs.
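A minimal sketch of that last approach, assuming Spring for Apache Kafka and the @KafkaListener id "foo" from the earlier snippets (the class name is illustrative); declare the listener with @KafkaListener(id = "foo", autoStartup = "false", ...) and start it once the context is refreshed:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
class DelayedKafkaListenerStarter implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        // All BeanPostProcessors (including SDG's Region-creating BPP) have run
        // by now, so the client Regions exist before any message is consumed.
        registry.getListenerContainer("foo").start();
    }
}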
Anyway, you have a lot of options.

Related

Spring boot: Instantiating Spring java configuration class

I am converting a Spring legacy application based on XML to Spring Boot with Java-based configuration.
My question is: is it possible to instantiate the same bean class with different bean names?
Example:
@Configuration
public class HelloWorldConfig {

    @Bean
    public HelloWorld helloWorld() {
        return new HelloWorld();
    }
}
Can I have Spring Boot instantiate the above helloWorld bean using dynamically generated names like
helloWorld_1, helloWorld_2, etc. in a loop? I want to control the naming of these beans. The _1, _2 suffixes are something I will provide, and they are actually IP addresses.
All these beans are instances of the same class, HelloWorld.
Here is more context to what I am asking.
I am performing distributed transactions on several data sources. These data sources can be anywhere from 2 to N. Now, I need to instantiate session factories, transaction managers, and DAO impl classes, one for each of these data sources. This is my use case. I am not building a web application, but a console application, just to be clear.
I would really appreciate it if there is a way out of this problem.
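One common way to do this is to register the bean definitions programmatically with a BeanDefinitionRegistryPostProcessor; a minimal sketch, assuming the HelloWorld class above (the address list is made up for illustration):
import java.util.List;

import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.beans.factory.support.BeanDefinitionRegistry;
import org.springframework.beans.factory.support.BeanDefinitionRegistryPostProcessor;
import org.springframework.beans.factory.support.GenericBeanDefinition;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DynamicHelloWorldConfig implements BeanDefinitionRegistryPostProcessor {

    // Hypothetical IP addresses used to derive the bean names.
    private final List<String> addresses = List.of("10.0.0.1", "10.0.0.2");

    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) {
        for (String address : addresses) {
            GenericBeanDefinition definition = new GenericBeanDefinition();
            definition.setBeanClass(HelloWorld.class);
            // Registers beans named "helloWorld_10.0.0.1", "helloWorld_10.0.0.2", ...
            registry.registerBeanDefinition("helloWorld_" + address, definition);
        }
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) {
        // Nothing further to customize here.
    }
}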

ApplicationContext in Spring Boot [duplicate]

I have a Spring Boot app running with Spring Data, MySQL, Spring Security, and MVC. The app is running just fine for me.
However, I keep hearing about ApplicationContext a lot, and I was wondering when I need to use it and what it does. Can someone give me an example and an overview of ApplicationContext and its use?
ApplicationContext is a core interface that the Spring Framework is built on. If you're building a Spring application, you're already using the ApplicationContext. You can gain great insight about this from the Spring Framework Reference Documentation. As per this document, the Spring Framework consists of these modules:
The Context (spring-context) module builds on the solid base provided
by the Core and Beans modules: it is a means to access objects in a
framework-style manner that is similar to a JNDI registry. The Context
module inherits its features from the Beans module and adds support
for internationalization (using, for example, resource bundles), event
propagation, resource loading, and the transparent creation of
contexts by, for example, a Servlet container. The Context module also
supports Java EE features such as EJB, JMX, and basic remoting. The
ApplicationContext interface is the focal point of the Context module.
spring-context-support provides support for integrating common
third-party libraries into a Spring application context, in particular
for caching (EhCache, JCache) and scheduling (CommonJ, Quartz).
Spring's ApplicationContext also inherits the BeanFactory super-interface. So technically, ApplicationContext is capable of everything the BeanFactory interface is capable of, and much more. The BeanFactory interface, along with ApplicationContext, provides the backbone of the Spring IoC container (the core container), which is bean management for your application.
The interface org.springframework.context.ApplicationContext
represents the Spring IoC container and is responsible for
instantiating, configuring, and assembling the aforementioned beans.
The container gets its instructions on what objects to instantiate,
configure, and assemble by reading configuration metadata. The
configuration metadata is represented in XML, Java annotations, or
Java code. It allows you to express the objects that compose your
application and the rich interdependencies between such objects.
ApplicationContext uses an eager loading mechanism, so every bean declared in your application is initialized right away after the application starts, and from then on the ApplicationContext is pretty much read-only.
Initializing a Spring IoC container with custom bean definitions is pretty straightforward.
ApplicationContext context = new ClassPathXmlApplicationContext(new String[] {"daos.xml"});
The following shows the daos.xml file content:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="accountDao"
          class="org.springframework.samples.jpetstore.dao.jpa.JpaAccountDao">
        <!-- additional collaborators and configuration for this bean go here -->
    </bean>

    <bean id="itemDao" class="org.springframework.samples.jpetstore.dao.jpa.JpaItemDao">
        <!-- additional collaborators and configuration for this bean go here -->
    </bean>

    <!-- more bean definitions for data access objects go here -->

</beans>
After that you can access the beans defined in the .xml like this:
JpaItemDao obj = (JpaItemDao) context.getBean("itemDao");
These instances are now initialized and managed by the ApplicationContext. But most users prefer to use the @Bean annotation to define beans and the @Autowired annotation to do the dependency injection, so there is no need to manually feed a bean .xml to a custom-initialized ApplicationContext.
@Configuration
class SampleConfig {

    @Bean
    public JpaItemDao getJpaItemDao() {
        return new JpaItemDao();
    }
}
and inject it in:
@Component
class SampleComponent {

    @Autowired
    private JpaItemDao itemDao;

    public void doSomething() {
        itemDao.save(); // Just an example.
    }
}
Besides bean management, the ApplicationContext does some other important things in the Spring core container. As per the ApplicationContext Javadoc, they are:
Bean factory methods for accessing application components. Inherited from ListableBeanFactory.
The ability to load file resources in a generic fashion. Inherited from the ResourceLoader interface.
The ability to publish events to registered listeners. Inherited from the ApplicationEventPublisher interface.
The ability to resolve messages, supporting internationalization. Inherited from the MessageSource interface.
Inheritance from a parent context. Definitions in a descendant context will always take priority. This means, for example, that a single parent context can be used by an entire web application, while each servlet has its own child context that is independent of that of any other servlet.
Also, check out the sub-interfaces of ApplicationContext that are specifically designed for different use cases, like WebApplicationContext.
ApplicationContext is a core concept (arguably the most important one) of Spring, used also in Spring Boot of course, and ideally hidden from the programmer in both cases, meaning that the programmer should not directly interact with it in the business code of the application.
Technically, it's an interface that has many implementations; the relevant one is picked depending on the environment you're running in and how you configure the application.
So what does it do? It's a class that is basically a registry of all the beans that Spring has loaded. In general, starting up Spring means finding the beans to load and putting them into the application context (this is relevant for singletons only; prototype-scoped beans are not stored in the ApplicationContext).
Why does Spring need it?
For many reasons, to name a few:
To manage the lifecycle of the beans (when the application shuts down, all beans that have a destroy method should be called)
To execute proper injection. When some class A depends on class B, Spring should inject class B into class A. So by the time class A is created, class B should already be created, right? So Spring goes like this:
Creates B
Puts B into application context (registry)
Creates A
For injection: gets B from the application context and injects it into A
// An illustration for the second bullet
class B {}

class A {

    @Autowired
    B b;
}
Now there are other things implemented technically in application context:
Events
Resource Loading
Inheritance of application contexts (advanced stuff, actually)
However now you have an overview of what it is.
A Spring Boot application encapsulates the application context, but you can still access it from many places:
in the main method, when the application starts, the run method returns an application context
you can inject it into a configuration class if you really need to
it's also possible to inject the application context into a business bean, although you shouldn't really do so.
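For example, a minimal sketch of the first option (the class name is illustrative):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        // SpringApplication.run(...) returns the fully refreshed ApplicationContext.
        ConfigurableApplicationContext context =
                SpringApplication.run(DemoApplication.class, args);

        System.out.println("Beans loaded: " + context.getBeanDefinitionCount());
    }
}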
In simple words:
Spring is popular for dependency injection.
All the bean definitions (objects) are created by Spring and maintained in the container, so the whole bean lifecycle is taken care of by the Spring container.
ApplicationContext is an interface that has different implementations to initialize the Spring container.
So ApplicationContext is the reference to the Spring container.
Some popular implementations are:
AnnotationConfigWebApplicationContext,
ClassPathXmlApplicationContext,
FileSystemXmlApplicationContext,
XmlWebApplicationContext.
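For instance, a minimal sketch that bootstraps the container with the annotation-based (non-web) counterpart of these, AnnotationConfigApplicationContext (the config class here is made up):
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

public class ContainerDemo {

    @Configuration
    static class AppConfig {

        @Bean
        public String greeting() {
            return "hello";
        }
    }

    public static void main(String[] args) {
        // The context is the Spring container; closing it releases its resources.
        try (AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(AppConfig.class)) {
            System.out.println(context.getBean("greeting", String.class));
        }
    }
}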
Reference: https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/ApplicationContext.html

Invoking proxied DAO methods from a Spring standalone client: could not initialize proxy - no Session

I have a third-party JAR on my classpath which has some services and DAOs developed on top of Spring 2.0.6 and Hibernate 3.2.6. I need to call some of the services and DAOs.
Using ClassPathXmlApplicationContext I'm able to load the application context and access the services and DAOs. Both the services and DAOs follow the ProxyFactoryBean pattern.
The problem comes when I'm accessing a DAO which has some single-valued associations. When I access an associated entity I get a lazy initialization problem.
To solve this problem, if it were in my own application JAR I would be able to change the fetch type to join, or use Hibernate.initialize() in the DAO impl method.
Is there a way to avoid this problem from the standalone code itself, or any other way to solve this issue without modifying applicationContext.xml and the DAO impl?
You need to put the calling method into one single transaction.
If you have a Spring transactional environment, you can put the calls to the DAO services/repositories in your own service method which is marked as @Transactional (a sketch of this option follows the example below). Or, if transaction support is not enabled but you still have Spring support in your application, you can just use the TransactionTemplate provided by Spring directly:
@Autowired
private PlatformTransactionManager txManager;

TransactionTemplate template = new TransactionTemplate(this.txManager);

template.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // Work done here will be wrapped by a transaction and committed,
        // unless status.setRollbackOnly(true) is called or an exception is thrown.
    }
});
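For the first option (your own @Transactional service wrapping the third-party DAO call), a minimal sketch; the DAO, entity, and method names here are hypothetical stand-ins:
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical stand-ins for the third-party DAO and its entities.
interface LegacyParentDao {
    Parent loadParent(long id);
}

class Parent {
    private Child child; // lazily fetched, single-valued association
    public Child getChild() { return child; }
}

class Child {
    private String name;
    public String getName() { return name; }
}

@Service
class ParentFacade {

    private final LegacyParentDao parentDao;

    ParentFacade(LegacyParentDao parentDao) {
        this.parentDao = parentDao;
    }

    // The Session stays open for the whole method, so accessing the lazy
    // association here avoids "could not initialize proxy - no Session".
    @Transactional(readOnly = true)
    public String childName(long parentId) {
        return parentDao.loadParent(parentId).getChild().getName();
    }
}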
Otherwise you have to handle transactionality manually on your own, depending on the technologies your app is using.

In-memory structure in Spring

I'm a Spring novice user.
I have a database table which is static in nature and contains only a few records. I want a Map-like structure (id - name) that holds two columns of this table. This Map must be loaded/initialized when the web application is started, must be applicable throughout the application's context, independent of user sessions, and must be read-only. This way, I can save a lot of DB queries, as the different operations will simply read from this Map.
While I'm aware of the ServletContextListener etc. of Java EE, I don't know how to achieve the same in Spring. Is a Spring service bean the right place/way to initialize and store such a Map?
Please guide me about the same.
You can create a regular Spring bean exposing a method which loads the data you require from the database and stores it in your map. Annotate this method with @PostConstruct and Spring will ensure that it is called when your application context starts, hence loading your map.
You could use Spring's JdbcTemplate to load your data within this method; see the sketch after the links below.
See the Spring PostConstruct doco for information on the @PostConstruct annotation
See JdbcTemplate doco for information on JdbcTemplate
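A minimal sketch of that idea, assuming a hypothetical lookup_table with id and name columns:
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import javax.annotation.PostConstruct;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class StaticLookupCache {

    private final JdbcTemplate jdbcTemplate;

    private Map<Integer, String> idToName = Collections.emptyMap();

    public StaticLookupCache(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @PostConstruct
    public void load() {
        Map<Integer, String> map = new HashMap<>();
        // The table and column names are illustrative.
        jdbcTemplate.query("SELECT id, name FROM lookup_table", rs -> {
            map.put(rs.getInt("id"), rs.getString("name"));
        });
        this.idToName = Collections.unmodifiableMap(map); // read-only thereafter
    }

    public String nameFor(int id) {
        return idToName.get(id);
    }
}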
You can configure lists, sets and maps in a Spring XML configuration. See here for more examples.

How do you use Spring Data JPA outside of a Spring Container?

I'm trying to wire up Spring Data JPA objects manually so that I can generate DAO proxies (aka Repositories) - without using a Spring bean container.
Inevitably, I will be asked why I want to do this: it is because our project is already using Google Guice (and on the UI using Gin with GWT), and we don't want to maintain another IoC container configuration, or pull in all the resulting dependencies. I know we might be able to use Guice's SpringIntegration, but this would be a last resort.
It seems that everything is available to wire the objects up manually, but since it's not well documented, I'm having a difficult time.
According to the Spring Data user's guide, using repository factories standalone is possible. Unfortunately, the example shows RepositoryFactorySupport, which is an abstract class. After some searching I managed to find JpaRepositoryFactory.
JpaRepositoryFactory actually works fairly well, except it does not automatically create transactions. Transactions must be managed manually, or nothing will get persisted to the database:
entityManager.getTransaction().begin();
repositoryInstance.save(someJpaObject);
entityManager.getTransaction().commit();
The problem turned out to be that @Transactional annotations are not applied automatically and need the help of a TransactionInterceptor.
Thankfully, the JpaRepositoryFactory can take a callback to add more AOP advice to the generated Repository proxy before returning:
final JpaTransactionManager xactManager = new JpaTransactionManager(emf);
final JpaRepositoryFactory factory = new JpaRepositoryFactory(emf.createEntityManager());

factory.addRepositoryProxyPostProcessor(new RepositoryProxyPostProcessor() {
    @Override
    public void postProcess(ProxyFactory factory) {
        factory.addAdvice(new TransactionInterceptor(xactManager, new AnnotationTransactionAttributeSource()));
    }
});
This is where things are not working out so well. Stepping through the debugger in the code, the TransactionInterceptor is indeed creating a transaction - but on the wrong EntityManager. Spring manages the active EntityManager by looking at the currently executing thread. The TransactionInterceptor does this and sees there is no active EntityManager bound to the thread, and decides to create a new one.
However, this new EntityManager is not the same instance that was created and passed into the JpaRepositoryFactory constructor, which requires an EntityManager. The question is, how do I make the TransactionInterceptor and the JpaRepositoryFactory use the same EntityManager?
Update:
While writing this up, I found out how to solve the problem, but it still may not be the ideal solution. I will post this solution as a separate answer. I would be happy to hear any suggestions on a better way to use Spring Data JPA standalone than how I've solved it.
The general principle behind the design of JpaRepositoryFactory and the corresponding Spring integration JpaRepositoryFactoryBean is the following:
We're assuming you run your application inside a managed JPA runtime environment, not caring about which one.
That's the reason we rely on an injected EntityManager rather than an EntityManagerFactory. By definition, the EntityManager is not thread-safe. So if we dealt with an EntityManagerFactory directly, we would have to rewrite all the resource-managing code that a managed runtime environment (like Spring or EJB) would provide for you.
To integrate with the Spring transaction management we use Spring's SharedEntityManagerCreator that actually does the transaction resource binding magic you've implemented manually. So you probably want to use that one to create EntityManager instances from your EntityManagerFactory. If you want to activate the transactionality at the repository beans directly (so that a call to e.g. repo.save(…) creates a transaction if none is already active) have a look at the TransactionalRepositoryProxyPostProcessor implementation in Spring Data Commons. It actually activates transactions when Spring Data repositories are used directly (e.g. for repo.save(…)) and slightly customizes the transaction configuration lookup to prefer interfaces over implementation classes to allow repository interfaces to override transaction configuration defined in SimpleJpaRepository.
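A minimal sketch of that suggestion (the persistence unit name matches the one used in the answer below and is illustrative):
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

import org.springframework.data.jpa.repository.support.JpaRepositoryFactory;
import org.springframework.orm.jpa.SharedEntityManagerCreator;

public class StandaloneRepositoryBootstrap {

    public static JpaRepositoryFactory createFactory() {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("com.foo.model");

        // A shared, transaction-aware EntityManager proxy; it resolves the actual
        // EntityManager bound to the current thread/transaction on each call.
        EntityManager sharedEntityManager = SharedEntityManagerCreator.createSharedEntityManager(emf);

        return new JpaRepositoryFactory(sharedEntityManager);
    }
}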
I solved this by manually binding the EntityManager and EntityManagerFactory to the executing thread, before creating repositories with the JpaRepositoryFactory. This is accomplished using the TransactionSynchronizationManager.bindResource method:
emf = Persistence.createEntityManagerFactory("com.foo.model", properties);
em = emf.createEntityManager();

// Create your transaction manager and RepositoryFactory
final JpaTransactionManager xactManager = new JpaTransactionManager(emf);
final JpaRepositoryFactory factory = new JpaRepositoryFactory(em);

// Make sure calls to the repository instance are intercepted for annotated transactions
factory.addRepositoryProxyPostProcessor(new RepositoryProxyPostProcessor() {
    @Override
    public void postProcess(ProxyFactory factory) {
        factory.addAdvice(new TransactionInterceptor(xactManager, new MatchAlwaysTransactionAttributeSource()));
    }
});

// Create your repository proxy instance
FooRepository repository = factory.getRepository(FooRepository.class);

// Bind the same EntityManager used to create the repository to the thread
TransactionSynchronizationManager.bindResource(emf, new EntityManagerHolder(em));

try {
    repository.save(someInstance); // Done in a transaction using 1 EntityManager
} finally {
    // Make sure to unbind when done with the repository instance
    TransactionSynchronizationManager.unbindResource(emf);
}
There must be a better way though. It seems strange that the RepositoryFactory was designed to use an EntityManager instead of an EntityManagerFactory. I would expect that it would first look to see if an EntityManager is bound to the thread, and then either create a new one and bind it, or use the existing one.
Basically, I would want to inject the repository proxies, and expect on every call they internally create a new EntityManager, so that calls are thread safe.
