JaversSqlAutoConfiguration.javers() not getting called - spring

Context
I have used @TypeName("Employee") on my entities so I can avoid storing the fully qualified type name in the DB. It works as expected.
Issue
When the Spring Boot application is restarted and there are existing audit logs, I get a TYPE_NAME_NOT_FOUND exception when I call javers.findSnapshots():
org.javers.common.exception.JaversException: TYPE_NAME_NOT_FOUND type name 'Employment' not found. If you are using @TypeName annotation, remember to register this class using JaversBuilder.scanTypeName(Class). See also https://github.com/javers/javers/issues/263
My Approach so far
I have added my own copy of JaversSqlAutoConfiguration.java; call it MyJaversSqlAutoConfiguration.
I then added scanTypeName(Employee.class) in MyJaversSqlAutoConfiguration.javers().
Observation
I noticed that MyJaversSqlAutoConfiguration.javers(connectionProvider) doesn't get hit; in debug mode, org.javers.spring.boot.sql.JaversSqlAutoConfiguration.javers() gets hit instead. commitPropertiesProvider() and springSecurityAuthorProvider() in MyJaversSqlAutoConfiguration do get hit, but not MyJaversSqlAutoConfiguration.javers(ConnectionProvider connectionProvider).
Upon closer inspection, I found that org.javers.spring.boot.sql.JaversSqlAutoConfiguration.javers() doesn't have @ConditionalOnMissingBean, but commitPropertiesProvider() and springSecurityAuthorProvider() do.
Question
Is there a working example of this scanTypeName() somewhere, or should @ConditionalOnMissingBean be added?

Looks like @ConditionalOnMissingBean is missing in the javers bean definitions (in both javers-spring-boot-starters).
It can be added to JaversSqlAutoConfiguration.java and JaversMongoAutoConfiguration.java:
@Bean(name = "javers")
@ConditionalOnMissingBean
public Javers javers(ConnectionProvider connectionProvider) {
...
If you contribute a PR, we will merge it.
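For reference, once that guard is in place, an overriding bean that registers the type name could look roughly like this. This is a sketch, not code from the starter; the POSTGRES dialect is just an example, and Employee is the annotated entity from the question:
@Bean(name = "javers")
public Javers javers(ConnectionProvider connectionProvider) {
    // Build a JaVers SQL repository on top of the Spring-managed connection
    JaversSqlRepository sqlRepository = SqlRepositoryBuilder.sqlRepository()
            .withConnectionProvider(connectionProvider)
            .withDialect(DialectName.POSTGRES) // pick the dialect matching your DB
            .build();
    return JaversBuilder.javers()
            .registerJaversRepository(sqlRepository)
            .scanTypeName(Employee.class) // registers the @TypeName mapping up front
            .build();
}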

Related

Spring Boot & Vault: incomplete context initialization issue

I've run into an issue where, on rare occasions (it might take dozens of restarts), Spring doesn't initialize all properties correctly.
I define a bean of type CbKafkaConsumerConfig (my custom bean) and check its state in a thread created by a method marked with @EventListener(ApplicationReadyEvent.class), so I expect it to be completely initialized by that point. However, this is what I see:
Values that I expected to be filled are left with placeholders.
Here's how they are defined in the application.properties file. (And I've checked the spelling: it's correct, otherwise it would fail every time, not occasionally.)
config-bean-prefix.msg-topics=${cb.kafka.tc-topic}
config-bean-prefix.unexpected-error-topic=${cb.kafka.unexpected-errors-topic}
These properties are defined in Vault and I expected them to be fetched and set with the power of Spring Cloud Vault. Here you can see that Vault is present as a property source AND that these properties are populated there.
At the same time, there are other beans of the same type CbKafkaConsumerConfig in the context that refer to these properties, and yet they resolve fine for them.
Here's how the bean is defined
#Bean({"myBean"})
#ConfigurationProperties(
prefix = "config-bean-prefix"
)
public CbKafkaConsumerConfig myBeanConsumer() {
return new CbKafkaConsumerConfig();
}
And the bean itself:
@Data
public class CbKafkaConsumerConfig extends CbKafkaBaseConfig {
    @NotNull
    @Size(min = 1)
    private Collection<String> msgTopics;
    @NotNull
    private String unExpectedErrorTopic;
}
We're using Spring Boot 2.2.x; however, this issue is also present in Spring Boot 2.1.x.
It's not specific to this type of bean; others might fail as well despite being correctly set in Vault. What could be the reason for such unpredictable behavior, and what should I look into?
It turns out that, by default, Spring Cloud Vault doesn't simply fetch properties on startup; every so often it refreshes them. While refreshing, there's a short time window when the old properties have already been removed from the property source in the context but the new ones haven't been filled in yet, and this can actually happen during context initialization (highly questionable behavior in my opinion), leaving some beans corrupted.
If you don't want properties to be refreshed at runtime, just set spring.cloud.vault.config.lifecycle.enabled to false.
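That is, for example in bootstrap.properties (where Spring Cloud Vault is typically configured):
# fetch Vault properties once at startup; disable runtime lease renewal/refresh
spring.cloud.vault.config.lifecycle.enabled=false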

Gemfire NoSuchBeanDefinitionException Autowiring Cache (Spring 5.0.2 / GemFire 9.2.7)

We are migrating from Gemfire 8.2.7 to 9.2.1
As part of GemFire startup, we leverage SpringContextBootstrappingInitializer to initialize the Spring beans that @Autowire the Cache.
The same code, when migrated to GemFire 9.2.1 (along with the rest of the stack), fails on server startup with the error below.
Gemfire 8.2.7 --> Gemfire 9.2.1
Spring-data-Gemfire 1.8.4 --> 2.0.2
Spring-Boot 1.4.7 --> 2.0.0.M7
Spring --> 5.0.2
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.apache.geode.cache.Cache' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
Any pointers / changes required for GemfireConfig? Below is our JavaConfig.
@Bean
public CacheFactoryBean gemfireCache() {
    return new CacheFactoryBean();
}
Looks like the ComponentScan is kicking in prior to the Configuration processor. Any idea on controlling this behavior? This was last tested to work in Spring Boot 1.4.6 (Spring 4.3.8) and gets resolved with a @DependsOn annotation, but I just wanted to understand if there are any fundamental changes in the ordering of bean initialization with the newer Spring version.
@Configuration
@EnableAutoConfiguration(exclude = { HibernateJpaAutoConfiguration.class, BatchAutoConfiguration.class })
@Import(value = { GemfireServerConfig.class, JpaConfiguration.class, JpaConfigurableProperties.class })
@ComponentScan(basePackages = "com.test.gemfire", excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class))
To begin, let me give you some tips since there are 3 issues with your problem statement above...
1) First, you have not made it clear why or how you are using the o.s.d.g.support.SpringContextBootstrappingInitializer (docs here).
I can only assume it is because you are launching your GemFire servers with Gfsh
using the following command...
gfsh> start server --name=MyServer --cache-xml-file=/path/to/cache.xml ...
Where your cache.xml is defined similar to this. After all, this was the original intent for using the SpringContextBootstrappingInitializer.
If this is the case, why not use the Gfsh start server command's --spring-xml-location option instead? For example:
gfsh> start server --name=MyServer --spring-xml-location=/by/default/a/classpath/to/applicationContext.xml --classpath=/path/to/spring-data-gemfire-2.0.2.RELEASE.jar:...
By doing so, you no longer need to provide cache.xml just to declare the SpringContextBootstrappingInitializer in order to bootstrap a Spring container inside the GemFire JVM process. You can simply use the --spring-xml-location option and put SDG on the server's classpath when starting the server.
2) Second, it is not apparent what type of application component/bean you are injecting a GemFire Cache reference into (e.g. a Region or another application component class, like a DAO, etc.). Providing a snippet of code showing how you injected the Cache reference, i.e. the injection point using the @Autowired annotation, would have been helpful. For example:
@Service
class MyService {
    @Autowired
    private Cache gemfireCache;
    ...
}
3) #2 would have been more apparent if you included the full stack trace rather than just the NoSuchBeanDefinitionException message.
Despite the issues with your problem statement, I can infer the following:
Clearly, you are using "classpath component scanning" (with the @ComponentScan annotation) and are auto-wiring "by type", which may actually be key; I will come back to this below.
You are using Spring's @Autowired annotation on a bean class field (field injection) or property (setter injection), maybe even a constructor.
The type of this field/property (or constructor parameter) is definitely org.apache.geode.cache.Cache.
Moving on...
In general, Spring will follow dependency order first and foremost. That is, if A depends on B, then B must be created before and destroyed after A. Typically, Spring will and can honor this without incident.
Beyond "dependency order" bean creation and satisfying dependencies between beans (including with the #DependsOn annotation), the order of bean creation is pretty loosely defined.
There are several factors that can influence it, such as "registration order" (i.e. the order in which bean definitions are declared, which is particularly true for beans defined in XML), "import order" (when using the @Import annotation on @Configuration classes), Java reflection (includes @Bean definitions declared in @Configuration classes), etc. Configuration organization is definitely important and should not be taken lightly.
This is 1 reason why I am not a big proponent of "classpath component scanning". While it may be convenient, it is always better, IMO, to be more "explicit" in your configuration, and in the organization of your configuration, for reasons outlined here, in addition to other non-apparent limitations. At worst, you should definitely be limiting the scope of the scan.
Ironically, you excluded/filtered the 1 thing that could actually help your organizational concerns... components of type @Configuration:
... excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class)
NOTE: given the exclusion, are you certain you did not exclude the 1 @Configuration class containing your CacheFactoryBean definition? I suppose not, since you say this worked after including the @DependsOn annotation.
Clearly there is a dependency defined between some application component of yours (??) and a bean of type o.a.g.cache.Cache (using @Autowired), yet Spring is failing to resolve it.
My thinking is, Spring cannot resolve the Cache dependency because either 1) the GemFire cache bean has not been created yet and 2) Spring cannot find an appropriate bean definition of the desired type (i.e. o.a.g.cache.Cache) in your configuration that would resolve the dependency and force the GemFire cache to be created first, or 3) the GemFire cache bean has been created first but Spring is unable to resolve its type as o.a.g.cache.Cache.
I have encountered both scenarios before and it is not exactly clear to me when each scenario happens because I simply have not traced this through yet. I have simply corrected it and moved on. I have noticed that it is version related though.
There are several ways to solve this problem.
If the problem is the latter, 3), then simply declaring your dependency as type o.a.g.cache.GemFireCache should resolve the problem. For example:
@Repository
class MyDataAccessObject {
    @Autowired
    private GemFireCache gemfireCache;
    ...
}
The reason for this is because the o.s.d.g.CacheFactoryBean class's getObjectType() method returns a Class type generically extending o.a.g.cache.GemFireCache. This was by design since o.s.d.g.client.ClientCacheFactoryBean extends o.s.d.g.CacheFactoryBean, though I probably would not have done it that way if I had created these classes. However, it is consistent with the fact that the actual cache type in GemFire is o.a.g.internal.cache.GemFireCacheImpl which indirectly implements both the o.a.g.cache.Cache interface as well as the o.a.g.cache.client.ClientCache interface.
If your problem is the former (1) and 2), which is a bit trickier), then I would suggest you employ a smarter organization of your configuration, separated by concern. For example, you can encapsulate your GemFire configuration with:
@Configuration
class GemFireConfiguration {
    // define GemFire components (e.g. CacheFactoryBean) here
}
Then, your application components, where some are dependent on GemFire components, can be defined with:
@Configuration
@Import(GemFireConfiguration.class)
class ApplicationConfiguration {
    // define application beans, including beans dependent on GemFire components
}
By importing the GemFireConfiguration you are ensuring the GemFire components/beans are created (instantiated, configured and initialized) first.
You can even employ more targeted, limited "classpath component scanning" at the ApplicationConfiguration class-level in cases where you have a large number of application components (services, DAO, etc).
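For instance, the same ApplicationConfiguration could scan just the application packages (a sketch; the package name is hypothetical):
@Configuration
@Import(GemFireConfiguration.class)
@ComponentScan(basePackages = "com.example.app") // hypothetical package; keep the scan narrowly scoped
class ApplicationConfiguration {
}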
Then, you can have your main, Spring Boot application class drive all this:
@Configuration
@Import(ApplicationConfiguration.class)
class MySpringBootApplication {
    public static void main(String[] args) {
        SpringApplication.run(MySpringBootApplication.class, args);
    }
}
The point is, you can be as granular as you choose. I like to encapsulate configuration by concern and clearly organize the configuration (using imports) to reflect the order in which I want my components created (constructed, configured and initialized).
Honestly, I basically organize my configuration in the order of dependencies. If my application ultimately depends on a data store and cannot function without it, then it makes sense to ensure that is initialized first; otherwise, what is the point of starting the application?
Finally, you can always rely on the @DependsOn annotation, as you have appropriately done, to ensure that Spring will create the component before the component that expects it.
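For example (a sketch; MyService is a hypothetical dependent bean, and gemfireCache is the bean name from the JavaConfig above):
@Bean
@DependsOn("gemfireCache") // force the cache bean to be created before this one
public MyService myService() {
    return new MyService();
}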
Based on the fact that the @DependsOn annotation solved your problem, I would say this is an organizational problem and falls under the 1) / 2) category I outlined above.
I am going to dig into this a bit deeper and respond to my answer in comments with what I find.
Hope this helps!
-John

Spring autowiring based on service availability

I have a need to conditionally create one of three possible implementations of a service, depending on the environment detected by a Spring application at runtime. If Service A is available, then I want to create a concrete implementation class that uses Service A as a dependency. If Service A is not available, then I want to create an implementation using Service B as a dependency. And so on.
Classes which depend on the implementation will Autowire the Interface and not care what the underlying Service was that got selected for the particular environment.
My first stab at this was to implement multiple @Bean methods which either return a bean or null, depending on whether the service is available, and to then have a separate @Configuration class which uses @Autowired(required=false) for the two possible services, conditionally creating the implementation depending on which of the @Autowired fields is non-null.
The problem here is that when required=false, Spring doesn't appear to care whether it waits around for candidates to be constructed; that is to say, the class which tries to pick the implementation might be constructed before one or both of the required=false beans gets constructed, so one or both might always be null, regardless of whether they would have initialized correctly.
It kind of feels like I'm going against the grain at this point, so I'm looking for advice on the "right" way to do this sort of thing, where a whole set of beans might get switched out based on the availability of some outside service or environment.
Profiles don't look like the right answer, because I won't know until after my Service beans try to initialize which implementation I want to choose; I certainly won't know it at the time I create the context.
@Order doesn't achieve the goal either. Nor does @Conditional with a test for the existence of the bean (because it still might not be constructed yet). Same problem with FactoryBean: it does no good to check for the existence of beans that might not have been constructed at the time the FactoryBean is asked to create an instance.
What I really need to do is create a Bean based on the availability of other beans, but only AFTER those beans have at least had a chance to try to initialize.
Spring Profiles is your friend. You can set the current profile by way of environmental variable, command-line argument, and other methods. You can annotate a Spring-managed component so that it's created for a certain profile.
Spring Profiles from the Spring Documentation
Well, in this case it turned out to be a tangential mistake that caused the whole wrong behavior.
To give some background, my first, naive (but workable) approach looked like this:
@Autowired(required=false)
@Qualifier(RedisConfig.HISTORY)
private RLocalCachedMap<String, History> redisHistoryMap;

@Autowired(required=false)
@Qualifier(HazelcastConfig.HISTORY)
private IMap<String, History> hazelcastHistoryMap;

// RequestHistory is an interface
@Bean
public RequestHistory requestHistory() {
    if (redisHistoryMap != null) {
        return new RedisClusteredHistory(redisHistoryMap);
    } else if (hazelcastHistoryMap != null) {
        return new HazelcastClusteredHistory(hazelcastHistoryMap);
    } else {
        return new LocalRequestHistory(); // dumb hashmap
    }
}
In other @Configuration classes, if the beans that get @Autowired here aren't available (due to missing configuration, exceptions during initialization, etc.), the @Bean methods that create them return null.
The observed behavior was that this @Bean method was getting called after the RLocalCachedMap<> @Bean method but before Spring attempted to create the IMap<> by calling its @Bean method. I had incorrectly thought that this had something to do with required=false, but in fact that had nothing to do with it.
What actually happened was that I accidentally used the same constant for both @Bean names (and consequently @Qualifiers), so presumably Spring couldn't tell the difference when it was calculating its dependency graph for this @Configuration class, because the two @Autowired beans appeared to be the same thing (they had the same name).
(There's a secondary reason for using @Qualifier in this case, which I won't go into here, but suffice it to say it's possible to have many maps of the same type.)
Once I qualified the names better, the code did exactly what I wanted it to, albeit in a way that's somewhat inelegant/ugly.
At some point I'll go back and see if it looks more elegant / less ugly, and works just as well, to use @Conditional and @Primary instead of the if/else foulness.
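For anyone exploring that route, a rough sketch of the idea might look like this (hypothetical method names; note that Spring Boot recommends @ConditionalOnBean only in auto-configuration classes, since it depends on which bean definitions have already been processed):
@Bean
@Primary
@ConditionalOnBean(name = "redisHistoryMap") // only register if the Redis map bean was created
public RequestHistory redisRequestHistory(@Qualifier("redisHistoryMap") RLocalCachedMap<String, History> map) {
    return new RedisClusteredHistory(map);
}

@Bean
public RequestHistory localRequestHistory() {
    return new LocalRequestHistory(); // fallback; @Primary wins when both beans exist
}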
The lesson here is that if you explicitly name beans, make absolutely sure your names are unique across your application, even if you plan to swap things around like this.

Neo4j/SDN warning: "No identity field found for class of type" for exception class

In my Neo4j/Spring Data Neo4j project I have the following exception class:
public class CriterionNotFoundException extends NotFoundDomainException {
    private static final long serialVersionUID = -2226285877530156902L;

    public CriterionNotFoundException(String message) {
        super(message);
    }
}
During application startup I see the following WARN:
WARN o.s.d.n.m.Neo4jPersistentProperty - No identity field found for class of type: com.example.domain.dao.decision.exception.DecisionAlreadyExistsException when creating persistent property for field: null
Why is Neo4j/SDN looking for an identity field in this class? How do I correctly configure my application in order to avoid this warning?
You can ignore this warning; it is produced by SDN when building metadata for the Spring Data REST integration. It should not be doing this for exceptions, of course, and we'll have this fixed.
One way "to correctly configure [your] application" would be add EnableNeo4jRepositories and EntityScan annotations to your SpringBootApplication (or your config bean) as mentioned here and specify the names of your packages with Neo4J relevant classes.
I've debugged the SDN/Neo4j code for only 5 minutes, so my guesses may be off, but I believe those warnings are generated when you don't specify the packages to scan for your entities and repositories. I'm guessing that in that case Spring Boot + Neo4j mapping scans each and every class in your project, and if a class has some fields but nothing resembling an "id" field, it spits out this warning. (So adding a Long id field to the classes with warnings may be another, yes, very ugly, workaround as well.)
I've seen those warnings vanish when I explicitly specified package names in my project using Spring Boot 2.0.6 + spring-data-neo4j 5.0.11.
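For illustration, the explicit scanning configuration suggested above could look roughly like this (a sketch; the package names are hypothetical):
@SpringBootApplication
@EnableNeo4jRepositories("com.example.repository") // scan only these packages for repositories
@EntityScan("com.example.domain")                  // and these for mapped entity classes
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}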

Spring Data JPA save() throws NPE

I wrote a web service with Spring Boot, using Spring Data JPA for persistence.
The web service has some static objects (in a singleton bean) that regularly need to be backed up to my database.
Sometimes! (This sucks... I don't really know what happens) when I call
ObjectType updated = myRepository.save(existingObject)
I get a java.lang.NullPointerException, without a usable stack trace, as the method doing this runs via @Scheduled.
I tried debugging, and existingObject seems to be absolutely fine. The error only occurs when existingObject is actually NOT a new object (i.e. when id != 0).
P.S. I am using Spring Boot, therefore not really using the EntityManager directly. I only use the @Autowired myRepository.
I'm seeing something similar. During save, it seems the object is re-fetched from the DB (perhaps to see which fields were altered?), but a ManyToOne relationship is not loaded (even though the FetchType is explicitly set to EAGER).
For some reason, a compareTo is called on the relationship. The related object isn't null, but it only has its ID filled in (presumably because that was available in the object that was fetched from the DB). All other fields are null.
When the compareTo then does its stuff, a NullPointerException follows.
As to the actual solution, I don't know yet, as I would have expected FetchType.EAGER to ensure the relationship is loaded. Hopefully this helps someone find the root cause.
(I would have added this as a comment as it doesn't actually answer the question, but StackOverflow won't let me due to insufficient reputation...)
You haven't provided enough information. If that line is where the NullPointerException occurs, then the only possibilities are that myRepository is null or existingObject is null. However, it's possible the NullPointerException is happening as a result of something inside the save. Wrap the code in a try/catch and log the exception stack trace to a file.
If needed, check out the logging customization notes here:
http://projects.spring.io/spring-boot/docs/spring-boot/README.html
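As a rough sketch of that suggestion (assuming the scheduled backup method from the question; the schedule value and the SLF4J-style log field are illustrative):
@Scheduled(fixedRate = 60_000) // example interval
public void backup() {
    try {
        ObjectType updated = myRepository.save(existingObject);
    } catch (Exception e) {
        // Logs the full stack trace that the scheduled run otherwise hides
        log.error("Backup save failed", e);
    }
}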
