Testing a Spring Boot Elasticsearch application and loading the context without starting an ES instance

Since I updated to Spring Boot 2.5, my application context won't start in the test environment.
We have several test environments. Most tests do not need an Elasticsearch instance; those that do share a single Elasticsearch test container instance.
Since the update, the creation of the repositories triggers some kind of query to Elasticsearch. That query fails and prevents the context from loading.
Is there a way to mock away the Spring Data Elasticsearch part? (Not loading it is not really an option, since most tests still need the rest of the context.)
Should I be starting an Elasticsearch instance for all integration tests? That seems like overkill, since few tests actually need it.
Any ideas are highly appreciated.
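A minimal sketch of what "mocking away" the Spring Data Elasticsearch part could look like, assuming a hypothetical OrderDocumentRepository and a plain Mockito mock via @MockBean; with the repository replaced, the rest of the context can start without any ES instance:

    // Minimal sketch, assuming a hypothetical OrderDocumentRepository: the repository bean is
    // replaced with a Mockito mock, so the repository never issues its startup query to Elasticsearch.
    import org.junit.jupiter.api.Test;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.boot.test.mock.mockito.MockBean;

    @SpringBootTest
    class ContextLoadsWithoutElasticsearchTest {

        @MockBean
        private OrderDocumentRepository orderDocumentRepository; // hypothetical Spring Data ES repository

        @Test
        void contextLoads() {
            // The rest of the application context starts normally; only the ES repository is mocked.
        }
    }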

Related

Automate testing of caching functionality in a Spring Boot application

I am wondering how we can automate testing of this functionality.
I am working on a Spring Boot microservice where we use a GemFire cache. Right now I am testing it manually for the scenarios below:
Is the data purged correctly after the TTL is reached (step 1)?
Is the data retrieved from the cache if the object exists (step 2)?
So, I know we can have a separate service which calls GemFire and makes sure that the object exists in the cache (for step 2), but I am not really sure how we can automate testing for step 1.
And what I am really wondering is: do we really need a whole new service just to test this, as overhead? Are there any tools or a better approach for testing this functionality?
Since you're using Spring Boot and VMware GemFire together, I really hope you're taking advantage of the huge amount of help and functionality spring-boot-data-gemfire provides out of the box. If you are, then you'd be delighted to know that there's yet another project, spring-test-data-geode, which can be used to write unit and integration tests when building Spring Data for Apache Geode & VMware GemFire applications. You should really give it a try, as it greatly helps in managing the scope and lifecycle of mock VMware GemFire/Apache Geode objects, along with cleaning up all resources used by real objects during integration tests.
As a side note, if you're using the data expiration functionality shipped out of the box with VMware GemFire, I really don't see an actual need (other than the peace of mind that comes with "I've tested everything I could") to include custom tests for it in your testing suite; you should only test what you own. The functionality itself is already thoroughly tested as part of the VMware GemFire / Apache Geode project, and you can see some (certainly not all) examples of such tests in the following links: ExpirationDUnitTest, RegionExpirationDistributedTest, ReplicateEntryIdleExpirationDistributedTest.
Cheers.
I have had some success using Testcontainers; here is the code used to create the container and a sample test. It works by executing gfsh commands on the container, but it is slow.
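A rough sketch of that approach (not the linked code), assuming the public apachegeode/geode Docker image and Testcontainers' GenericContainer API; the image version, ports and region name are illustrative:

    // Rough sketch: keep a Geode/GemFire container alive and drive it with gfsh via execInContainer.
    import org.testcontainers.containers.GenericContainer;

    public class GemFireContainerSupport {

        // Override the image entrypoint with a no-op so the container stays up; gfsh is then
        // invoked explicitly through execInContainer.
        static final GenericContainer<?> GEMFIRE =
                new GenericContainer<>("apachegeode/geode:1.15.1")
                        .withExposedPorts(10334, 40404) // default locator and cache-server ports
                        .withCreateContainerCmdModifier(cmd -> cmd.withEntrypoint("tail", "-f", "/dev/null"));

        static void startClusterAndCreateRegion(String regionName) throws Exception {
            GEMFIRE.start();
            // Each gfsh call spawns a new JVM inside the container, which is part of why this is slow.
            GEMFIRE.execInContainer("gfsh",
                    "-e", "start locator --name=locator --port=10334",
                    "-e", "start server --name=server --locators=localhost[10334]",
                    "-e", "create region --name=" + regionName + " --type=REPLICATE");
        }
    }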

Starting embedded servers before context loads in Spring Boot for testing

I am working on a sample application right now using Spring Boot, Spring Data JPA, and Spring Data Elasticsearch. I want to be able to run the unit tests as part of a pipeline build, but they require Elasticsearch to be running, since the service makes calls to said ES server. SQL works fine because I am using an in-memory H2 instance.
I have implemented some code to attempt to launch ES as an "embedded" server. The embedded server works just fine, but it seems like, at least from what I can tell, it is started AFTER the context loads. Most importantly, after ElasticSearchConfiguration does its thing.
I think I need to refactor the code out of AbstractElasticsearchTest into a separate class that can run before ElasticSearchConfiguration generates the client/template, but I am not sure how to do it, nor how to Google said process.
Is there some mechanism in Spring Boot that could be used to start the embedded servers prior to running any of the configurations? Or is there some way I could enhance ElasticSearchConfiguration to do it prior to creating the client/template, but only when running the unit tests?
Edit:
So, just to be a little more specific: what I am looking for is a way to either run ES 5 in "embedded" mode OR mock up the Spring Data ES code enough so that it works on the CI server. The code linked above is currently mixing unit tests with integration tests, I know, as it's making calls to a physical ES server. That's what I am trying to correct: I should be able to stub/mock enough of the underlying Spring Data code to make the unit tests think they're talking to the real deal. I can then change the tests that verify the documents made it to ES, and those for things like type-ahead searches, to be integration tests instead, so they do not run when CI or Sonar runs.
OK, so for those that might come back here in the future, this commit shows the changes I made to get ES to run as "embedded".
The nuts and bolts of it: start the node as "local" and physically return node.client(). Then, in the Spring bean method that builds the client, check whether "embedded" is turned on; if so, start the node and return its (local) Client, and if not, just build the client as normal.
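A rough sketch of that wiring (not the linked commit), assuming the ES 5.x client API and Spring Data Elasticsearch; the configuration property and class names are illustrative:

    // Rough sketch: an "embedded" toggle that starts a local in-JVM node for tests, otherwise
    // builds the normal TransportClient. Property name and settings are illustrative.
    import java.net.InetAddress;
    import java.nio.file.Files;

    import org.elasticsearch.client.Client;
    import org.elasticsearch.common.settings.Settings;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;
    import org.elasticsearch.node.Node;
    import org.elasticsearch.transport.client.PreBuiltTransportClient;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;

    @Configuration
    public class ElasticSearchConfiguration {

        @Value("${app.elasticsearch.embedded:false}") // hypothetical flag set only in test profiles
        private boolean embedded;

        @Bean(destroyMethod = "close")
        public Client elasticsearchClient() throws Exception {
            if (embedded) {
                // Start a local node inside the JVM and hand its client to Spring Data.
                Settings settings = Settings.builder()
                        .put("path.home", Files.createTempDirectory("es-home").toString())
                        .put("transport.type", "local")
                        .put("http.enabled", false)
                        .build();
                Node node = new Node(settings).start();
                return node.client();
            }
            // Otherwise talk to a real cluster over the transport protocol.
            return new PreBuiltTransportClient(Settings.EMPTY)
                    .addTransportAddress(new InetSocketTransportAddress(
                            InetAddress.getByName("localhost"), 9300));
        }

        @Bean
        public ElasticsearchTemplate elasticsearchTemplate(Client client) {
            return new ElasticsearchTemplate(client);
        }
    }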

Spring Boot integration test fails due to Lucene lock when using Hibernate Search

In my Spring Boot 1.5.10.Final project I use Hibernate Search ORM 5.6.4.Final. It works fine except for the integration tests. There is one test class with several test methods that test the search logic. If I run just this test class, everything works fine: Spring Boot starts and creates the index. If I run this test class together with all the other integration tests, every test class throws a LockObtainFailedException and the Hibernate Search tests fail.
org.apache.lucene.store.LockObtainFailedException: Lock held by this virtual machine: ...LieferantEntity\write.lock
at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:127) ~[lucene-core-5.5.5.jar:5.5.5 b3441673c21c83762035dc21d3827ad16aa17b68 - sarowe - 2017-10-20 08:57:09]
at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41) ~[lucene-core-5.5.5.jar:5.5.5 b3441673c21c83762035dc21d3827ad16aa17b68 - sarowe - 2017-10-20 08:57:09]
at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45) ~[lucene-core-5.5.5.jar:5.5.5 b3441673c21c83762035dc21d3827ad16aa17b68 - sarowe - 2017-10-20 08:57:09]
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:776) ~[lucene-core-5.5.5.jar:5.5.5 b3441673c21c83762035dc21d3827ad16aa17b68 - sarowe - 2017-10-20 08:57:09]
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126) ~[hibernate-search-engine-5.6.4.Final.jar:5.6.4.Final]
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92) ~[hibernate-search-engine-5.6.4.Final.jar:5.6.4.Final]
at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:117) ~[hibernate-search-engine-5.6.4.Final.jar:5.6.4.Final]
at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203) ~[hibernate-search-engine-5.6.4.Final.jar:5.6.4.Final]
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81) [hibernate-search-engine-5.6.4.Final.jar:5.6.4.Final]
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.6.4.Final.jar:5.6.4.Final]
at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:165) [hibernate-search-engine-5.6.4.Final.jar:5.6.4.Final]
at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:151) [hibernate-search-engine-5.6.4.Final.jar:5.6.4.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
I use the default settings. If I set exclusive_index_use to false, it works without failures, but then the test execution is very slow.
To me it seems the index is initialized during the startup of Spring Boot and interferes across the tests.
Is it possible to use Spring Boot integration tests with Hibernate Search in such a way that locks are cleanly released between tests?
Alternatively, I'm looking for a way to disable Hibernate Search indexing for all integration tests not making use of Hibernate Search.
I have also already tried the near-real-time property and different lock factories (native, simple and single), without luck.
First: do not set exclusive_index_use to false unless you are a Lucene guru. It is dangerous and probably will not behave as you want.
Now that we got that out of the way... As far as I understand, you are trying to execute integration tests in parallel on the same machine. This means integration tests will probably compete for access to the exact same index, and will write to the same index. This could lead to unpredictable results if your tests perform conflicting writes (one test erasing a document added by another test, before that test has completed).
If you really need to run tests in parallel, I would recommend executing each test in an isolated environment:
Dedicated DB, or at least dedicated DB schema
Dedicated Lucene index
etc.
In the case of Hibernate Search, you will have to find a way to use a different physical index in each test execution.
There are two ways to do that:
Just for tests, do not store the indexes on the filesystem, but directly in the heap, by setting hibernate.search.backend.directory.type to local-heap (Hibernate Search 6+) or hibernate.search.default.directory_provider to local-heap (Hibernate Search 5 and below).
It's super easy to implement, but there are a few disadvantages you should be aware of:
your testing environment will not be exactly the same as your production environment anymore
indexes will be lost after the tests have finished executing, which may make post-mortem debugging challenging (you won't be able to use Luke to inspect the state of the indexes anymore)
if your integration tests store a lot of content in the index, you might get an OutOfMemoryError.
If the disadvantages of solution 1 are too much for you, you can continue using the filesystem to store the indexes, but use a different configuration for each test execution, setting the index base path (hibernate.search.backend.directory.root for Hibernate Search 6+, or hibernate.search.default.indexBase for Hibernate Search 5 and below) to some unique path for each test execution. You will have to find out how to do that in Spring, but I would be surprised to learn it's not possible. Maybe Spring allows you to use interpolation in the properties, something like hibernate.search.backend.directory.root = /tmp/it/#{testName}?
See the documentation about directory configuration (here for Hibernate Search 6+, or here for Hibernate Search 5) for more information on how to configure index storage.
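For a Spring Boot setup like the one in the question, a minimal sketch of option 1 could look as follows (the property key follows the Hibernate Search 5 naming used above; older 5.x releases name the provider ram, and the test class name is illustrative):

    // Minimal sketch: switch the Lucene index to the heap for this test class only. Spring Boot
    // passes spring.jpa.properties.* straight through to Hibernate. Older Hibernate Search 5.x
    // releases call the provider "ram"; for Hibernate Search 6+ the key is
    // spring.jpa.properties.hibernate.search.backend.directory.type=local-heap.
    import org.springframework.boot.test.context.SpringBootTest;

    @SpringBootTest(properties =
            "spring.jpa.properties.hibernate.search.default.directory_provider=local-heap")
    class LieferantSearchIT {
        // Search tests now run against an in-memory Lucene index, so no write.lock file on the
        // filesystem is shared between test classes.
    }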

In-memory Elasticsearch

I have a scenario where I want to query the database once and then cache all of the data in memory.
I got the suggestion to use in-memory Elasticsearch. I have googled it to understand what it is and how I can implement it in my Spring Boot application, but I didn't find any appropriate solution.
Any suggestions on how I can implement this in my Spring Boot app, and what the approach would be?
There used to be an in-memory storage type in Elasticsearch 1.x, but it was removed in 2.x and later versions. If your working set is small enough, it might be mapped to memory in full, but you cannot really control that other than by having enough memory.
If you want to run an embedded / in-process Elasticsearch with your Spring Boot application: that feature was removed in 5.x, and this blog post explains why.

Can you get Spring Boot JUnit tests to use the same server?

I have some Spring Boot JUnit tests that require a somewhat lengthy server start up (I'm loading a complex domain in JPA). I've put them into a test suite, but each test kicks off a new server start up.
Is it possible to set them up in such a way that the server is only started once and each test is loaded onto it and run as if the server were started by the test itself?
Okay, so the solution here is actually built in to Spring testing: it caches ApplicationContexts for tests, as described here, as long as things like the properties are the same.
Ironically, I screwed this up by trying to speed up the tests by using test properties to limit what was loaded.
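A minimal sketch of that caching behaviour, with illustrative class names: because both test classes declare exactly the same context configuration (same properties, no differing @MockBean fields), the TestContext framework boots the server once and reuses the cached ApplicationContext for the second class.

    // Illustrative sketch: two test classes (normally in separate files) with identical
    // @SpringBootTest configuration share one cached ApplicationContext, so the lengthy JPA
    // start-up happens only once per suite.
    import org.junit.jupiter.api.Test;
    import org.springframework.boot.test.context.SpringBootTest;

    @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
    class CustomerQueryIT {
        @Test
        void loadsCustomers() {
            // first class to run: boots the application context
        }
    }

    @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
    class OrderQueryIT {
        @Test
        void loadsOrders() {
            // identical configuration: reuses the context cached for CustomerQueryIT
        }
    }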
