Integration Testing with Spring Boot, RabbitMQ and UUID as primary key

I have a Spring Boot application written in Kotlin with RabbitMQ and PostgreSQL. Primary keys in the database are of type UUID and are generated by the application in a dedicated service.
For integration testing this service is stubbed to generate UUIDs sequentially instead of randomly, because they are checked against the expected data set. The stub is a simple incremental counter.
The stub service has prototype scope so that every repository gets its own UUID generator.
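A minimal sketch of such a stub, with hypothetical names (UuidGenerator stands in for the real production interface, which is not shown in the question):

    import org.springframework.beans.factory.config.ConfigurableBeanFactory
    import org.springframework.context.annotation.Scope
    import org.springframework.stereotype.Service
    import java.util.UUID

    // Hypothetical production interface; the real service returns random UUIDs.
    interface UuidGenerator {
        fun next(): UUID
    }

    // Test stub: prototype scope, so each injection point gets its own counter.
    @Service
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    class SequentialUuidGenerator : UuidGenerator {
        private var counter = 0L

        // Produces ...-000000000001, ...-000000000002, and so on.
        override fun next(): UUID = UUID(0L, ++counter)
    }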
This approach worked perfectly until I had to test cases that involve RabbitMQ.
The flow is the following (a sketch of the two write points follows the list):
1. A request to the controller is performed.
2. A record is saved in the operations history table.
3. A message is sent to the RabbitMQ topic.
4. The message is returned back to the application.
5. A new record is saved in the operations history table.
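Roughly, the two places that write to the history table look like this; every name here (controller, repository, entity, exchange and queue) is a hypothetical stand-in:

    import org.springframework.amqp.rabbit.annotation.RabbitListener
    import org.springframework.amqp.rabbit.core.RabbitTemplate
    import org.springframework.stereotype.Component
    import org.springframework.web.bind.annotation.PostMapping
    import org.springframework.web.bind.annotation.RestController

    // Placeholder types; the real entity and Spring Data repository are not shown here.
    class OperationHistory
    interface OperationRepository { fun save(op: OperationHistory): OperationHistory }

    @RestController
    class OperationController(
        private val repository: OperationRepository,   // this injection point gets one generator
        private val rabbitTemplate: RabbitTemplate
    ) {
        @PostMapping("/operations")
        fun create() {
            repository.save(OperationHistory())                                        // step 2
            rabbitTemplate.convertAndSend("operations.topic", "op.created", "payload") // step 3
        }
    }

    @Component
    class OperationListener(
        private val repository: OperationRepository    // the listener container may resolve another one
    ) {
        @RabbitListener(queues = ["operations.queue"]) // step 4: the message comes back
        fun onMessage(payload: String) {
            repository.save(OperationHistory())        // step 5: duplicate primary key here
        }
    }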
When this test is run alone, it passes and everything works as expected.
When this test is run together with the other tests, it fails: an error occurs at step 5, when the application tries to save the new record to the database.
Via debug I have discovered the following:
In the first case, when the test is run alone, the repository instance and the UUID generator service instance are the same at steps 2 and 5.
In the second case, when the test is run together with the others, the repository instance and the UUID generator service instance are different at steps 2 and 5. As a result there are two counters that are not synchronized, which leads to generation of the same id and a primary key violation at step 5.
As I understand it, this happens because Spring RabbitMQ creates listener containers that may have their own bean instances. It is still a riddle to me how it chooses between creating a new bean instance and reusing an existing one.
Can you propose a solution or a different approach to implementing this test?

Related

Spring Boot autowire and cross-feature shared data

As per Serenity BDD, each scenario is independent.
The Serenity session variable is lost for the second scenario.
I am integrating Spring Boot using @SpringBootTest.
My code works perfectly: the Background section of the feature file calls a URL, and I wrote a piece of code so as not to call the same service URL a second time; instead it takes the data stored in a HashMap on an object autowired with @Autowired.
My question is: is it against BDD for data to be maintained across more than one feature file or scenario?
Session data is reset for each scenario (by intent - scenarios are meant to be independent of each other). If you need to share data across multiple scenarios, you will need to implement this yourself.
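If you do need it, one way to implement that sharing yourself is a plain singleton Spring bean: unlike the Serenity session, the Spring test context is reused across scenarios (as long as it is not dirtied), so a bean-level cache survives. A hedged sketch, with hypothetical names:

    import org.springframework.stereotype.Component
    import java.util.concurrent.ConcurrentHashMap

    // Singleton by default, so it outlives individual scenarios for as long as
    // the Spring test context itself is reused between feature files.
    @Component
    class SharedResponseCache {
        private val cache = ConcurrentHashMap<String, String>()

        // Invokes the loader (e.g. the actual service call) only the first time a URL is seen.
        fun getOrLoad(url: String, loader: (String) -> String): String =
            cache.computeIfAbsent(url) { loader(it) }
    }

Autowire this into the step definitions in place of the per-scenario session variable.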

Managing database content on a JHipster server

How would one go about managing the data in their PostgreSQL database on their JHipster-generated server? My goal is to be able to periodically check the items in the database and perform certain tasks based on the database contents.
I'm new to using JHipster and I'm not sure how I'd go about adding or removing entities as well as adding items to entities on the server. I understand that services facilitate doing these operations for the client-side, but I can't see how I would use the same approach to do what I need on the server (if this is even the correct approach).
To schedule a task on the backend you can annotate a public method of a service with @Scheduled. You can find an example in the code generated by JHipster in the UserService class; look at the removeNotActivatedUsers() method.
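For instance, a sketch along those lines (the service name, cron expression and body are assumptions; JHipster's generated code already enables scheduling via @EnableScheduling):

    import org.springframework.scheduling.annotation.Scheduled
    import org.springframework.stereotype.Service

    @Service
    class DatabaseCheckService {
        // Runs at the top of every hour; adjust the cron expression as needed.
        @Scheduled(cron = "0 0 * * * ?")
        fun checkDatabaseContents() {
            // Query the entity repositories here and perform tasks based on the contents.
        }
    }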

Spring Batch with unknown datasource

I have a working Spring Boot application which embeds a Spring Batch job. The job is not run on a schedule; instead we kick it with an endpoint (a sketch of that kick follows the list). It is working as it should. The basics of the batch are:
Kick the endpoint to start the job
Reader reads from an input file
Processor reads from an Oracle database using a JPA repository and a simple Spring datasource config
Writer writes to an output file
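The endpoint kick looks roughly like this; the controller, path and job names are hypothetical:

    import org.springframework.batch.core.Job
    import org.springframework.batch.core.JobParametersBuilder
    import org.springframework.batch.core.launch.JobLauncher
    import org.springframework.web.bind.annotation.PostMapping
    import org.springframework.web.bind.annotation.RestController

    @RestController
    class JobController(
        private val jobLauncher: JobLauncher,
        private val migrationJob: Job
    ) {
        @PostMapping("/jobs/migration")
        fun kick() {
            val params = JobParametersBuilder()
                .addLong("run.id", System.currentTimeMillis()) // makes every launch a new job instance
                .toJobParameters()
            jobLauncher.run(migrationJob, params)
        }
    }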
However, there are new requirements:
The schema of the repository database is from here on unknown at application startup. The tables are the same; it is just an unknown schema. This is out of our control, and you might think it is stupid, but there are reasons for it and it can't be changed. With the current functionality this means that we need to reconfigure the datasource once we know the new schema name, and restart the application. This job will be run a number of times while migrating from one system to another, so it has a limited lifecycle and we just need a "quick fix" to be able to use it without rewriting the whole app. So what I would like to do is:
Send the schema name as a query param to the application, put it in the job parameters, and then get a new datasource when the processor reads from the repository. Would this be doable at all using Spring Batch? Any help appreciated!
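One possible shape of the job-parameter half of that idea, as a hedged sketch: a step-scoped wrapper around the existing datasource that switches each connection to the schema taken from the job parameters. The bean names are assumptions, and the remaining (non-trivial) work is pointing the JPA repository's EntityManager at this wrapper instead of the original pool:

    import org.springframework.batch.core.configuration.annotation.StepScope
    import org.springframework.beans.factory.annotation.Qualifier
    import org.springframework.beans.factory.annotation.Value
    import org.springframework.context.annotation.Bean
    import org.springframework.context.annotation.Configuration
    import org.springframework.jdbc.datasource.DelegatingDataSource
    import java.sql.Connection
    import javax.sql.DataSource

    // Wrapper that switches every connection it hands out to the requested schema.
    // Declared 'open' so Spring can proxy the step-scoped bean.
    open class SchemaSettingDataSource(target: DataSource, private val schema: String) :
        DelegatingDataSource(target) {
        override fun getConnection(): Connection =
            super.getConnection().also { it.schema = schema } // JDBC setSchema; Oracle drivers support it
    }

    @Configuration
    class SchemaAwareDataSourceConfig {

        // Step-scoped: created per step execution, so the schema can differ per run.
        @Bean
        @StepScope
        fun schemaAwareDataSource(
            @Qualifier("dataSource") target: DataSource,        // the statically configured pool
            @Value("#{jobParameters['schema']}") schema: String // arrived via the query param
        ): DataSource = SchemaSettingDataSource(target, schema)
    }

The kick endpoint would then add .addString("schema", schemaParam) to the JobParametersBuilder shown earlier.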

Starting embedded servers before context loads in Spring Boot for testing

I am working on a sample application right now using Spring Boot, Spring Data JPA, and Spring Data Elasticsearch. I want to be able to run the unit tests as part of a pipeline build, but they require a running Elasticsearch instance, because the service makes calls to said ES server. SQL works fine because I am using an in-memory H2 instance.
I have implemented some code to attempt to launch ES as an "embedded" server. The embedded server works just fine, but it seems, at least from what I can tell, that it is started AFTER the context loads, and most importantly after ElasticSearchConfiguration does its thing.
I think I need to refactor the code out of AbstractElasticsearchTest into a separate class that can run before ElasticSearchConfiguration generates the client/template, but I am not sure how to do it, nor how to Google said process.
Is there some mechanism in Spring Boot that could be used to start the embedded servers prior to running any of the configurations? Or is there some way I could enhance ElasticSearchConfiguration to do it prior to creating the client/template, but only when running the unit tests?
Edit:
So, just to be a little more specific: what I am looking for is a way to either run ES 5 in "embedded" mode OR mock out enough of the Spring Data ES code so that it works on the CI server. The code linked above is currently mixing unit tests with integration tests, I know, as it makes calls to a physical ES server. That is what I am trying to correct: I should be able to stub/mock enough of the underlying Spring Data code to make the unit test think it is talking to the real deal. I can then turn the tests that check whether documents made it to ES, and things like type-ahead searches, into integration tests so they do not run when CI or Sonar runs.
OK, so for those who might come back here in the future, this commit shows the changes I made to get ES running as "embedded".
The nuts and bolts of it were to start the node as "local" and then physically return node.client(). In the Spring bean method that produces the client, check whether "embedded" is turned on: if so, start the node and return its (local) Client; if not, build the client just as normal.
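In outline, and only as a sketch against the ES 5 Node API (the property name, paths and settings are assumptions and vary by minor version):

    import org.elasticsearch.client.Client
    import org.elasticsearch.common.settings.Settings
    import org.elasticsearch.node.Node
    import org.springframework.beans.factory.annotation.Value
    import org.springframework.context.annotation.Bean
    import org.springframework.context.annotation.Configuration

    @Configuration
    class ElasticSearchConfiguration(
        // Flip this to true in src/test/resources for the CI build.
        @Value("\${elasticsearch.embedded:false}") private val embedded: Boolean
    ) {
        @Bean
        fun client(): Client =
            if (embedded) {
                // Start an in-process node on the local transport and hand out its client.
                Node(
                    Settings.builder()
                        .put("path.home", "build/es")   // scratch directory for the node's data
                        .put("transport.type", "local")
                        .put("http.enabled", false)
                        .build()
                ).start().client()
            } else {
                TODO("build the regular TransportClient against the real cluster, as before")
            }
    }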

Can I duplicate a web service for testing?

I have a REST web service exposed at http://server:8080/my-production-ws by JBoss (7). My Spring (3.2) configuration has a datasource which points to a my-production-db database. So far so good.
I want to test that web service from the client side, including PUT/POST operations but I obviously don't want my tests to affect the production database.
Is there an easy way to have spring auto-magically create another web service entry point at http://server:8080/my-test-ws or maybe http://server:8080/my-production-ws/test that will have the exact same semantics as the production web service but will use a my-test-db database as a data source instead of my-production-db?
If this is not possible, what is the standard approach to integration testing in that situation?
I'd rather not duplicate every single method in my controllers.
Check the Spring profiles functionality; this should solve the problem. With it, it is possible to create two datasources with the same bean name in different profiles and then activate only one of them, depending on a parameter passed to the JVM.
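A hedged sketch of that idea in Kotlin (the original question is on Spring 3.2, where the same thing can be expressed in XML); the JDBC URLs are placeholders:

    import org.springframework.context.annotation.Bean
    import org.springframework.context.annotation.Configuration
    import org.springframework.context.annotation.Profile
    import org.springframework.jdbc.datasource.DriverManagerDataSource
    import javax.sql.DataSource

    @Configuration
    class DataSourceConfig {

        @Bean(name = ["dataSource"])
        @Profile("production")
        fun productionDataSource(): DataSource =
            DriverManagerDataSource("jdbc:postgresql://server/my-production-db")

        @Bean(name = ["dataSource"])
        @Profile("test")
        fun testDataSource(): DataSource =
            DriverManagerDataSource("jdbc:postgresql://server/my-test-db")
    }

Start the server with -Dspring.profiles.active=test and the same controllers run unchanged against my-test-db.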
