I have a problem with integration tests.
We're using Spring Boot 1.4.4 + Spring Batch + Testcontainers + Postgres.
Each integration test is annotated with:
@RunWith(SpringRunner.class)
@ActiveProfiles(value = { "integrationtest" })
@SpringBootTest(classes = ServiceApplication.class)
The problem is:
It looks like each integration test that contains a @MockBean annotation creates a new application context.
Each new context creates a new connection pool with 10 connections.
But the previous context doesn't release its connections.
So before each such test I can query the connection count from Postgres:
select sum(numbackends) from pg_stat_database;
and see that each test adds 10 new connections.
The 10th test fails because of the 100-connection limit.
Could you advise how to solve it?
It looks like the combination of annotations plus the set of @MockBean classes is used to compute a kind of "hash" (cache key).
Each distinct "hash" gets its own context, and Spring caches it.
Each context creates its own connection pool with 10 connections by default.
While all contexts sit in the cache, all their connection pools hold on to their connections - that is the reason for my problem.
As jusermar10 said, you can fix it using the @DirtiesContext annotation.
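A minimal sketch of that fix, combining the annotations from the question with @DirtiesContext (the test class and mocked bean are placeholders):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@ActiveProfiles(value = { "integrationtest" })
@SpringBootTest(classes = ServiceApplication.class)
@DirtiesContext // close this context (and its connection pool) after the class instead of caching it
public class SomeIntegrationTest {

    @MockBean
    private SomeCollaborator someCollaborator; // placeholder for the bean you mock

    @Test
    public void runsScenario() {
        // ... test body ...
    }
}

The trade-off is speed: every class annotated like this pays for a full context restart, but the pool of the previous context is closed, so the Postgres connection count stays flat.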
I intend to write integration tests to check whether my listener/handlers work properly.
I am using Spring Boot 2.1.9.RELEASE with the corresponding AMQP dependency.
I have written a custom sender (publisher) and listener (receiver). When running the publishing test I am able to debug into my publisher, but I am not able to debug into my receiver, which I tried to annotate with @RabbitListener(queues = "myQueue") on method level, and alternatively with @RabbitListener(queues = "myQueue") on class level in conjunction with @RabbitHandler on method level.
What do I have to prepare to run such integration tests successfully?
I have already googled a lot and found some suggestions to use RabbitListenerTestHarness and the like, but this never worked. With RabbitListenerTestHarness I always get an error that no such bean could be found.
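For reference, these are roughly the two receiver variants I tried (class names are placeholders, the queue name is the real one):

import org.springframework.amqp.rabbit.annotation.RabbitHandler;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

// Variant 1: @RabbitListener on method level
@Component
public class MyMethodLevelReceiver {

    @RabbitListener(queues = "myQueue")
    public void receive(String message) {
        // a breakpoint here is never hit during the test
    }
}

// Variant 2: @RabbitListener on class level with @RabbitHandler
@Component
@RabbitListener(queues = "myQueue")
public class MyClassLevelReceiver {

    @RabbitHandler
    public void receive(String message) {
        // a breakpoint here is never hit either
    }
}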
My Spring Boot application also starts a gRPC service along with its REST (HTTP) service. I've written specific tests for gRPC and REST. When I run the Gradle test task these tests run sequentially; however, there is no reason they can't run in parallel.
What I'm shooting for here is a single instance of my Spring Boot application running while the tests are executed in parallel.
I've tried setting up the test section in my Gradle file so it has 'forkCount', and I also tried setting options such that parallel="classes", but this produces an error about 'parallel' being an unknown property (maybe a JUnit 5 thing?):
test {
    options {
        parallel = "classes"
        // forkCount = 2
    }
}
The forkCount option is not what I'm looking for, since it will start multiple instances of the Spring application.
I've also tried removing the @RunWith from the test classes and making a separate test class (which has the @RunWith annotation) that has the following method in it:
@Test
public void testRunner() {
    JUnitCore.runClasses(ParallelComputer.classes(), GrpcTests.class, RestTests.class);
}
But the tests still appear to run sequentially.
I've tried several other things as well; sorry, I don't have all of them handy.
Goal
Ideally what I'm hoping for is a single instance of my Spring Boot app running while the test classes run in parallel (bonus kudos if I can get the methods to run in parallel too).
Java Version: "1.8.0_171"
Spring Boot Version: 2.0.4.RELEASE
Per the recommendation I tried adding
@Test
public void contextLoads() throws Exception {
}
and adding the 'maxParallelForks' entry in the Gradle file. I had already been using the @SpringBootTest annotation, but this behaved the same as when I used 'forkCount' in that at least 2 instances were started, as can be seen in the test shutdown log:
2019-04-25 10:24:17.245 LogLevel=INFO 53838 --- shutting down gRPC server since JVM is shutting down
...
2019-04-25 10:24:30.125 LogLevel=INFO 53839 --- shutting down gRPC server since JVM is shutting down
You can see I get two shutdown messages and the PIDs are shown (53838 & 53839).
You need to combine @SpringBootTest with maxParallelForks.
Annotate your test classes with @SpringBootTest. @SpringBootTest will boot up a Spring Boot context that will be cached across all your tests.
"A nice feature of the Spring Test support is that the application context is cached in between tests, so if you have multiple methods in a test case, or multiple test cases with the same configuration, they only incur the cost of starting the application once"
See:
https://spring.io/guides/gs/testing-web/
Add the following to your build.gradle to run multiple tests at the same time.
tasks.withType(Test) {
    maxParallelForks = 4 // your choice here
}
See https://guides.gradle.org/performance/#parallel_test_execution
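For illustration, a sketch of the test side using the two test classes named in the question (test method bodies are placeholders):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

// GrpcTests.java
@RunWith(SpringRunner.class)
@SpringBootTest
public class GrpcTests {

    @Test
    public void grpcServiceResponds() {
        // ... call the gRPC service here ...
    }
}

// RestTests.java
@RunWith(SpringRunner.class)
@SpringBootTest
public class RestTests {

    @Test
    public void restServiceResponds() {
        // ... call the REST service here ...
    }
}

One caveat: the context cache lives inside a single JVM, so each Gradle fork still boots its own application instance - which matches the two PIDs seen in the question's shutdown log.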
I'm using a CachingConnectionFactory to create a JmsTemplate, roughly like this:
@Bean
public JmsTemplate replyJmsTemplate() {
    JmsTemplate jmsTemplate = new JmsTemplate();
    jmsTemplate.setConnectionFactory(...); // the CachingConnectionFactory
    jmsTemplate.setDeliveryPersistent(...);
    jmsTemplate.setExplicitQosEnabled(...);
    jmsTemplate.setTimeToLive(...);
    return jmsTemplate;
}
then I call receiveSelected in my test classes:
replyJmsTemplate.receiveSelected(...);
I do this in each of my test classes. When I get the expected message, my test is finished and I want the current egress flow (to the queue) to be closed. But it isn't. I execute all my test classes sequentially (about 70 classes), so I end up with 70 active egress flows. All egress flows are released only after all tests have finished. As a result I sometimes reach the maximum egress flow count, which is set to 100 on my server. My question is: how do I close the egress flow after each test is executed? I guess I could manually close the session in the test, but I cannot retrieve the session from the JmsTemplate.
Since you mention JmsTemplate as a @Bean, I can guess that you use the Spring Testing Framework in your JUnit tests. In that case you should consider closing the ApplicationContext after each test class. The @DirtiesContext annotation comes to the rescue:
* Test annotation which indicates that the
* {@link org.springframework.context.ApplicationContext ApplicationContext}
* associated with a test is <em>dirty</em> and should therefore be closed
* and removed from the context cache.
https://docs.spring.io/spring/docs/5.0.7.RELEASE/spring-framework-reference/testing.html#dirtiescontext
If each class has its own Spring @Configuration, add @DirtiesContext to each test class - it will close the test application context, destroying the connection factory.
Finally, I just used a SingleConnectionFactory so that each session is closed after each test case finishes.
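A sketch of that change, assuming the vendor ConnectionFactory is available for injection (the QoS settings from the original bean are omitted):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.connection.SingleConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

@Bean
public JmsTemplate replyJmsTemplate(ConnectionFactory vendorConnectionFactory) {
    // Unlike CachingConnectionFactory, SingleConnectionFactory reuses one
    // Connection but does not cache Sessions: each Session is closed as soon
    // as receiveSelected(...) returns, so no egress flow lingers per test class
    return new JmsTemplate(new SingleConnectionFactory(vendorConnectionFactory));
}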
Is it possible to start a broker, say in memory, that can be used to execute automated test cases using Spring Integration MQTT?
I've tried achieving this with ActiveMQ (following https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-messaging.html) but somehow didn't succeed; does anyone have a short working example?
It's not the responsibility of Spring Integration (or Spring Boot) to provide an embedded broker for such a protocol. If one exists, we could consider implementing an auto-configuration for it, similar to what we do for embedded RDBMS, JMS and MongoDB. You really need to consult the ActiveMQ documentation.
Looks like we can do it like this in the test class:
private static BrokerService activeMQBroker;
...
@BeforeClass
public static void setup() throws Exception {
    activeMQBroker = new BrokerService();
    activeMQBroker.addConnector("mqtt://localhost:1883");
    activeMQBroker.setPersistent(false);
    activeMQBroker.setUseJmx(false);
    activeMQBroker.start();
}
I didn't try it, but this is exactly what I do to test against STOMP.
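If you go this route, you will presumably also want to stop the broker when the class finishes, along these lines:

@AfterClass
public static void tearDown() throws Exception {
    if (activeMQBroker != null) {
        activeMQBroker.stop(); // frees the MQTT port for subsequent test classes
    }
}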
I'm working on a Spring Boot application with Liquibase integration to set up the database. We use a different user for the database changes, which we configured using the application.properties file:
liquibase.user=abc
liquibase.password=xyz
liquibase.url=jdbc:postgresql://something.eu-west-1.rds.amazonaws.com:5432/app?ApplicationName=${appName}-liquibase
liquibase.enabled=true
liquibase.contexts=dev,postgres
We currently have 3 different microservices deployed, and we noticed that for every running instance Liquibase opens 10 connections and never closes them unless we stop the application. This basically means that in development we regularly hit the connection limit of our Amazon RDS instance.
Right now, in development, 40 of 74 active connections are occupied by Liquibase. If we ever go to production with this, with autoscaling enabled for all the microservices, we would have to over-scale the database just to avoid hitting connection limits.
Is there a way to
tell Liquibase not to use a connection pool of 10 connections, or
tell Liquibase to stop or close its connections?
So far I have found no documentation on how to do this.
Thanks to the response of Slava I managed to fix the problem with the following datasource configuration class:
@Configuration
public class LiquibaseDataSourceConfiguration {

    private static final Logger LOG = LoggerFactory.getLogger(LiquibaseDataSourceConfiguration.class);

    @Autowired
    private LiquibaseDataSourceProperties liquibaseDataSourceProperties; // custom properties holder (user, password, url, driver)

    @LiquibaseDataSource
    @Bean
    public DataSource liquibaseDataSource() {
        DataSource ds = DataSourceBuilder.create()
                .username(liquibaseDataSourceProperties.getUser())
                .password(liquibaseDataSourceProperties.getPassword())
                .url(liquibaseDataSourceProperties.getUrl())
                .driverClassName(liquibaseDataSourceProperties.getDriver())
                .build();
        if (ds instanceof org.apache.tomcat.jdbc.pool.DataSource) {
            org.apache.tomcat.jdbc.pool.DataSource tomcatDs = (org.apache.tomcat.jdbc.pool.DataSource) ds;
            tomcatDs.setInitialSize(1);
            tomcatDs.setMaxActive(2);
            tomcatDs.setMaxAge(1000);
            tomcatDs.setMinIdle(0);
            tomcatDs.setMinEvictableIdleTimeMillis(60000);
        } else {
            // warnings or exceptions, whatever you prefer
        }
        LOG.info("Initialized a datasource for {}", liquibaseDataSourceProperties.getUrl());
        return ds;
    }
}
The documentation of these properties can be found on the Tomcat site: https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
initialSize: The initial number of connections that are created when the pool is started
maxActive: The maximum number of active connections that can be allocated from this pool at the same time
minIdle: The minimum number of established connections that should be kept in the pool at all times
maxAge: Time in milliseconds to keep this connection. When a connection is returned to the pool, the pool will check to see if the now - time-when-connected > maxAge has been reached, and if so, it closes the connection rather than returning it to the pool. The default value is 0, which implies that connections will be left open and no age check will be done upon returning the connection to the pool.
minEvictableIdleTimeMillis: The minimum amount of time an object may sit idle in the pool before it is eligible for eviction.
So it does not appear to be a connection leak; it's just that the default configuration of the datasource is not optimal for Liquibase when you use a dedicated datasource. I don't expect this to be a problem if the Liquibase datasource is your primary datasource.
Update: This has been fixed in 2.5.0-M2 and Liquibase now uses a SimpleDriverDataSource without a connection pool.
Original answer: This change to connection pool management was introduced in Spring Boot version 2.0.6.RELEASE and only takes effect if you use Spring Boot Actuator. There is an actuator endpoint (enabled by default) which lets you retrieve the change sets applied by Liquibase. For this to work, Liquibase keeps its database connections open. You can disable the endpoint with management.endpoint.liquibase.enabled=false, in which case the connection pool used by Liquibase will be shut down after the initial run.
GitHub issue related to this change: https://github.com/spring-projects/spring-boot/issues/13832
Spring Boot Actuator API documentation (see section 12, Liquibase): https://docs.spring.io/spring-boot/docs/2.0.6.RELEASE/actuator-api/html/
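In application.properties form, the toggle mentioned above is simply:
management.endpoint.liquibase.enabled=false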
I don't know why Liquibase doesn't close the connection; maybe it's a bug and you should create an issue for it.
To set the connection pool for Liquibase you have to create a custom data source and mark it with the @LiquibaseDataSource annotation.
Related issues provide more details:
Possibility to specify custom dataSource configuration for liquibase only
Add LiquibaseDataSource annotation