For test suite, replacing MySQL with in-memory HSQLDB is not working - Spring

We have an application where we use Struts, Spring, and Hibernate.
Previously, we ran our test suites against a MySQL database using the TestNG framework.
Now we want to use HSQLDB's "in-memory" database.
We have made all the required code changes to run HSQLDB in "in-memory" mode.
For example:
Datasource URL = jdbc:hsqldb:mem:TEST_DB
Username = sa
Password =
Driver = org.hsqldb.jdbcDriver
Hibernate dialect = org.hibernate.dialect.HSQLDialect
hibernate.hbm2ddl.auto = create
@Autowired
private DriverManagerDataSource dataSource;

private static Connection dbConnection;
private static IDatabaseConnection dbUnitConnection;
private static IDataSet dataSet;
private MockeryHelper mockeryHelper;

public void startUp() throws Exception {
    mockeryHelper = new MockeryHelper();
    if (dbConnection == null) {
        dbConnection = dataSource.getConnection();
        dbUnitConnection = new DatabaseConnection(dbConnection);
        dbUnitConnection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new HsqldbDataTypeFactory());
        dataSet = new XmlDataSet(new FileInputStream("src/test/resources/test-data.xml"));
    }
    DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);
}
We have made the required changes to our base class, which sets up and tears down the database before and after each test.
We insert test data into the created database from a test-data.xml file using the TestNG framework. Now my questions are:
1. When I run a test case, the database gets created and the data is inserted correctly. However, my DAOs return empty object lists when I try to retrieve the data from Struts interceptors.
2. We use HSQLDB version 1.8.0.10. The same configuration is used in another project. In that project, most of the test cases run successfully, but for some of them the sort order of the data is incorrect.
We discovered that HSQLDB sorts case-sensitively, and that there is a property, sql.ignore_case, which makes sorting case-insensitive when set to true. But this is not working for us.
Can someone please help with this?
Thanks in advance.

I'm afraid sql.ignore_case is not available in your HSQLDB version; it's not even in the latest stable release (2.2.9), contrary to what the docs say. However, the latest snapshots, as stated in this thread, do include it. I'm not using 1.8 myself, but executing SET IGNORECASE TRUE before any table creation may work for you; it does in 2.2.9. If you really need 1.8, a third option may be to pick the relevant code from the latest source, add it to the 1.8 source, and recompile, though I have no idea how hard or easy that would be.
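To see what that setting changes, here is a plain-Java sketch of the two sort orders (this is not HSQLDB code, just an illustration): default case-sensitive string ordering puts uppercase letters before lowercase, while SET IGNORECASE TRUE corresponds to the case-insensitive comparison.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortOrderDemo {
    // Default String ordering: uppercase letters sort before lowercase.
    static List<String> caseSensitive(List<String> in) {
        List<String> out = new ArrayList<>(in);
        Collections.sort(out);
        return out;
    }

    // What SET IGNORECASE TRUE aims for: case is ignored when comparing.
    static List<String> caseInsensitive(List<String> in) {
        List<String> out = new ArrayList<>(in);
        out.sort(String.CASE_INSENSITIVE_ORDER);
        return out;
    }

    public static void main(String[] args) {
        List<String> names = List.of("apple", "Banana", "cherry");
        System.out.println(caseSensitive(names));   // [Banana, apple, cherry]
        System.out.println(caseInsensitive(names)); // [apple, Banana, cherry]
    }
}
```

If your ORDER BY results put all capitalized values first, you are seeing the first ordering.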

Related

How to inspect database when running with spring-db-unit

We run our tests with spring-db-unit, the forked and maintained one.
The tests run just fine and can access and modify the data. But when I was debugging some tests, I realized that the database stays empty the whole time! My assumption is that the runner creates a transaction and rolls it back at the end of the test instead of committing it.
How can I inspect the data or persist it in the database at the end (or anytime, really!) of a test?
The setup:
Our config follows the suggested setup from the GitHub readme:
@TestExecutionListeners(
    inheritListeners = false,
    value = {
        DependencyInjectionTestExecutionListener.class,
        DirtiesContextTestExecutionListener.class,
        TransactionalTestExecutionListener.class,
        DbUnitTestExecutionListener.class,
        WithSecurityContextTestExecutionListener.class
    })
@DatabaseSetup(value = "/testdata/dbunit/base_test_dataset.xml", type = DatabaseOperation.CLEAN_INSERT)
@SpringApplicationConfiguration(classes = {
    DBUnitTestConfig.class,
})
@DbUnitConfiguration(databaseConnection = "dbUnitConnection")
public class ReviewServiceTest {

    @Test
    @WithUserDetails(TEST_COCKPIT_1)
    public void getUserVersionReviews() throws Exception {
        // when at a debug point here, the DB is empty.
    }
}
The database is PostgreSQL 11.4. The DBUnitTestConfig class above just configures the PostgresqlDataTypeFactory of the DatabaseConfigBean.
Switching to TransactionDbUnitTestExecutionListener instead of TransactionalTestExecutionListener and DbUnitTestExecutionListener didn't help.
I tried DatabaseOperation.INSERT instead of the default one, and moved the @DatabaseSetup annotation onto test methods and setUp methods, without success.
We are using the following dependencies. They're not the latest, but this is the furthest I was able to upgrade for now:
com.github.ppodgorsek:spring-test-dbunit-core:5.2.0
org.dbunit:dbunit:2.6.0
junit:junit:4.12
org.springframework.boot:spring-boot-starter-parent:1.3.8.RELEASE
OpenJDK 1.8
What confuses me is that I couldn't find anyone complaining about this, or any configuration options for it, so I must be missing something obvious!
Rolling back transactions isn't spring-test-dbunit's fault. It works with and without transactions. The rollback is the default behavior of the TransactionalTestExecutionListener.
Citing the javadoc of the class:
By default, test transactions will be automatically rolled back after completion of the test; however, transactional commit and rollback behavior can be configured declaratively via the @Rollback annotation at the class level and at the method level.
The solution is to put @org.springframework.test.annotation.Rollback(false) on the test class or method.
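For example, applied to the test class from the question (a sketch; note that committed rows will then persist between test runs):

```java
import org.springframework.test.annotation.Rollback;

@Rollback(false) // commit instead of rolling back after each test
public class ReviewServiceTest {
    // ... listeners, @DatabaseSetup, and tests as before;
    // the inserted data now stays visible in the database.
}
```

Since the data is committed, something has to clean the tables again; the CLEAN_INSERT setup from the question will do that on the next run.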

spring boot application: jpa query returning old data

We have created a Spring Boot project using version 1.3.5. Our application interacts with a MySQL database.
We have created a set of JPA repositories in which we use findAll, findOne, and other custom query methods.
We are facing an issue that occurs randomly. The steps to reproduce it are:
1. Fire a read query on the DB using the Spring Boot application.
2. Manually change the data of the records returned by the above read query, using the MySQL console.
3. Fire the same read query again using the application.
After step 3 we should have received the modified results of step 2, but what we got was the data before the modification.
If we fire the read query again using the application, it then gives us the correct values.
This issue occurs randomly. We are not using any kind of cache in our application.
While debugging, I found out that the JPA repository code is in fact calling MySQL, and it even fetches the latest result, but when the call returns to our application service, surprisingly the return value has the old data.
Please help us identify the possible cause.
JPA/Datasource config:
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/dbname?autoReconnect=true
spring.datasource.username=root
spring.datasource.password=xxx
spring.jpa.database-platform=org.hibernate.dialect.MySQL5Dialect
spring.datasource.max-wait=15000
spring.datasource.max-active=100
spring.datasource.max-idle=20
spring.datasource.test-on-borrow=true
spring.datasource.remove-abandoned=true
spring.datasource.remove-abandoned-timeout=300
spring.datasource.default-auto-commit=false
spring.datasource.validation-query=SELECT 1
spring.datasource.validation-interval=30000
hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
hibernate.show_sql=false
hibernate.hbm2ddl.auto=update
Service Method:
@Override
@Transactional
public List<Event> getAllEvent() {
    return eventRepository.findAll();
}
JPARepository:
public interface EventRepository extends JpaRepository<Event, Long> {
    List<Event> findAll();
}
Try @Cacheable(false) on the entity, for example:
@Entity
@Table(name = "table_name")
@Cacheable(false)
public class EntityName {
    // ...
}
This might be because of "dirty reads". I faced a similar issue; try using transactional locks, especially "repeatable reads", which could probably avoid this problem. Correct me if I'm wrong.
You can use
entityManager.refresh(entity)
to get the latest values of the entity.
You can use:
@Autowired
private EntityManager entityManager;
then, before querying the same entity another time:
entityManager.clear();
and then call the query.
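The refresh/clear advice above targets the persistence context (Hibernate's first-level cache), which hands back the already-managed instance instead of the freshly fetched row. As a toy illustration of that mechanism (plain Java, not Hibernate code; the class and method names are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a persistence context, keyed by entity id.
class FirstLevelCache {
    private final Map<Long, String> managed = new HashMap<>();

    // If the id is already managed, the cached instance wins,
    // even when the "database row" passed in has newer data.
    String load(Long id, String rowFromDb) {
        return managed.computeIfAbsent(id, k -> rowFromDb);
    }

    // Analogous to entityManager.clear(): drop all managed instances.
    void clear() {
        managed.clear();
    }
}

public class StaleReadDemo {
    public static void main(String[] args) {
        FirstLevelCache ctx = new FirstLevelCache();
        System.out.println(ctx.load(1L, "old")); // old
        // The row changes in the database...
        System.out.println(ctx.load(1L, "new")); // still "old": stale!
        ctx.clear();
        System.out.println(ctx.load(1L, "new")); // new
    }
}
```

This is why the query really does hit MySQL and fetch fresh rows, yet the service still sees the old values.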

Hibernate does not create table?

I am working on a Spring + Hibernate based project. A project was given to me with a simple Spring Web Maven structure (Spring Tool Suite as the IDE).
I have successfully imported the project into my STS IDE and changed some of the Hibernate configuration properties so that the application can talk to my local PostgreSQL server.
The changes I made are as follows:
jdbc.driverClassName=org.postgresql.Driver
jdbc.dialect=org.hibernate.dialect.PostgreSQLDialect
jdbc.databaseurl=jdbc:postgresql://localhost:5432/schema
jdbc.username=username
jdbc.password=password
The hibernate.hbm2ddl.auto property is already set to update, so I didn't change that.
Then I simply deploy my project to the Pivotal server, Hibernate runs and creates around 36 tables inside my DB schema. Looks fine!
My problem: in my hibernate.cfg.xml file, a total of 100 Java model classes are mapped, and they all have the @Entity annotation. So why is Hibernate not creating all the remaining tables?
For certain reasons I can't post the code of any of the model classes here. I have searched a lot about this problem and applied many different solutions, but they didn't work. Could someone please let me know what could make Hibernate behave like this?
One of my model classes whose table is not created in my DB:
@Entity
@Table(name = "fare_master")
public class FareMaster {

    @Id
    @Column(name = "fare_id")
    @GeneratedValue
    private int fareId;

    @Column(name = "base_fare_amount")
    private double baseFareAmount;

    public int getFareId() {
        return fareId;
    }

    public void setFareId(int fareId) {
        this.fareId = fareId;
    }

    public double getBaseFareAmount() {
        return baseFareAmount;
    }

    public void setBaseFareAmount(double baseFareAmount) {
        this.baseFareAmount = baseFareAmount;
    }
}
And the mapping of the class is as follows:
<mapping class="com.mypackage.model.FareMaster" />
Change the hibernate.hbm2ddl.auto property to create-drop if you want to create tables; setting it to update will just allow you to update existing tables in your DB.
And check your log file to catch errors.
After a lot of effort, I came to the conclusion that for this problem, and similar ones, we should always follow two basic rules:
1. First, be sure about your problem, i.e. find the exact issue causing this type of error.
2. To find the exact issue, use a logger in your application; it will definitely save you a lot of time.
In my case this was happening because I had switched my DB from MySQL to PostgreSQL, and some of the syntax in columnDefinition (a parameter of the @Column annotation) was not compatible with my new DB. When I switched back to MySQL, everything worked fine.
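As a hypothetical illustration of that kind of incompatibility (the column name here is made up): datetime is valid column DDL in MySQL but not in PostgreSQL, where the type is timestamp, so schema export fails for the entity and the error only shows up in the log.

```java
// Accepted when the dialect is MySQL, but PostgreSQL rejects "datetime",
// so the table for this entity never gets created.
@Column(name = "created_at", columnDefinition = "datetime")
private java.util.Date createdAt;
```

Dropping the columnDefinition and letting the dialect choose the type keeps the mapping portable across both databases.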
If you have a schema.sql file in your project, Hibernate does not create tables.
Remove it and try again.

H2 in Oracle compatibility mode is validated as H2, not Oracle

In production I'm using Oracle and all my changelogs have been written with Oracle in mind.
In my development environment I'm trying to generate the changelogs on an H2 instance in Oracle compatibility mode.
This is to improve integration test speed.
My problem is that Liquibase is validating my changelogs against H2, not Oracle.
Is there a way of forcing Liquibase to validate against Oracle even though my db url looks like an H2 one?
My biggest headaches are regarding sequences and dropNotNullConstraint validations.
Liquibase version: 2.0.5 (I also tried with 3.1.1, same issue)
H2 connection url: jdbc:h2:tcp://localhost:9092/test;MODE=Oracle;AUTO_SERVER=TRUE;DB_CLOSE_DELAY=-1
I'm pretty sure this is a common scenario, so I guess I'm probably doing something wrong?
Any help would be greatly appreciated.
Since Liquibase is implemented in Java and relies on JDBC, I'll use Java for the explanation.
Liquibase has a list of implemented databases. It depends on how you call it from Java code, but let's say you either use liquibase.database.DatabaseFactory, extend it, or implement something similar. Usually your code would look something like this (example in Scala):
def createLiquibase(dbConnection: Connection, diffFilePath: String): Liquibase = {
  val database = DatabaseFactory.getInstance.findCorrectDatabaseImplementation(new JdbcConnection(dbConnection))
  val classLoader = classOf[SchemaMigration].getClassLoader
  val resourceAccessor = new ClassLoaderResourceAccessor(classLoader)
  new Liquibase(diffFilePath, resourceAccessor, database)
}

def updateDb(db: DbConnectionProvider, diffFilePath: String): Unit = {
  val dbConnection = db.getConnection
  val liquibase = createLiquibase(dbConnection, diffFilePath)
  try {
    liquibase.update(null)
  } catch {
    case e: Throwable => throw e
  } finally {
    liquibase.forceReleaseLocks()
    dbConnection.rollback()
    dbConnection.close()
  }
}
Notice the part DatabaseFactory.getInstance.findCorrectDatabaseImplementation(new JdbcConnection(dbConnection)), where we pass in a java.sql.Connection and Liquibase finds the appropriate Database implementation for it. You can override findCorrectDatabaseImplementation or even create your own Database subclass altogether, whatever you prefer.
The method in DatabaseFactory is public Database findCorrectDatabaseImplementation(DatabaseConnection connection) throws DatabaseException. From there you can learn more about what the Database type is. You can inherit from H2 or Oracle and override some parts.
If you use the Liquibase command-line client, you could do what I described above, build a jar file, and then run it from the command line, making sure your new classes are on the classpath.
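A minimal sketch of that idea in Java (assumptions: Liquibase's OracleDatabase subclassing and DatabaseFactory registration work as in the versions I've seen; the URL check is made up for this example, so verify against your 2.0.5/3.1.1 API before relying on it):

```java
import liquibase.database.DatabaseConnection;
import liquibase.database.DatabaseFactory;
import liquibase.database.core.OracleDatabase;

// Claims Oracle-mode H2 URLs, so Liquibase validates them as Oracle.
public class OracleOnH2Database extends OracleDatabase {

    @Override
    public boolean isCorrectDatabaseImplementation(DatabaseConnection conn) {
        String url = conn.getURL();
        return url != null && url.startsWith("jdbc:h2:") && url.contains("MODE=Oracle");
    }

    @Override
    public int getPriority() {
        // Outrank the built-in H2 implementation during lookup.
        return PRIORITY_DATABASE + 1;
    }
}

// Register it before creating the Liquibase instance:
// DatabaseFactory.getInstance().register(new OracleOnH2Database());
```

With this registered, findCorrectDatabaseImplementation picks the Oracle-flavored class for your H2 connection instead of the H2 one.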
Compatibility mode in H2 does not guarantee complete support of Oracle, Postgres, etc., so it's a somewhat dubious idea to test Oracle DML on it. It will probably work right up until you hit a case where it doesn't.

Spring Database Integration Test, when to flush or?

I am fairly new to Spring and am doing some integration tests, using Hibernate, MySQL, and Spring Data JPA.
I am using transaction support, and everything gets rolled back at the end of each test.
For example:
@Test(expected = DataIntegrityViolationException.class)
public void findAndDelete() {
    UUID uuid = UUID.fromString(TESTID);
    User user = iUserService.findOne(uuid);
    iUserService.delete(user);
    iUserService.flush();
    assertNull(iUserService.findOne(uuid));
}
In the above code, I call iUserService.flush() so that the SQL gets sent to the DB, and the expected DataIntegrityViolationException occurs because there is a foreign key from User to another table (cascade is not allowed: None). All good so far.
Now, if I remove the iUserService.flush() call, the expected exception does not occur because the SQL does not get sent to the DB.
I tried adding the flush() to a teardown @After method, but that didn't work, as the test does not see the exception outside of the test method.
Is there any way to avoid calling flush within the test methods?
It would be preferable if the developers on my team did not have to use the flush method at all in their testing code.
Edit:
I tried adding the following
@Before
public void before() {
    Session session = entityManagerFactory.createEntityManager().unwrap(Session.class);
    session.setFlushMode(FlushMode.ALWAYS);
}
but it doesn't seem to flush the SQL before each query.
In my humble opinion, it's better that the developers on your team know what they are doing, including what is configured by default and the consequences of that.
Please take a look at why you need to avoid false positives when testing ORM code.
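If you still want to centralize the flushing, one option (a sketch; flushAndClear is a made-up helper name, and it assumes an injected JPA EntityManager) is a protected helper in the shared test base class, so test authors call one method at the point where they expect the violation instead of remembering the flush semantics:

```java
// Hypothetical helper in a common test base class.
// Forces pending SQL to the DB and detaches managed entities, so
// constraint violations surface at this call, inside the test method.
protected void flushAndClear() {
    entityManager.flush();
    entityManager.clear();
}
```

The flush still has to happen inside the test method for expected=... to catch the exception, but at least the intent is named in one place.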
