H2 in Oracle compatibility mode is validated as H2, not Oracle

In production I'm using Oracle and all my changelogs have been written with Oracle in mind.
In my development environment I'm trying to generate the changelogs on an H2 instance in Oracle compatibility mode.
This is to improve integration test speed.
My problem is that Liquibase is validating my changelogs against H2, not Oracle.
Is there a way of forcing Liquibase to validate against Oracle even though my db url looks like an H2 one?
My biggest headaches are regarding sequences and dropNotNullConstraint validations.
Liquibase version: 2.0.5 (I also tried with 3.1.1, same issue)
H2 connection url: jdbc:h2:tcp://localhost:9092/test;MODE=Oracle;AUTO_SERVER=TRUE;DB_CLOSE_DELAY=-1
I'm pretty sure this is a common scenario, so I guess I'm probably doing something wrong.
Any help would be greatly appreciated.

Since Liquibase is implemented in Java and relies on JDBC, I'll use Java for the explanation.
Liquibase has a list of implemented databases. It depends on how you call it from Java code, but let's say you either use liquibase.database.DatabaseFactory, extend it, or implement something similar. Usually your code would look something like this (example in Scala):
import java.sql.Connection

import liquibase.Liquibase
import liquibase.database.DatabaseFactory
import liquibase.database.jvm.JdbcConnection
import liquibase.resource.ClassLoaderResourceAccessor

// SchemaMigration and DbConnectionProvider are the application's own classes.
def createLiquibase(dbConnection: Connection, diffFilePath: String): Liquibase = {
  // Liquibase inspects the JDBC connection and picks the matching Database implementation.
  val database = DatabaseFactory.getInstance.findCorrectDatabaseImplementation(new JdbcConnection(dbConnection))
  val classLoader = classOf[SchemaMigration].getClassLoader
  val resourceAccessor = new ClassLoaderResourceAccessor(classLoader)
  new Liquibase(diffFilePath, resourceAccessor, database)
}

def updateDb(db: DbConnectionProvider, diffFilePath: String): Unit = {
  val dbConnection = db.getConnection
  val liquibase = createLiquibase(dbConnection, diffFilePath)
  try {
    liquibase.update(null)
  } finally {
    liquibase.forceReleaseLocks()
    dbConnection.rollback()
    dbConnection.close()
  }
}
Notice the part DatabaseFactory.getInstance.findCorrectDatabaseImplementation(new JdbcConnection(dbConnection)), where we pass in a java.sql.Connection and Liquibase finds the appropriate Database implementation for it. You can override findCorrectDatabaseImplementation or even create your own Database subclass altogether, whichever you prefer.
The method in DatabaseFactory is public Database findCorrectDatabaseImplementation(DatabaseConnection connection) throws DatabaseException. From there you can learn more about what the Database type is. You can inherit from the H2 or Oracle implementation and override some parts.
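For example, a minimal sketch (the class name and the URL check are mine, not part of Liquibase) of an OracleDatabase subclass that also claims H2 connections running in Oracle compatibility mode could look like this with Liquibase 3.x:

import liquibase.database.DatabaseConnection;
import liquibase.database.core.OracleDatabase;
import liquibase.exception.DatabaseException;

// Hypothetical helper, not a Liquibase class: treat H2 URLs in MODE=Oracle as Oracle.
public class OracleOnH2Database extends OracleDatabase {

    @Override
    public boolean isCorrectDatabaseImplementation(DatabaseConnection conn) throws DatabaseException {
        String url = conn.getURL();
        // Claim real Oracle connections as usual, plus H2 URLs that request Oracle compatibility mode.
        return super.isCorrectDatabaseImplementation(conn)
                || (url != null && url.startsWith("jdbc:h2:") && url.toUpperCase().contains("MODE=ORACLE"));
    }

    @Override
    public int getPriority() {
        // Outrank the built-in H2Database so findCorrectDatabaseImplementation picks this class.
        return super.getPriority() + 1;
    }
}

Register it before building the Liquibase instance, for example with DatabaseFactory.getInstance().register(new OracleOnH2Database()), and findCorrectDatabaseImplementation should then hand back the Oracle-flavoured implementation for your H2 URL. Whether every Oracle-specific statement that Liquibase then generates actually runs on H2 is a separate question.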
If you use the Liquibase command-line client, you could do what I described above, build a jar file or similar, and then run it from the command line, making sure your new classes are on the classpath.
Compatibility mode in H2 does not guarantee complete support for Oracle, Postgres, etc., so it's a somewhat dubious idea to test Oracle DML on it. It will probably work until you hit a case where it doesn't.

Related

Hibernate does not create tables?

I am working on a Spring + Hibernate based project. A project was given to me with a simple Spring Web Maven structure (Spring Tool Suite as the IDE).
I have successfully imported the project into my STS IDE and have also changed some of the Hibernate configuration properties so that the application can talk to my local PostgreSQL server.
The changes I have made are given below:
jdbc.driverClassName=org.postgresql.Driver
jdbc.dialect=org.hibernate.dialect.PostgreSQLDialect
jdbc.databaseurl=jdbc:postgresql://localhost:5432/schema
jdbc.username=username
jdbc.password=password
The hibernate.hbm2ddl.auto property is already set to update, so I didn't change it.
Then I simply deploy my project to the Pivotal server, Hibernate runs, and it creates around 36 tables inside my DB schema. Looks fine!
My problem: in my hibernate.cfg.xml file a total of 100 Java model classes are mapped, and they all have the @Entity annotation. So why is Hibernate not creating all the remaining tables?
For certain reasons I can't post the code of the model classes here. I have searched a lot for this problem and applied many different solutions, but none of them worked. Could someone please let me know what could cause Hibernate to behave like this?
One of my model classes whose table is not created in my DB:
@Entity
@Table(name = "fare_master")
public class FareMaster {

    @Id
    @Column(name = "fare_id")
    @GeneratedValue
    private int fareId;

    @Column(name = "base_fare_amount")
    private double baseFareAmount;

    public int getFareId() {
        return fareId;
    }

    public void setFareId(int fareId) {
        this.fareId = fareId;
    }

    public double getBaseFareAmount() {
        return baseFareAmount;
    }

    public void setBaseFareAmount(double baseFareAmount) {
        this.baseFareAmount = baseFareAmount;
    }
}
And the mapping of the class is as follows:
<mapping class="com.mypackage.model.FareMaster" />
Change the hibernate.hbm2ddl.auto property to create-drop if you want to create tables; setting it to update will just allow you to update existing tables in your DB.
And check your log file to catch errors.
After a lot of effort, I came to the conclusion that for this problem, or other problems of a similar type, we should always follow these basic rules:
1.) First, be sure about your problem, i.e. the exact issue causing this type of error.
2.) To find out the exact issue, use a logger in your application; you will definitely save a lot of time.
In my case this was happening because I had switched my DB from MySQL to PostgreSQL, and some of the syntax in columnDefinition (a parameter of the @Column annotation) was not compatible with my new DB. When I switched back to MySQL, everything worked fine.
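As an illustration (this entity is made up for the example, not taken from the asker's project), a columnDefinition that is valid MySQL DDL but makes PostgreSQL's schema export fail could look like this:

import java.util.Date;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

// Made-up example entity, not from the asker's project.
@Entity
public class AuditEntry {

    @Id
    private int id;

    // DATETIME is valid MySQL DDL, but PostgreSQL has no DATETIME type (it uses TIMESTAMP),
    // so the generated CREATE TABLE statement fails and this table never gets created.
    @Column(columnDefinition = "DATETIME")
    @Temporal(TemporalType.TIMESTAMP)
    private Date createdAt;
}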
If you have a schema.sql file in your project, Hibernate does not create the tables.
Please remove it and try again.

Spring Data Cassandra and Map of Maps

I have a Cassandra table defined like so:
create table foo (id int primary key, mapofmaps map<text, frozen<map<text, int>>>);
Into which I place some data:
insert into foo (id, mapofmaps) values (1, {'pets': {'dog': 42, 'cat': 7}, 'foods': {'taco': 555, 'cake': 721}});
I am then trying to use spring-data-cassandra to interact with it. I have a POJO:
@Table
public class Foo {

    @PrimaryKey
    private Integer id;

    @Column("mapofmaps")
    private Map<String, Map<String, Integer>> mapOfMaps;

    // getters/setters omitted for brevity
}
And a Repository:
public interface FooRepository extends CassandraRepository<Foo> {
}
And then the following code to try and retrieve all the records as a simple test:
public Iterable<Foo> getAllFoos() {
    return fooRepository.findAll();
}
Unfortunately this throws an exception. Less funky column types work OK, e.g. List<String> and non-nested Map columns work fine, but this map of maps is not working for me.
I'm wondering whether there is no support for this in spring-data-cassandra (though the exception appears to be in the DataStax code) or whether I just need to do something different with the POJO.
The exception thrown is as follows:
Caused by: java.lang.NullPointerException: null
at com.datastax.driver.core.TypeCodec$MapCodec.deserialize(TypeCodec.java:821)
at com.datastax.driver.core.TypeCodec$MapCodec.deserialize(TypeCodec.java:775)
at com.datastax.driver.core.ArrayBackedRow.getMap(ArrayBackedRow.java:299)
at org.springframework.data.cassandra.convert.ColumnReader.get(ColumnReader.java:53)
I don't know about the Spring Cassandra framework, but you can access the data using the DataStax driver directly.
https://github.com/datastax/java-driver
I did some digging, and the Spring framework uses the Java driver, so there must be a Cluster object already instantiated that you can leverage if the map functionality you need is not exposed by spring-cassandra. The functionality could probably be added.
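A rough sketch of that direct access with the 2.x Java driver (the contact point and keyspace name are placeholders, and the nested value comes back as a raw Map):

import java.util.Map;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class FooDirectReader {

    public static void main(String[] args) {
        // Contact point and keyspace are assumptions; adjust them for your cluster.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("mykeyspace")) {
            for (Row row : session.execute("SELECT id, mapofmaps FROM foo")) {
                // The nested value type is a frozen map, so the driver returns raw Map instances.
                Map<String, Map> mapOfMaps = row.getMap("mapofmaps", String.class, Map.class);
                System.out.println(row.getInt("id") + " -> " + mapOfMaps);
            }
        }
    }
}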
OK, what @phact said is not the answer I was looking for, but it did set me on the path to figuring things out.
As per my own comment on my original post, it appeared that this was a DataStax driver issue, not a spring-data-cassandra issue. That was borne out when I wrote a small test harness to query the problem table with just the DataStax client. I picked v2.1.7.1 of cassandra-driver-core and was able to query the table with the map of maps fine.
Looking at the version of the driver that v1.3.0 of spring-data-cassandra brings in, it's older: v2.0.4. A bit of Maven dependency malarkey later, I had my spring-data-cassandra project using a newer DataStax driver and everything works fine.

Why do I get errors in Datanucleus when spatial extensions are added

I am attempting to use DataNucleus with the datanucleus-spatial plugin, using annotations for my mappings. I am trying both PostGIS and Oracle Spatial, working from the DataNucleus tutorials, and what I'm experiencing doesn't make any sense. My development environment is NetBeans 7.x (I've tried 7.0, 7.2, and 7.3) with Maven 2.2.1. Using the Position class in DataNucleus's tutorial found at http://www.datanucleus.org/products/datanucleus/jdo/guides/spatial_tutorial.html, I find that if I do not include the datanucleus-spatial plugin in my Maven dependencies, it connects to PostGIS or Oracle with no problem and commits the data, the spatial data being stored as a blob (I expected this since no spatial plugin is present). Using PostGIS, the tutorial works just fine.
I then modify the Position class by replacing the org.postgis.Point class with oracle.spatial.geometry.JGeometry and point my connection at an Oracle server. Without spatial, the point is again stored as a blob. With spatial, I get the following exception:
java.lang.ClassCastException: org.datanucleus.store.rdbms.datasource.dbcp.PoolingDataSource$PoolGuardConnectionWrapper cannot be cast to oracle.jdbc.OracleConnection
The modified class looks like the following:
@PersistenceCapable
public class Position {

    @PrimaryKey
    private String name;

    @Persistent
    private JGeometry point;

    public Position(String name, double x, double y) {
        this(name, JGeometry.createPoint(new double[]{x, y}, 2, 4326));
    }

    public Position(String name, JGeometry point) {
        this.name = name;
        this.point = point;
    }

    public String getName() {
        return name;
    }

    public JGeometry getPoint() {
        return point;
    }

    @Override
    public String toString() {
        return "[name] " + name + " [point] " + point;
    }
}
Is there something I'm missing in the fabulous world of DataNucleus Spatial? Why does it fail whenever spatial is added? Do I need the JDO XML file even though I'm using annotations? Are there annotations not shown in the tutorial? If the JDO XML file shown in the tutorial is required and is the reason I'm getting these errors, where do I put it? I'm currently 3 weeks behind on my project and am about to switch to Hibernate if this is not fixed soon.
You don't present a stack trace, so it's impossible to tell much beyond the fact that DBCP is causing the problem, and you could easily enough use any of the other connection pools that are supported. If some Oracle Connection object cannot be cast to some other JDBC connection, then maybe the Oracle JDBC driver targets a different JDBC version than this version of DBCP does (and some JDBC versions break backwards compatibility). Nothing in the post confirms or rules that out (the log would tell you some of that). As already said, there are ample other connection pools available.
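For example, a sketch of switching the pool through DataNucleus persistence properties (the connection details are placeholders, and the pool name must match a pool plugin you actually have on the classpath):

import java.util.Properties;

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

public class PmfBootstrap {

    public static PersistenceManagerFactory create() {
        Properties props = new Properties();
        // Connection details are placeholders; use your own URL and credentials.
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
        props.setProperty("javax.jdo.option.ConnectionDriverName", "oracle.jdbc.OracleDriver");
        props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:oracle:thin:@localhost:1521:XE");
        props.setProperty("javax.jdo.option.ConnectionUserName", "scott");
        props.setProperty("javax.jdo.option.ConnectionPassword", "tiger");
        // Swap the pool implementation away from DBCP; the value must name a pool
        // whose plugin jar is on the classpath (e.g. BoneCP or C3P0).
        props.setProperty("datanucleus.connectionPoolingType", "BoneCP");
        return JDOHelper.getPersistenceManagerFactory(props);
    }
}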
The DataNucleus Spatial tutorial is self-contained, has Download and GitHub links, and defines where a JDO XML file would go if you were using one. The tutorial, as provided, works.
Finally, this may be worth a read ...
To avoid the "cannot be cast to oracle.jdbc.OracleConnection" error, I suggest you use version 3.2.7 of datanucleus-geospatial, which can be found in the central Maven repository.

Spring Database Integration Test, when to flush or?

I am fairly new to Spring and am doing some integration tests.
I am using Hibernate, MySQL, and Spring Data JPA.
I am using transaction support and everything gets rolled back at the end of each test.
For example:
@Test(expected = DataIntegrityViolationException.class)
public void findAndDelete() {
    UUID uuid = UUID.fromString(TESTID);
    User user = iUserService.findOne(uuid);
    iUserService.delete(user);
    iUserService.flush();
    assertNull(iUserService.findOne(uuid));
}
In the above code I call iUserService.flush() so that the SQL gets sent to the DB, and the expected DataIntegrityViolationException occurs because there is a foreign key from User to another table (cascading is not allowed: None). All good so far.
Now, if I remove the iUserService.flush(),
then the expected exception does not occur because the SQL does not get sent to the DB.
I tried adding the flush() to a teardown @After method, but that didn't work, as the test does not see the exception outside of the test method.
Is there any way to avoid calling the flush within the test methods?
It would be preferable if the developers on my team did not have to use the flush method at all in their testing code.
Edit:
I tried adding the following:
@Before
public void before() {
    Session session = entityManagerFactory.createEntityManager().unwrap(Session.class);
    session.setFlushMode(FlushMode.ALWAYS);
}
but it does seem to flush the SQL before each query.
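For reference, a variant that sets the flush mode on the transaction-bound EntityManager injected into the test class, rather than on a freshly created one (the @PersistenceContext field is an assumption about the test setup):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.hibernate.FlushMode;
import org.hibernate.Session;
import org.junit.Before;

public class UserServiceIT {

    // Assumption: the transactional EntityManager can be injected into the test class.
    @PersistenceContext
    private EntityManager entityManager;

    @Before
    public void forceFlushBeforeQueries() {
        // Unwrap the Hibernate session backing the current persistence context
        // and make it flush before every query, not just at commit time.
        entityManager.unwrap(Session.class).setFlushMode(FlushMode.ALWAYS);
    }
}

With Hibernate, FlushMode.ALWAYS makes the session flush before every query, so pending statements reach the database without an explicit flush() call in the test body.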
In my humble opinion, it's better that the developers on your team know what they are doing.
That includes the things that are configured by default and the consequences of that.
Please take a look at why you need to avoid false positives when testing ORM code.

For test suite, replacing MySQL with in-memory HSQLDB is not working

We have an application where we use Struts, Spring, and Hibernate.
Previously, we were using a MySQL database for running test suites with the TestNG framework.
Now we want to use HSQLDB's in-memory database.
We have made all the required code changes to use HSQLDB in in-memory mode.
For example:
Datasource url = jdbc:hsql:mem:TEST_DB
Username = sa
Password =
Driver = org.hsqldb.jdbcDriver
Hibernate dialect= org.hibernate.dialect.HSQLDialect
hibernate.hbm2ddl.auto = create
@Autowired
private DriverManagerDataSource dataSource;

private static Connection dbConnection;
private static IDatabaseConnection dbUnitConnection;
private static IDataSet dataSet;

private MockeryHelper mockeryHelper;

public void startUp() throws Exception {
    mockeryHelper = new MockeryHelper();
    if (dbConnection == null) {
        dbConnection = dataSource.getConnection();
        dbUnitConnection = new DatabaseConnection(dbConnection);
        dbUnitConnection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new HsqldbDataTypeFactory());
        dataSet = new XmlDataSet(new FileInputStream("src/test/resources/test-data.xml"));
    }
    DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);
}
We have made the required code changes to our base class, where we do startup and teardown of the database before and after each test.
We use a test-data.xml file from which we insert test data into the created database using the TestNG framework. Now my questions are:
1. When I run a test case, the database gets created and the data is also inserted correctly. However, my DAOs return empty object lists when I try to retrieve the data from Struts interceptors.
2. We use HSQLDB version 1.8.0.10. The same configuration is used for another project. In that project, most of the test cases run successfully, but for some of them the sort order of the data is incorrect.
We discovered that HSQLDB sorting is case sensitive, and there is a property, sql.ignore_case, which when set to true makes sorting case insensitive. But this is not working for us.
Can someone please help with this?
Thanks in advance.
I'm afraid sql.ignore_case is not available in your HSQLDB version, as it's not even in the latest stable release (2.2.9), contrary to what the docs say. However, the latest snapshots, as stated in this thread, do include it. I'm not using 1.8 myself, but executing SET IGNORECASE TRUE before any table creation may work for you; it does in 2.2.9. If you really need 1.8, a third option may be to pick the relevant code from the latest source, add it to the 1.8 source, and recompile, though I have no idea how hard or easy that could be.
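A minimal sketch of issuing that statement over plain JDBC right after the in-memory database comes up, before Hibernate's hbm2ddl creates any tables (the URL and credentials are placeholders modelled on the question's configuration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HsqldbCaseSetup {

    public static void applyIgnoreCase() throws Exception {
        // Must run before any CREATE TABLE so that text columns sort case-insensitively.
        // Adjust the in-memory URL and credentials to match your datasource configuration.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:TEST_DB", "sa", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("SET IGNORECASE TRUE");
        }
    }
}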

Resources