Hibernate bug using Oracle?

I have a problem: I use a property in persistence.xml which forces Hibernate to look only for tables in the given schema.
<property name="hibernate.default_schema" value="FOO"/>
Because we are now using 4 different schemas, the current solution is to generate 4 WAR files, each with a modified persistence.xml.
That's not very elegant.
Does anybody know how I can configure the schema with a property or by manipulating the JDBC connection string?
I'm using Oracle 10g, patch 10_2_3.
Thanks a lot.

You could create four different users on the Oracle database, one for each of the four applications; the JDBC connection would then include the user.
Then, for each user, you can create synonyms and grant permissions on the tables.
E.g.
create or replace synonym USER1.tablename FOR SCHEMA1.tablename;
create or replace synonym USER2.tablename FOR SCHEMA1.tablename;
create or replace synonym USER3.tablename FOR SCHEMA2.tablename;
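The synonyms by themselves don't grant access, so (assuming the usual DML is needed) you would also grant privileges on the underlying tables along these lines:
grant select, insert, update, delete on SCHEMA1.tablename to USER1;
grant select, insert, update, delete on SCHEMA1.tablename to USER2;
grant select, insert, update, delete on SCHEMA2.tablename to USER3;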
And when you are accessing the tables from Hibernate, just leave the schema off. When logged in as USER1, it'll use SCHEMA1, etc.
That way you don't have to create four different WAR files with four different persistence.xml files. Just deploy the app four times with different JDBC connections.
Hope that helps.

If you don't want to generate four different WARs then put this property in a hibernate.properties file and put that file on the class path (but outside the webapp) for each webapp.
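For example, each deployment's external classpath location could hold a hibernate.properties containing just the schema for that deployment (the value is illustrative):
hibernate.default_schema=FOO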

See this - https://www.hibernate.org/429.html

I created a method called deduceSchema that I run when I'm setting up the SessionFactory. It opens a JDBC connection using the data source (because you don't have a Hibernate session yet) and queries "select user from dual" to get the logged-in user. This will be accurate if the user you log in as also owns the tables. If not, I use a JNDI environment variable to override it.
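Roughly, the deduceSchema part looks like this (a simplified sketch using javax.sql.DataSource and java.sql.*; the JNDI override and error handling are left out):
private String deduceSchema(DataSource dataSource) throws SQLException {
    // The logged-in user doubles as the schema name when that user owns the tables.
    try (Connection connection = dataSource.getConnection();
         Statement statement = connection.createStatement();
         ResultSet resultSet = statement.executeQuery("select user from dual")) {
        resultSet.next();
        return resultSet.getString(1);
    }
}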
Once I have the schema, I modify the Hibernate configuration to set it for each table, although this is only necessary if the logged-in user is different from the schema:
// Point every mapped table at the deduced schema.
for (Iterator iter = configuration.getTableMappings(); iter.hasNext();) {
    Table table = (Table) iter.next();
    table.setSchema(schema);
}

Related

Is it possible to use Slick 3 for accessing different schemas within the same Database?

For a multi-tenant application I need to create, I want to evaluate how convenient Slick is for creating queries against different Postgres schemas (not to be confused with table schemas).
I'm having a hard time finding out how to configure TableQuery to dynamically use the schema provided by the user. TableQuery[Users].result should return different datasets depending on whether I'm querying tenant A or tenant B.
Is it possible with current Slick versions?
TableQuery itself does not need to be configured, as its methods only return queries and actions. Actions are run by a DatabaseDef instance, and that is what needs to be configured to access different schemas/databases/etc. The official Slick documentation describes a simple way to create a DatabaseDef instance, which by default uses the Typesafe Config library:
val db = Database.forConfig("mydb")
where "mydb" specifies a key in a property file Typesafe Config is looking at. You can create and manipulate Config instances programmatically as well, and create db instances from those. I suspect you will have to do something along the lines of creating a new Config instance (there is the convenient withValue() method to copy a Config and replace a config value at the specified key) and use that to create a new db instance for each new schema you are interested in querying.

Embedded H2 Database for dynamic files

In our application, we need to load large CSV files and fetch some data out of them, for example getting the distinct values from a CSV file. For this, we decided to go with an in-memory DB like H2, as there is no need to store the data in persistent storage.
However, the files are so dynamic that the columns may not be the same. I need to load the file into the H2 database as a table that is temporary for that session.
The tech stack is Spring Boot and H2.
The examples I see on forums use a standard entity that knows what fields the table has. However, in my case the table columns will be dynamic.
I tried the below in spring boot
public interface ImportCSVRepository extends JpaRepository<Object, String>
with
@Query(value = "CREATE TABLE TEST AS SELECT * FROM CSVREAD('test.csv');", nativeQuery = true)
But this gives an unmanaged entity error. I understand why the error is thrown; however, I am not sure how to achieve this. Also, please clarify whether I should use Spring Batch.
You can use JdbcTemplate to manually create tables and query/update the data in them.
An example of how to create a table with JdbcTemplate
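Something along these lines, as a rough sketch (table, column, and file names are illustrative):
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

public class CsvImportService {

    private final JdbcTemplate jdbcTemplate;

    public CsvImportService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void importCsv(String fileName) {
        // H2 infers the column list from the CSV header, so the table
        // structure can differ from one uploaded file to the next.
        jdbcTemplate.execute("CREATE TABLE TEST AS SELECT * FROM CSVREAD('" + fileName + "')");
    }

    public List<String> distinctValues(String column) {
        // The column name must come from trusted code, not raw user input.
        return jdbcTemplate.queryForList("SELECT DISTINCT " + column + " FROM TEST", String.class);
    }

    public void dropTable() {
        // The table only needs to live for the current import/session.
        jdbcTemplate.execute("DROP TABLE IF EXISTS TEST");
    }
}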
Dynamically creating tables and defining new entities (or modifying existing ones) is hardly possible with Spring Data repositories and @Entity classes. You should probably also look at NoSQL databases like MongoDB (or key-value stores like Redis), where it's easier to define documents with dynamic structures.

How to create a datasource programmatically in Spring Batch?

I want to copy a lot of data from several DBs, which are on different machines, to one central DB.
I think Spring Batch may be a good fit for this requirement.
So I would make a number of jobs to accomplish the whole task; the jobs would look like this:
job A: copy from db1 to db111;
job B: copy from db2 to db111;
job C: copy from db3 to db111;
etc...
And the tables in db1, db2, db3... are quite different.
So far I know how to create datasources at Spring Boot startup time, but I don't know how to create a datasource in a job instance at runtime. Any ideas about this? (If it can support Spring Data JPA, even better.)
Or is there any other way better than Spring Batch?
Thanks.
A datasource is a set of connections to a DB, so in your scenario there are either multiple kinds of DBs or multiple DBs of the same kind; in both cases you will have to create one datasource for each DB and then use it in whatever piece of code needs it.
Step 1: Write one configuration class for each database to set up its datasource. At the property-file level you won't be able to use the default properties, only custom ones where you prefix the properties with the DB name to distinguish them.
You also need to define transaction managers etc. for each datasource, and you give each datasource a unique bean name.
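A minimal sketch of such a configuration class (property prefixes and bean names are illustrative, and DataSourceBuilder's package differs between Spring Boot versions):
import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceConfig {

    // Bound to app.datasource.db1.url / .username / .password, etc.
    @Bean("db1DataSource")
    @ConfigurationProperties(prefix = "app.datasource.db1")
    public DataSource db1DataSource() {
        return DataSourceBuilder.create().build();
    }

    // Bound to app.datasource.db2.*
    @Bean("db2DataSource")
    @ConfigurationProperties(prefix = "app.datasource.db2")
    public DataSource db2DataSource() {
        return DataSourceBuilder.create().build();
    }

    // The central target DB; marked primary so Spring Batch's own
    // infrastructure (job repository etc.) uses it by default.
    @Primary
    @Bean("targetDataSource")
    @ConfigurationProperties(prefix = "app.datasource.target")
    public DataSource targetDataSource() {
        return DataSourceBuilder.create().build();
    }
}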
Step 2: Use the appropriate datasource with the appropriate DAO classes. If you use JPA, those configs would already be in the configuration class above, including entity packages, repository packages, etc. JdbcTemplate takes a datasource in its constructor.
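For example, picking a datasource by qualifier and wrapping it in a JdbcTemplate (bean names follow the sketch above and are illustrative):
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class CopyJobConfig {

    // Read from db1 ...
    @Bean
    public JdbcTemplate db1Template(@Qualifier("db1DataSource") DataSource db1) {
        return new JdbcTemplate(db1);
    }

    // ... and write to the central target DB.
    @Bean
    public JdbcTemplate targetTemplate(@Qualifier("targetDataSource") DataSource target) {
        return new JdbcTemplate(target);
    }
}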
All in all, the scenario is similar to the single-datasource scenario: you set up all the datasources in advance at app startup, with appropriately qualified bean names, and then use those datasources wherever you need them.
This Answer is what works for me

Non-XA transaction for multiple schemas on the same instance

Currently I am using Weblogic with Oracle.
I have one instance of an Oracle DB and two legacy schemas, so I use two datasources.
To keep transactionality I use XA, but from time to time HeuristicExceptions are thrown, causing some inconsistency at the data level.
Now, since it is the same instance, is it somehow possible not to use XA and to define a datasource that has access to both schemas?
That way I would not use XA anymore and would avoid data inconsistency.
Thanks
Do not use a dblink; it is overkill, and this might not even be related to XA. The best solution is to access the tables from both schemas through a single datasource: either prefix the tables in your queries with the schema name, or create synonyms in one schema pointing to the tables in the other schema.
It is only a matter of database privileges; there is no need to deal with XA or dblinks.
One DB user needs grants to manipulate the tables in both schemas.
PS: you can use distributed transactions on connections pointing into the same database, if you insist on it, but in your case there is no need for that.
You can connect to one schema and create a DB link to the other to give access to the second. I think the transaction will work across both schemas.
http://docs.oracle.com/cd/B28359_01/server.111/b28310/ds_concepts004.htm

Rewrite PK and related FK based on an Oracle sequence

I want to migrate a subset of customer data from one shared database environment to another shared database environment. I use Hibernate and have quite a few ID and FK_ID columns which are auto-generated from an Oracle sequence.
I have a Liquibase changelog that I exported from Jailer, which has the customer-specific data.
I want to be able to rewrite all of the sequence ID columns so that they don't clash with what's already in the target database.
I would like to avoid building something that my company has to manage, and would prefer to upstream this to Liquibase.
Is anyone aware of anything within Liquibase that might be a good place to start?
I would like to either do this on the Liquibase XML before passing it to the 'update' command, or as part of the update command itself. Ideally as part of the update command itself.
I am aware that I would need to make Liquibase aware of which columns are PK sequence columns and which are the related FK columns. The database structure has this all well defined, so I should be able to read it into the update process.
Alternatively, I had thought I could use the extraction model CSV from Jailer.
Jailer - http://jailer.sourceforge.net/
I would suggest that for one-time data migrations like this, Liquibase is not the best tool. It is really better for schema management rather than data management. I think that an ETL tool such as Pentaho would be a better solution.
I actually managed to figure it out for myself with the command-line 'update' command of Liquibase by using a custom change exec listener.
1) I pushed an MR to Liquibase to allow registration of a change exec listener.
2) I implemented my own change exec listener that intercepts each insert statement and rewrites each FK and PK field to one that is not yet allocated in the target database. I achieve this by using an Oracle sequence. To avoid having to go back to the database each time for a new sequence value, I implemented my own version of Hibernate's sequence caching.
https://github.com/liquibase/liquibase/pull/505
https://github.com/pellcorp/liquibase-extensions
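The sequence-caching part of point 2 boils down to something like the sketch below (names are made up and the real code lives in the linked repository; it assumes the Oracle sequence was created with an INCREMENT BY matching the cache size):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class CachedSequence {

    private final DataSource dataSource;
    private final String sequenceName;
    private final int incrementBy;
    private long next;
    private long remaining;

    public CachedSequence(DataSource dataSource, String sequenceName, int incrementBy) {
        this.dataSource = dataSource;
        this.sequenceName = sequenceName;
        this.incrementBy = incrementBy;
    }

    // Hand out ids from a locally cached block, hitting the database only
    // once per incrementBy values (the same idea as Hibernate's caching).
    public synchronized long nextValue() throws SQLException {
        if (remaining == 0) {
            try (Connection c = dataSource.getConnection();
                 Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("select " + sequenceName + ".nextval from dual")) {
                rs.next();
                next = rs.getLong(1);
                remaining = incrementBy;
            }
        }
        remaining--;
        return next++;
    }
}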
This turned out to be quite a generic solution, and in concert with some fixes upstreamed to Jailer to improve its Liquibase export support, it's a very viable and reusable solution.
The basic workflow is:
1) Export a subset of data from the source DB using Jailer to Liquibase XML.
2) Run the Liquibase update command, with the custom change exec listener, against the target.
3) TODO: Run the Jailer export on the target DB and compare with the original source data.
