Managing/implementing an auto-increment primary key in Oracle without triggers

We have many tables in our database with auto-increment primary key ids set up the way they are in MySQL, since we are in the process of migrating from MySQL to Oracle.
I recently learned that implementing this in Oracle requires creating a sequence and a trigger on the id field for each such table. We have 30-40 tables in our schema, and we want to avoid using database triggers in our product, since database management is out of scope for our software appliance.
What are my options for implementing the auto-increment id feature in Oracle, apart from manually specifying and managing the id in the code, which would change a lot of existing insert statements?
I also wonder if there is a way to do this from the Grails code itself. (By the way, specifying the id as increment in the domain class mapping doesn't work; it only works for MySQL.)
Some info about our application environment: Grails/Groovy, Hibernate, with Oracle and MySQL support.

This answer will have Grails/Hibernate handle the sequence generation by itself. It'll create a sequence per table for the primary key generation and won't cache any numbers, so you won't lose any identifiers if and when the cache times out. Grails/Hibernate calls the sequence directly, so it doesn't make use of any triggers either.

If you are using Grails, Hibernate will handle this for you automatically.
You can specify which sequence to use by putting the following in your domain object:
static mapping = {
    id generator: 'sequence', params: [sequence: 'MY_SEQ']
}
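
Outside of Grails, the equivalent per-table sequence mapping can be written with plain JPA/Hibernate annotations. A minimal sketch, assuming a sequence named MY_SEQ already exists in the Oracle schema (the Book entity is just an example):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class Book {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "bookSeq")
    // allocationSize = 1 means Hibernate asks the sequence for every insert,
    // so no identifier values are cached in memory and none can be lost.
    @SequenceGenerator(name = "bookSeq", sequenceName = "MY_SEQ", allocationSize = 1)
    private Long id;

    private String title;

    // getters and setters omitted
}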

Related

Where does Spring/Hibernate store the Id?

I am using Spring with a basic PostgreSQL database (except for the access credentials, everything is default values) and, using JPA, I get the expected Id increment when using @Id and @GeneratedValue for my @Entity. But when I drop the entire table, I notice that the Id keeps incrementing from the previous (and deleted) values.
Where are the Id values stored?
From the Hibernate documentation for identifier generators:
AUTO (the default)
Indicates that the persistence provider (Hibernate) should choose an appropriate generation strategy.
You didn't list GenerationType as one of the annotations present, so it would default to AUTO. From the documentation for how AUTO works:
If the identifier type is numerical (e.g. Long, Integer), then Hibernate is going to use the IdGeneratorStrategyInterpreter to resolve the identifier generator strategy. The IdGeneratorStrategyInterpreter has two implementations:
FallbackInterpreter
This is the default strategy since Hibernate 5.0. For older versions, this strategy is enabled through the hibernate.id.new_generator_mappings configuration property. When using this strategy, AUTO always resolves to SequenceStyleGenerator. If the underlying database supports sequences, then a SEQUENCE generator is used. Otherwise, a TABLE generator is going to be used instead.
Postgres supports sequences, so you get a sequence. From a bit farther down in the same document:
The simplest form is to simply request sequence generation; Hibernate will use a single, implicitly-named sequence (hibernate_sequence) for all such unnamed definitions.
Hibernate asks Postgres to create a sequence. The sequence keeps track of which ids have been handed out, and the database persists this internally. You should be able to get into the admin UI of the database and reset this sequence if you want.
To clarify, a database sequence is a database object independent of any tables (multiple tables can use the same sequence), so in general dropping a table won't affect any sequences. The exception is when you're using auto-increment, in which case there is an ownership relationship, and the sequence implementing the auto-increment is reset when the table is dropped.
It's a judgment call on Hibernate's part whether to make the default implementation of id generation use a sequence directly or auto-increment. If it used auto-increment you would see the values get recycled like you expected, but with the sequence there is no automatic reset.
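
To make the default concrete, here is a minimal sketch (the Customer entity is made up) of an id annotated the way the question describes; with Hibernate 5+ on Postgres this resolves to a SEQUENCE generator backed by the shared hibernate_sequence, which is a separate database object and therefore survives dropping the table:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Customer {

    // No GenerationType specified, so the strategy defaults to AUTO.
    // On Hibernate 5+ with Postgres, AUTO resolves to a SEQUENCE generator
    // backed by the implicitly named hibernate_sequence.
    @Id
    @GeneratedValue
    private Long id;

    private String name;
}

If you want the ids to start over after wiping the data, the sequence itself has to be reset, for example by running ALTER SEQUENCE hibernate_sequence RESTART WITH 1 directly against Postgres.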

Truncate all database tables with Hibernate and Spring Boot

I need to truncate all database tables after each test. Is there a way to do so, or at least a database-agnostic way to get all table names so that they can be truncated?
Any other alternatives are welcome. But keep in mind that @Transactional and @Rollback will not help, as I'm dealing with integration tests that fire HTTP requests at the server.
I think you're going to struggle to truncate tables in a simple, database-agnostic way. For example, what do you do about foreign key constraints? Some DBs will let you just truncate the tables in the correct order, leaving you with the problem of how to define that order. But if I recall correctly, some won't let you truncate tables with foreign key constraints at all, even if empty. Then you need to use some DB-specific DDL to disable the constraints, or worse, drop and recreate them.
You are also ruling out parallelising your integration tests if you take this approach.
I've always found a better approach is to make each test clear up just the data that it created. For example, for your create API, you may be able to register a listener that records the IDs of all created entities in your test code; then on teardown you can just reverse-iterate this list of IDs, calling your delete API. The downside of this approach is that you may need to implement APIs that your application doesn't actually need, just to support the tests. However, these can then be disabled by a flag on deployment to production.
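As a rough sketch of that teardown pattern (JUnit 5; the ApiClient interface and the customer endpoints are made-up placeholders for whatever your integration tests already call):

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;

class CustomerApiIT {

    // Hypothetical client for the application's HTTP API; replace it with
    // whatever the integration tests already use (RestAssured, RestTemplate, ...).
    interface ApiClient {
        Long createCustomer(String name);
        void deleteCustomer(Long id);
    }

    private final ApiClient api = null; // wire up the real test client here

    // IDs of everything this test created, newest first, so dependent
    // records are removed before the records they reference.
    private final Deque<Long> createdIds = new ArrayDeque<>();

    @Test
    void createsCustomer() {
        Long id = api.createCustomer("Alice");
        createdIds.push(id);
        // ... assertions against the running server ...
    }

    @AfterEach
    void cleanUp() {
        // Each test removes only the data it created itself.
        while (!createdIds.isEmpty()) {
            api.deleteCustomer(createdIds.pop());
        }
    }
}
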
I read this property from a text file and, in the if statement below, add the two Hibernate properties that rebuild the database every time I run my project. Perhaps this can help you:
// "dbReset" is read from an external text file; when it is ON, Hibernate
// drops and recreates the schema on startup.
if ("ON".equals(environment.getProperty("dbReset")))
{
    properties.put("hbm2ddl.auto", "create");
    properties.put("hibernate.hbm2ddl.auto", "create");
}

Rewrite PK and related FK based on an oracle sequence

I want to migrate a subset of customer data from one shared database environment to another shared database environment. I use Hibernate and have quite a few ID and FK_ID columns that are auto-generated from an Oracle sequence.
I have a Liquibase changelog, exported from Jailer, which contains the customer-specific data.
I want to be able to rewrite all of the sequence ID columns so that they don't clash with what's already in the target database.
I would like to avoid building something that my company has to manage, and would prefer to upstream this to Liquibase.
Is anyone aware of anything within Liquibase that might be a good place to start?
I would like to either do this on the Liquibase XML before passing it to the 'update' command, or as part of the update command itself. Ideally as part of the update command itself.
I am aware that I would need to make Liquibase aware of which columns are PK sequence columns and which are the related FK columns. The database structure does have this all well defined, so I should be able to read this into the update process.
Alternatively, I had thought I could use the extraction model CSV from Jailer.
Jailer - http://jailer.sourceforge.net/
I would suggest that for one-time data migrations like this, Liquibase is not the best tool. It is really better suited to schema management than to data management. I think that an ETL tool such as Pentaho would be a better solution.
I actually managed to figure it out for myself with the command-line 'update' command of Liquibase by using a custom change exec listener.
1) I pushed an MR to Liquibase to allow registration of a change exec listener
2) I implemented my own change exec listener that intercepts each insert statement and rewrites each FK and PK field to one that is not yet allocated in the target database. I achieve this by using an Oracle sequence. To avoid going back to the database each time for a new value, I implemented my own version of Hibernate's sequence caching (see the sketch after the links below).
https://github.com/liquibase/liquibase/pull/505
https://github.com/pellcorp/liquibase-extensions
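The linked code isn't reproduced here, but the caching idea is essentially Hibernate's pooled-style optimizer: fetch one value from the Oracle sequence and hand out a whole block of ids locally before going back to the database. A hypothetical sketch, assuming a migration_seq sequence created with INCREMENT BY equal to the block size:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hands out ids in blocks so the Oracle sequence is only queried once per
// BLOCK_SIZE values. Assumes the sequence was created with
// CREATE SEQUENCE migration_seq INCREMENT BY 50, so each fetched value
// reserves the 50 ids starting at that value.
public class CachingSequenceAllocator {

    private static final int BLOCK_SIZE = 50;

    private final Connection connection;
    private long next;      // next id to hand out
    private long blockEnd;  // first id NOT covered by the current block

    public CachingSequenceAllocator(Connection connection) {
        this.connection = connection;
    }

    public synchronized long nextId() throws SQLException {
        if (next >= blockEnd) {
            long lo = fetchFromSequence();   // reserves [lo, lo + BLOCK_SIZE - 1]
            next = lo;
            blockEnd = lo + BLOCK_SIZE;
        }
        return next++;
    }

    private long fetchFromSequence() throws SQLException {
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT migration_seq.NEXTVAL FROM dual")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}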
This turned out to be quite a generic solution, and in concert with some fixes upstreamed to Jailer to improve its Liquibase export support, it's a very viable and reusable solution.
The basic workflow is:
1) Export a subset of data from the source db to Liquibase XML using Jailer.
2) Run the Liquibase update command with the custom change exec listener against the target db.
3) TODO: Run the Jailer export on the target db and compare with the original source data.

Update H2Database schema with ORMLite

I am using an H2 database with ORMLite. We have 60 tables, all created with ORMLite's "create if not exists". Now we are going to ship a major release, and the requirement is to update the old version's database. I need to know how to do this with ORMLite, as in the new version some tables will be new and some are existing tables with modifications; e.g. we had a job table in the previous version's db, and in this release we added 2 more columns and changed the datatype of one column. Any suggestions? I have seen some other posts regarding ORMLite for Android SQLite. How can that approach be used for other databases? E.g. like this post:
ORMLite update of the database
But I need to know how to do this with ORMLite, as in the new version some tables will be new and some are existing tables with modifications; e.g. we had a job table in the previous version's db, and in this release we added 2 more columns and changed the datatype of one column.
I'm not sure there is any easy answer here. ORMLite doesn't directly provide any magic capabilities to make the migration of data any easier. Here are some thoughts however:
You will need to use some sort of SQL logic to determine whether your application has the "old" or "new" schema installed. You could use raw SQL to look for the existence of particular tables or columns. Going forward, it might be a good idea to store a meta table with the database version, which Android gives you for free.
You can create new and old versions of each of your entities (OldAccount versus Account) and map them both to the same table with @DatabaseTable(tableName = "accounts"). Then you can read the old entities using oldAccountDao.iterator(), convert them to new entities, and (as long as you aren't mucking with the primary key) update them using the new accountDao.update(...).
You can certainly come up with a series of SQL statements that need to be performed in the proper order to change the schema, then call dao.executeRaw(...) with them in order (see the sketch below).
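As a rough sketch of that raw-SQL route (the table, column names, and types are invented to match the job example from the question, and the ALTER statements use H2 syntax; adjust them to whatever the new entities actually declare):

import java.sql.SQLException;

import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.dao.DaoManager;
import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.jdbc.JdbcConnectionSource;
import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.table.DatabaseTable;

public class JobSchemaMigration {

    // Minimal stand-in for the new version of the entity; the real Job class
    // in the application would also declare the two added columns.
    @DatabaseTable(tableName = "job")
    public static class Job {
        @DatabaseField(generatedId = true)
        long id;
        @DatabaseField
        String status;
    }

    public static void migrate(String jdbcUrl) throws SQLException {
        ConnectionSource source = new JdbcConnectionSource(jdbcUrl);
        try {
            // Any DAO can issue raw SQL; executeRaw runs the statements as-is.
            Dao<Job, Long> jobDao = DaoManager.createDao(source, Job.class);

            // Apply the schema changes in the proper order. Column names and
            // types here are examples only.
            jobDao.executeRaw("ALTER TABLE job ADD COLUMN priority INT DEFAULT 0");
            jobDao.executeRaw("ALTER TABLE job ADD COLUMN owner VARCHAR(64)");
            // H2 syntax for changing an existing column's datatype:
            jobDao.executeRaw("ALTER TABLE job ALTER COLUMN status VARCHAR(32)");
        } finally {
            source.closeQuietly();
        }
    }
}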
Obviously the new entities will just be created.
You might want to consider dumping a backup file of all tables somewhere before the conversion process and telling the user about it in case there is some failure so your users could revert and run the old version of your application.
Hopefully something here is helpful.

Using dynamic lookup from parallel sessions with synchronized cache in Informatica

Using Informatica 9.1.0
Scenario
Get a dimension key generated and inserted into the fact table as part of the fact load.
I have to load the fact table with a dimension key along with other columns. The dimension record is created from within the same mapping. There are five different sessions using the same mapping, and they execute simultaneously to load the fact table. In this case I'm using a dynamic lookup with 'Synchronize dynamic cache' enabled to get unique dimension records generated from the 5 sessions based on some conditions. The dimension ID is generated using the Sequence-ID in the associated expression of the lookup. When a single session is run alone it works perfectly fine, but when the sessions are run in parallel it starts to show unique key violation errors, as random sessions try to insert a sequence value that is already there.
To fix the issue I had to enable the persistent lookup cache and set a cache file name prefix. But I did not find this solution, or this issue, in any of the forums or INFA communities, so I'm not sure whether this is the right way of doing it or whether this is a bug of some kind.
Please let me know if you have had a similar issue or have different thoughts.
Thanks in advance
One other possible solution I can think of is to have the database generate the key from a sequence instead of using Informatica's sequencer. The database should be capable of avoiding any unique key violations.
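For reference, this is what letting the database hand out the key looks like at the SQL level. A minimal JDBC sketch with a made-up dim_customer_seq Oracle sequence; the same SELECT could be issued from whatever component loads the target table:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustration only: fetching the next surrogate key from an Oracle sequence
// so the database, not the ETL tool, is the single source of key values.
public class DimensionKeyFetcher {

    public static long nextDimensionKey(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT dim_customer_seq.NEXTVAL FROM dual")) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "etl_user", "secret")) {
            System.out.println("Next dimension key: " + nextDimensionKey(conn));
        }
    }
}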
