I am using Spring with a basic PostgreSQL database (except for the access credentials, everything is default values) and, using JPA, I get the expected Id increment when using @Id and @GeneratedValue on my @Entity. But when I drop the entire table, I notice that the Id keeps incrementing from the previous (and deleted) values.
Where are the Id values stored?
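A stripped-down version of the mapping looks like this (entity and field names are placeholders, not the real ones):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Customer {

    @Id
    @GeneratedValue   // no strategy given, so it defaults to GenerationType.AUTO
    private Long id;

    private String name;

    // getters and setters omitted
}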
From the Hibernate documentation for identifier generators:
AUTO (the default)
Indicates that the persistence provider (Hibernate) should choose an appropriate generation strategy.
You didn't list GenerationType as one of the annotations present, so it would default to AUTO. From the documentation for how AUTO works:
If the identifier type is numerical (e.g. Long, Integer), then Hibernate is going to use the IdGeneratorStrategyInterpreter to resolve the identifier generator strategy. The IdGeneratorStrategyInterpreter has two implementations:
FallbackInterpreter
This is the default strategy since Hibernate 5.0. For older versions, this strategy is enabled through the hibernate.id.new_generator_mappings configuration property. When using this strategy, AUTO always resolves to SequenceStyleGenerator. If the underlying database supports sequences, then a SEQUENCE generator is used. Otherwise, a TABLE generator is going to be used instead.
Postgres supports sequences, so you get a sequence. From a bit farther down in the same document:
The simplest form is to simply request sequence generation; Hibernate will use a single, implicitly-named sequence (hibernate_sequence) for all such unnamed definitions.
Hibernate asks Postgres to create a sequence. The sequence keeps track of which ids have been handed out, and the database persists this state internally. You should be able to get into the admin UI of the database and reset this sequence if you want.
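If you'd rather reset it from code than from the admin UI, a sketch along these lines should work; it assumes the implicitly named hibernate_sequence from the documentation quote above and placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ResetHibernateSequence {
    public static void main(String[] args) throws Exception {
        // placeholder URL and credentials - substitute your own
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
             Statement stmt = conn.createStatement()) {
            // restart the implicit Hibernate sequence at 1
            stmt.execute("ALTER SEQUENCE hibernate_sequence RESTART WITH 1");
        }
    }
}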
To clarify, a database sequence is a database object independent of any tables (multiple tables can use the same sequence), so in general dropping a table won't affect any sequences. The exception is when you're using auto-increment, in which case there is an ownership relationship, and the sequence implementing the auto-increment is reset when the table is dropped.
It's a judgment call on Hibernate's part whether to make the default implementation of id generation use a sequence directly or auto-increment. If it used auto-increment you would see the values get recycled like you expected, but with the sequence there is no automatic reset.
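If you want more control, one option (a sketch, with placeholder names) is to map an explicit per-entity sequence instead of relying on the shared implicit one; that sequence can then be dropped and recreated together with the table:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customer_seq_gen")
    @SequenceGenerator(name = "customer_seq_gen", sequenceName = "customer_seq", allocationSize = 1)
    private Long id;

    // ...
}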
Related
I have integrated Hibernate Envers in Spring Boot. My requirement is to also keep the old value for a particular column when its value changes, not just the new value, in the *_AUD tables. However, I can't see any such feature available in the Hibernate Envers plugin.
Please suggest.
Thanks
Unfortunately, what you are looking to do just isn't something that is supported.
It's one thing to think of an entity storing basic type values such as strings or numeric data and to represent its old/new value as two columns in the audit table; however, once you move beyond basic entity mappings to ones with relationships between entity types or collections, you begin to see that trying to store old/new data in the same row isn't efficient and in some cases isn't even feasible.
That said, you can still read the audit history and deduce these old/new values in a variety of ways, including the Envers Query API, Debezium, or even basic database triggers.
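As a rough illustration of the Envers Query API route (the entity and property names here are placeholders), you can walk the revisions of an entity and compare consecutive snapshots to recover the old and new values yourself:

import java.util.List;
import java.util.Objects;
import javax.persistence.EntityManager;
import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;

public class AuditHistoryExample {

    public void printNameChanges(EntityManager em, Long entityId) {
        AuditReader reader = AuditReaderFactory.get(em);

        // all revision numbers at which this entity was touched
        List<Number> revisions = reader.getRevisions(Customer.class, entityId);

        Customer previous = null;
        for (Number rev : revisions) {
            // snapshot of the entity as it looked at this revision
            Customer snapshot = reader.find(Customer.class, entityId, rev);
            if (previous != null && !Objects.equals(previous.getName(), snapshot.getName())) {
                System.out.println("name changed from '" + previous.getName()
                        + "' to '" + snapshot.getName() + "' at revision " + rev);
            }
            previous = snapshot;
        }
    }
}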
I use eclipselink.ddl-generation.output-mode=database to generate the database schema. Can the generated SQL use character length semantics for the generation of VARCHAR2 fields? "Enhance generated DDL of EclipseLink" seems to be related, and an answer to this question would probably solve this.
There are two ways to change the SQL that is generated. The first would be to change the target database platform, and this is EclipseLink specific. This allows you to pick any database platform class, and you could override your database's platform to use whatever type definition you wanted, but this would be generic - i.e. all Strings would use VARCHAR(255).
Another way is to change it using JPA annotations. The @Column annotation allows specifying the length, which may or may not be used for DDL, as well as defining the columnDefinition, which is used for DDL. Something like
@Column(name="..", columnDefinition="VARCHAR2(255) NOT NULL")
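For the character length semantics asked about, Oracle lets you state the length unit explicitly inside the type, so a columnDefinition along these lines should do it (entity, column name and length are placeholders):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Document {

    @Id
    private Long id;

    // "255 CHAR" forces character length semantics; a plain "255" follows the
    // database/session NLS_LENGTH_SEMANTICS default, which is usually BYTE
    @Column(name = "title", columnDefinition = "VARCHAR2(255 CHAR) NOT NULL")
    private String title;
}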
I have some mappings where business entities are populated after transformation logic. The row volumes are on the higher side, and there are quite a few business attributes which are defaulted to certain static values.
Therefore, in order to reduce the data pushed from the mapping, I created a "default" clause on the target table and stopped feeding those columns from the mapping itself. Now, this works out just fine when I am running the session in "Normal" mode. This effectively gives me target table rows with some columns fed by the mapping, and the rest taking values based on the "default" clause in the table DDL.
However, since we are dealing with the higher end of volumes, I want to run my session in bulk mode (there are no pre-existing indexes on the target tables).
As soon as I switch the session to bulk mode, this particular feature (of default values) stops working. As a result, I get NULL values in the target columns instead of the defined "default" values.
I wonder -
Is this expected behavior?
If not, am I missing some configuration somewhere?
Should I be raising a ticket with Oracle, or with Informatica?
My configuration: Informatica 9.5.1 (64-bit), with Oracle 11g R2 (11.2.0.3), running on Solaris (SunOS 5.10).
Looking forward to some help here...
Could be expected behavior.
It seems that bulk mode in Informatica uses the "Direct Path" API in Oracle (see for example https://community.informatica.com/thread/23522).
From this document (http://docs.oracle.com/cd/B10500_01/server.920/a96652/ch09.htm, search for "Defaults on the Direct Path") I gather that:
Default column specifications defined in the database are not available when you use direct path loading. Fields for which default values are desired must be specified with the DEFAULTIF clause. If a DEFAULTIF clause is not specified and the field is NULL, then a null value is inserted into the database.
This could be the reason for this behaviour.
I don't believe that you'll see a great benefit from not including the defaults, particularly in comparison to the benefits of a direct path load. If the data is going to be read-only, then consider compression also.
You should also note that SQL*Net compresses repeated values in the same column, so even with conventional path inserts the network overhead is not as high as you might think.
We have many tables in our database with auto-increment primary key ids set up the way they are in MySQL, since we are in the process of migrating from MySQL to Oracle.
Now, in Oracle, I recently learned that implementing this requires creating a sequence and a trigger on the id field for each such table. We have around 30-40 tables in our schema, and we want to avoid using database triggers in our product, since management of the database is out of scope for our software appliance.
What are my options for implementing the auto-increment id feature in Oracle... apart from manually specifying the id in the code and managing it there, which would change a lot of existing insert statements?
... I wonder if there is a way to do this from Grails code itself? (By the way, the method of specifying the id as increment in the domain class mapping doesn't work - it only works for MySQL.)
Some info about our application environment: Grails/Groovy, Hibernate, Oracle, MySQL support
This answer will have Grails/Hibernate handle the sequence generation by itself. It'll create a sequence per table for the primary key generation and won't cache any numbers, so you won't lose any identifiers if and when the cache times out. Grails/Hibernate calls the sequence directly, so it doesn't make use of any triggers either.
If you are using Grails, Hibernate will handle this for you automatically.
You can specify which sequence to use by putting the following in your domain object:
static mapping = {
    id generator: 'sequence', params: [sequence: 'MY_SEQ']
}
I have two databases which contain the same table, but in one of them the table is a symlink of the other, and only reads are permitted on it.
I have mapped the table to Java using Hibernate, and I use Spring to set the Entity Manager's data source to one of the two databases based on some input criteria.
I call only read-only operations (selects) when I am connected to the second database, but it seems Hibernate tries to flush something back to the database, and it fails, saying that updates are not allowed on this view.
How do I disable this update only for the second datasource and keep it normal for the first one?
Update:
Looking at the stack trace, the flush seems to be started here:
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365)
at org.hibernate.ejb.AbstractEntityManagerImpl$1.beforeCompletion(AbstractEntityManagerImpl.java:504)
... 55 more
Is this related to hibernate.transaction.flush_before_completion property? Can I set it to false for the second data source?
Most probably your entities become "dirty" the moment they are loaded from the database, and Hibernate thinks that it needs to store the changes. This happens if your accessors (get and set methods) do not return the exact same value or reference that had been set by Hibernate.
In our code, this happened with lists: developers created new list instances because they didn't like the type they got in the setter.
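For illustration (this is a made-up accessor pair, not code from the question), a pattern like the following is enough to make the entity look dirty on every flush:

import java.util.ArrayList;
import java.util.List;

public class Order {

    private List<String> lines;

    public List<String> getLines() {
        return lines;
    }

    public void setLines(List<String> lines) {
        // Copying into a new list means Hibernate never sees the exact reference
        // it handed in, so dirty checking flags the collection as changed even
        // though nothing was actually modified.
        this.lines = new ArrayList<String>(lines);
    }
}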
If you don't want to change the code, change the mapping to field access.
You can also prevent Hibernate from storing changes by setting the FlushMode to NEVER on the session, but this only hides the real problem, which will still occur in other situations and will lead to unnecessary updates.
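If you do want that workaround anyway, a sketch of it (assuming you can reach the underlying Hibernate Session from your EntityManager; the helper name is made up) would be:

import javax.persistence.EntityManager;
import org.hibernate.FlushMode;
import org.hibernate.Session;

public class ReadOnlySessionHelper {

    public static void disableAutoFlush(EntityManager entityManager) {
        // On this Hibernate version getDelegate() returns the Session;
        // with JPA 2 you could call entityManager.unwrap(Session.class) instead.
        Session session = (Session) entityManager.getDelegate();
        // MANUAL is the newer name for NEVER: nothing is flushed unless you call flush() yourself
        session.setFlushMode(FlushMode.MANUAL);
    }
}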
First you need to determine if this is DDL or DML. If you don't know, then I recommend you set hibernate.show_sql=true to capture the offending statement.
If it is DDL, then it's most likely Hibernate updating the schema for you, and you'd want to additionally configure the hibernate.hbm2ddl.auto setting to be either "update" or "none", depending on whether you're using the actual db or the symlinked (read-only) one, respectively. You can use "validate" instead of "none", too.
If it is DML, then I would first determine whether your code is for some reason making a change to an instance which is still attached to an active Hibernate Session. If so, then a subsequent read may cause a flush of these changes without ever explicitly saving the object (Grails?). If this is the case, consider evicting the instance causing the flush (or using transport objects instead).
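A sketch of that eviction idea (again assuming you can get at the underlying Session; the helper names are made up) might look like this:

import javax.persistence.EntityManager;
import org.hibernate.Session;

public class FlushGuard {

    // Mark a loaded instance read-only so dirty checking skips it
    public static void markReadOnly(EntityManager entityManager, Object entity) {
        Session session = (Session) entityManager.getDelegate();
        session.setReadOnly(entity, true);
    }

    // Or detach it entirely so a later flush cannot touch it at all
    public static void detach(EntityManager entityManager, Object entity) {
        Session session = (Session) entityManager.getDelegate();
        session.evict(entity);
    }
}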
Are you perhaps using any aspects or Hibernate lifecycle events to provide auditing of the objects? This, too, could cause access of a read-only instance to result in an insert or update being run.
It may turn out that you need to provide alternative mappings for the offending class should the updatability of a field come into play, even though the code is doing everything exactly as you'd like (this is unlikely ;0). If you are in an all-annotation world, this may be tricky. If working with hbm.xml, then providing an alternative mapping is easier.