I'm facing an annoying problem, which I suspect is something really simple.
I'm using an H2 database with JPA and Hibernate.
This is the save method of my ScheduleService class:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public int save(Schedule sched) {
    try {
        scheduleRepository.save(sched);
    } catch (Exception e) {...}
}
The statement scheduleRepository.save(sched) is throwing the following exception:
org.springframework.dao.DataIntegrityViolationException: could not execute statement; SQL [n/a]; constraint ["UK_4R97H1MMVBOG9TD05OH9FKL16_INDEX_5 ON PUBLIC.SCHEDULE(CHANNEL, PROGRAMME, START, END) VALUES ( /* key:65574 */ null, TIMESTAMP '2015-09-15 17:00:00.0', null, TIMESTAMP '2015-09-15 16:40:00.0', 3, 13)"; SQL statement:
insert into schedule (channel, end, programme, published, start, idSched) values (?, ?, ?, ?, ?, ?) [23505-175]]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute statement
And it prints the following message to the output console:
Unique index or primary key violation: "UK_4R97H1MMVBOG9TD05OH9FKL16_INDEX_5 ON PUBLIC.SCHEDULE(CHANNEL, PROGRAMME, START, END) VALUES ( /* key:65574 */ null, TIMESTAMP '2015-09-15 17:00:00.0', null, TIMESTAMP '2015-09-15 16:40:00.0', 3, 13)"; SQL statement:
insert into schedule (channel, end, programme, published, start, idSched) values (?, ?, ?, ?, ?, ?) [23505-175]
I've removed the @UniqueConstraint annotation from the Schedule entity to make sure it is not a unique constraint problem.
The error persists, so it must be caused by a primary key violation.
This is how I define the primary key of Schedule entity:
@Id
@Column(name = "idSched", nullable = false)
@GeneratedValue(strategy = GenerationType.TABLE)
protected Long idSched;
It should be assigning a new, unique idSched value for every row that is inserted into the table, right?
What could be causing the exception?
Thank you.
UPDATE:
Based on Bohuslav's suggestion, I've changed the primary key declaration to this:
@Id
@TableGenerator(
    name = "scheduleGen",
    table = "schedule_sequence_table",
    pkColumnValue = "idSched",
    allocationSize = 1)
@GeneratedValue(strategy = GenerationType.TABLE, generator = "scheduleGen")
@Column(name = "idSched", nullable = false)
protected Long idSched;
For the moment it seems to work fine.
UPDATE 2: Based on Sreenath's comment:
I'm generating the tables only once (they're persisted as files on disk). Actually, there was a unique key constraint on the schedule table. However, since it was throwing that exception, I removed the constraint and deleted the database files to make sure the problem was not in the unique constraint but in the primary key instead. Even without the unique constraint and with a brand new database, the exception remained, so then I came up with the @TableGenerator solution. Apparently that solved the problem.
But then I put my unique constraint back, removed the database files so the tables could be regenerated, and started and stopped the application a few times.
Now the problem has come back.
This is my unique constraint:
@Table(uniqueConstraints = {@UniqueConstraint(columnNames = {"channel", "programme", "start", "end"})})
@Entity(name = "schedule")
public class Schedule { ... }
Of course, before I save an entity, I check if there is any other entity with the same unique members as the one I'm adding:
Schedule schedIn = schedRep.findOneByChannelAndProgrammeAndStartAndEnd(
        sched.getChannel(), sched.getProgramme(), sched.getStart(), sched.getEnd());

// Check if there exists any schedule with the same {channel, programme, start, end} values
if (schedIn == null) {
    // If not, then save it
    schedRep.save(sched);
}
The method schedRep.findOneByChannelAndProgrammeAndStartAndEnd() is derived automatically by Spring Data JPA. So this is weird: that query cannot find any schedule with the same {channel, programme, start, end} values, yet when I try to save the entity, the unique constraint is violated.
Where could the problem be, then?
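For illustration only, and as an assumption rather than a confirmed diagnosis: if two transactions both run the findOne check before either of them commits, both inserts proceed and the second one violates the constraint. A minimal sketch that relies on the constraint itself as the duplicate signal (method name and error handling are hypothetical) could look like this:

public void saveIfAbsent(Schedule sched) {
    try {
        // saveAndFlush (available if schedRep is a JpaRepository) forces the INSERT
        // to run inside the try block, so a violation surfaces here and not at commit.
        schedRep.saveAndFlush(sched);
    } catch (org.springframework.dao.DataIntegrityViolationException e) {
        // Another transaction inserted the same {channel, programme, start, end} first;
        // treat this as "already exists" instead of re-throwing.
    }
}

Note that if this runs inside an ongoing transaction, the exception may still mark it rollback-only, so in practice it usually lives in its own REQUIRES_NEW transaction, as in the save method above.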
Related
I am using gocql with my Go application and trying to solve the issue described below.
CREATE TABLE IF NOT EXISTS website.users (
id uuid,
email_address text,
first_name text,
last_name text,
created_at timestamp,
PRIMARY KEY (email_address)
);
This query is going to overwrite a matching record, which is Cassandra's expected behaviour.
INSERT INTO users (id, email_address, first_name, last_name, created_at)
VALUES (?, ?, ?, ?, ?)
In order to prevent overwriting an existing record, we can add IF NOT EXISTS at the end of the query.
INSERT INTO users (id, email_address, first_name, last_name, created_at)
VALUES (?, ?, ?, ?, ?)
IF NOT EXISTS
However, there is no way for me to know whether the query affected any rows in the DB or not. Somehow I need to return something like a "Record exists" message back to the caller, but that is currently not possible. If session.Query(...).Exec() returned something specific it would be useful, but as far as I know it doesn't.
I was thinking of SELECTing by email_address and proceeding with the INSERT only if there was no matching record, but as you can guess this is not feasible: by the time I INSERT the new record after the SELECT, some other operation could have INSERTed a record with the same email address.
How do we handle such a scenario?
The solution is to use ScanCAS; the test case example from the library is here.
NOTE:
The order of the fields passed to ScanCAS() should match the column order in the cqlsh> DESCRIBE keyspace.users; output for the CREATE TABLE ... block.
If you don't care about the scanned fields, prefer MapScanCAS instead.
func (r Repository) Insert(ctx context.Context, user User) error {
	var (
		emailAddressCAS, firstNameCAS, idCAS, lastNameCAS string
		createdAtCAS                                      time.Time
	)

	query := `
	INSERT INTO users (email_address, created_at, first_name, id, last_name)
	VALUES (?, ?, ?, ?, ?) IF NOT EXISTS
	`

	applied, err := r.session.Query(
		query,
		user.EmailAddress,
		user.CreatedAt,
		user.FirstName,
		user.ID, // the id value (field name assumed); bind in the same order as the INSERT columns
		user.LastName,
	).
		WithContext(ctx).
		ScanCAS(&emailAddressCAS, &createdAtCAS, &firstNameCAS, &idCAS, &lastNameCAS)
	if err != nil {
		return err
	}

	if !applied {
		// Check the CAS vars here if you want.
		return errors.New("user already exists") // or your custom error implying a duplication
	}

	return nil
}
If you're using INSERT with IF NOT EXISTS, then in contrast to "normal" inserts, which don't return anything, such a query returns a single-row result consisting of:
a field named [applied] with the value true, if there was no record before and the new row was inserted;
a field named [applied] with the value false, plus all columns of the existing row, otherwise.
So you just need to get the result of your insert query and analyze it. See the documentation for more details.
Here is my problem: I have a sequence in my Oracle database and an on-insert trigger that fetches the next value from the sequence and uses it as the id. With a tool like SQL Developer, it works perfectly.
My id is defined like this:
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "MY_SEQUENCE")
@SequenceGenerator(sequenceName = "MY_SEQUENCE", allocationSize = 1, name = "MY_SEQUENCE")
private BigInteger jobId;
The problem is that Hibernate first reads the next value of the sequence, sets it as the id and then persists the entity. My database trigger then overwrites the id with the next value of the sequence, but that new id isn't "updated" in my code after my .save(entity).
I read that I should use GenerationType.IDENTITY, but I would like to do batch inserts, and I also read that batch inserts are not possible with IDENTITY.
If possible, I would like to keep my trigger, so that Hibernate doesn't have to call the database for every insert, and still be able to do batch inserts.
Edit: I'll probably need to insert close to a million rows.
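For context on what batch inserts with a Hibernate-managed sequence typically look like, here is a minimal sketch; the property values, the chunk size of 50, and the jobs list / entityManager wiring are assumptions, and it does not address keeping the trigger:

// Hibernate settings assumed for JDBC batching (e.g. in application.properties):
//   hibernate.jdbc.batch_size=50
//   hibernate.order_inserts=true

@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "MY_SEQUENCE")
// allocationSize > 1 lets Hibernate hand out ids in chunks; it should match the
// sequence's INCREMENT BY so the pooled optimizer does not produce duplicates.
@SequenceGenerator(name = "MY_SEQUENCE", sequenceName = "MY_SEQUENCE", allocationSize = 50)
private Long jobId;

@Transactional
public void saveAll(List<JobEntity> jobs) {      // JobEntity and jobs are placeholders
    int batchSize = 50;                          // keep in sync with hibernate.jdbc.batch_size
    for (int i = 0; i < jobs.size(); i++) {
        entityManager.persist(jobs.get(i));
        if (i > 0 && i % batchSize == 0) {
            entityManager.flush();               // sends the batched INSERT statements
            entityManager.clear();               // keeps the persistence context small
        }
    }
}

With IDENTITY, by contrast, Hibernate has to execute each INSERT immediately to learn the generated key, which is why JDBC batching is disabled for it.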
I use Hibernate JPA in my application. I have a table whose primary key is populated from a sequence. A service inserts records into that table.
Version: Oracle 12c
Dialect: org.hibernate.dialect.Oracle10gDialect
Issue:
We face a problem (unique constraint violation on the sequence-generated key) during load testing.
Questions:
This issue does not occur all the time, only during the load test. Can someone please check and help me use a thread-safe generator?
Is it a DB-side sequence definition issue or a Java-side one?
DB Sequence :
CREATE SEQUENCE MY_SEQ
START WITH 1
INCREMENT BY 1
NOMINVALUE
NOMAXVALUE
CACHE 30
NOORDER;
CREATE TABLE MY_TABLE (
MY_PRIMARY_KEY INT default MY_SEQ.nextval NOT NULL,
VALUE_COL VARCHAR2(10) NULL
);
Entity :
public class MyTableEntity implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @Column(name = "MY_PRIMARY_KEY")
    @GenericGenerator(
        name = "mySequenceGenerator",
        strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
        parameters = {
            @Parameter(name = "sequence_name", value = "SEQUENCE MY_SEQ"),
            @Parameter(name = "increment_size", value = "1")
        }
    )
    @GeneratedValue(generator = "mySequenceGenerator")
    private long myPrimaryKey;

    @Column(name = "VALUE")
    private String value;
}
Oracle 10 Dialect
For Oracle10gDialect use this configuration
@Id
@Column(name = "MY_PRIMARY_KEY")
@GeneratedValue(strategy = GenerationType.AUTO)
Long myPrimaryKey;
Hibernate creates a table and a sequence:
create table MY_TABLE (
MY_PRIMARY_KEY number(19,0) not null,
VALUE varchar2(255 char),
primary key (MY_PRIMARY_KEY))
create sequence hibernate_sequence
When storing, Hibernate first gets the new sequence ID and then passes it in the INSERT statement:
select hibernate_sequence.nextval from dual
insert into MY_TABLE (VALUE, MY_PRIMARY_KEY) values (?, ?)
Oracle 12 Dialect
If you use Oracle 12, which natively supports IDENTITY columns, it is preferred to upgrade to Oracle12cDialect (note that this requires Hibernate 5.3).
Set the strategy to GenerationType.IDENTITY
@Id
@Column(name = "MY_PRIMARY_KEY", updatable = false, nullable = false)
@GeneratedValue(strategy = GenerationType.IDENTITY)
Long myPrimaryKey;
The following table is created; the important part is generated as identity, which provides the unique values.
Note that no explicit sequence needs to be created; it is managed internally.
create table MY_TABLE (
MY_PRIMARY_KEY number(19,0) generated as identity,
VALUE varchar2(255 char),
primary key (MY_PRIMARY_KEY))
When storing, no ID is passed in the INSERT; it is assigned by Oracle and returned to the session:
insert into MY_TABLE (VALUE) values (?) RETURNING MY_PRIMARY_KEY INTO ?
Note that, in contrast to Oracle 10, you save one round trip to the database.
Change long to Long
If you use long, the field is (by default) 0, and no generator is allowed to change existing values!
https://docs.oracle.com/javaee/7/api/javax/persistence/GeneratedValue.html
I have a class like Clazz:
@Table(
    name = "tablename",
    uniqueConstraints =
        @UniqueConstraint(
            name = "uniqueColumn_deleted_uk",
            columnNames = {"myuniquecolumn", "deleted"}
        )
)
public class Clazz {
    @Column(name = "deleted")
    private LocalDateTime deleted;
}
deleted is nullable. PostgreSQL creates a unique index like
CREATE UNIQUE INDEX uniqueColumn_date_uk ON public.tablename (short_code_3, deleted);
and it allows inserting duplicate myuniquecolumn values when deleted is NULL.
How can I prevent this?
I want no duplicates when deleted is null.
You should create two partial unique indexes
create unique index on public.tablename (short_code_3, deleted) where deleted is not null;
create unique index on public.tablename (short_code_3) where deleted is null;
(I don't know how to do it in your ORM).
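Standard JPA annotations cannot express a partial (filtered) index, so a common workaround is to create it with plain DDL outside the entity mapping, typically in a Flyway/Liquibase migration. As a rough sketch in Spring terms (the bean, the index names and the IF NOT EXISTS guard are assumptions):

@Component
public class PartialIndexInitializer {

    private final JdbcTemplate jdbcTemplate;

    public PartialIndexInitializer(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @PostConstruct
    public void createPartialIndexes() {
        // Same two indexes as in klin's answer, issued as plain DDL.
        jdbcTemplate.execute(
            "CREATE UNIQUE INDEX IF NOT EXISTS tablename_code_deleted_uk "
            + "ON public.tablename (short_code_3, deleted) WHERE deleted IS NOT NULL");
        jdbcTemplate.execute(
            "CREATE UNIQUE INDEX IF NOT EXISTS tablename_code_uk "
            + "ON public.tablename (short_code_3) WHERE deleted IS NULL");
    }
}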
This is not possible because null is never equal to null.
Read more about null values in SQL: https://en.wikipedia.org/wiki/Null_(SQL)
If you want to have the deleted column in the unique index, you must provide a default value for that column.
Two partial indexes like klin provided are best practice up to Postgres 14.
Postgres 15 adds NULLS NOT DISTINCT for this purpose:
CREATE UNIQUE INDEX foo_idx ON public.tbl (short_code_3, deleted) NULLS NOT DISTINCT;
See:
Create unique constraint with null columns
In my Spring Data/JPA project I use a PostgreSQL database.
I use the following mapping for my JPA entity PK on the cards table:
@Id
@SequenceGenerator(name = "cards_id_seq", sequenceName = "cards_id_seq", allocationSize = 1)
@GeneratedValue(strategy = GenerationType.AUTO, generator = "cards_id_seq")
private Long id;
Everything works fine until some other application or person manually inserts new records into this table. Once that happens, my application fails with the following error:
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "cards_pkey"
Detail: Key (id)=(42) already exists.
when it tries to insert records, because the PostgreSQL sequence object is out of sync with the actual PK IDs in the database.
What am I doing wrong, and how can I solve this situation so that correct IDs are assigned to new records inserted via my application?
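For illustration of what "out of sync" means here, and assuming direct access to the database (a sketch, not necessarily the right long-term fix): PostgreSQL's setval can move the sequence past the highest manually inserted id, after which the generator's nextval calls stop colliding. For example, via a native query:

// Resynchronize cards_id_seq with the current maximum id in cards.
// If manual inserts keep bypassing the sequence, this only postpones the problem.
entityManager.createNativeQuery(
        "SELECT setval('cards_id_seq', (SELECT COALESCE(MAX(id), 1) FROM cards))")
    .getSingleResult();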