Creating multi-column constraints depending on column value - oracle

I'm trying to create a constraint on an Oracle database that says the following:
If column1 == someValue then the combination of column2 and column3 has to be unique for all entries with column1 == someValue
I'm familiar with the concepts of unique and check constraints, and I've tried expressing the constraint with those constructs. However, I can't seem to find a way to include the condition, which is why I'm wondering whether it is even possible.
The table I want to create the constraint for is created by Hibernate mapping the following class hierarchy (most of the attributes omitted for brevity):
class MyClass {
    String name;
    MyClass parent;
}

class MySubClass extends MyClass {
    String businessValue;
}
The classes are mapped using a single table strategy and using different discriminator values for each type. It's a customer requirement that for all instances of MySubClass the combination of name and parent has to be unique (column1 would be the discriminator value). It would be easy to enforce such a constraint on the parent class through a table constraint. However, that constraint must only apply to MySubClass.
There is the possibility of validating the data before entering it into the database with frameworks such as Hibernate Validator. But since that validation would need database access anyway, a database constraint seems the more efficient way to do it.

You can't do this with a constraint, but you can do it using a "function-based index" (FBI) like this:
create unique index mytable_idx on mytable
( case when column1 = 'somevalue' then column2 end
, case when column1 = 'somevalue' then column3 end
);
This only creates index entries for rows with column1 = 'somevalue' (the CASE expressions return NULL for every other row, and Oracle does not index entries whose key columns are all NULL), so other rows can contain duplicate (column2, column3) combinations but the matching rows cannot.
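For illustration, with that index in place the behaviour is roughly the following (the inserted values are made up; ORA-00001 is the usual unique-violation error):
insert into mytable (column1, column2, column3) values ('somevalue', 'a', 'b'); -- ok
insert into mytable (column1, column2, column3) values ('somevalue', 'a', 'b'); -- fails: ORA-00001
insert into mytable (column1, column2, column3) values ('other', 'a', 'b');     -- ok
insert into mytable (column1, column2, column3) values ('other', 'a', 'b');     -- ok, row is not indexed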

Related

Find the best way to traverse an Oracle table

I have an Oracle table. The table's DDL is as follows (it has no primary key):
create table CLIENT_ACCOUNT
(
CLIENT_ID VARCHAR2(18) default ' ' not null,
ACCOUNT_ID VARCHAR2(18) default ' ' not null,
......
)
create unique index UK_ACCOUNT
on CLIENT_ACCOUNT (CLIENT_ID, ACCOUNT_ID)
The table is very large, maybe 100M records, and I want to traverse all of its data in batches.
Right now I use the table's index for the batch traversal, but I'm running into Oracle syntax problems.
-- I want to use this SQL, but it raises a syntax error.
-- I'm trying to use the B-tree index to locate the start position, but it doesn't work.
select * from CLIENT_ACCOUNT
WHERE (CLIENT_ID, ACCOUNT_ID) > (1,2)
AND ROWNUM < 1000
ORDER BY CLIENT_ID, ACCOUNT_ID
Is there a fast way to traverse the table data in batches?
Wild guess:
select * from CLIENT_ACCOUNT
WHERE CLIENT_ID > '1'
and ACCOUNT_ID > '2'
AND ROWNUM < 1000;
It would at least compile, although whether it correctly implements your business logic is a different matter. Note that I have cast your filter criteria to strings. This is because your columns have a string datatype and you are defaulting them to spaces, so there's a high probability those columns contain non-numeric values.
If this doesn't solve your problem, please edit your question with more details; sample input data and expected output is always helpful in these situations.
Your data model seems odd.
Your columns are defined as varchar2, so why are your criteria numeric?
Also, why do you default the key columns to a space? It would be better to leave unpopulated values as null. (To be clear, NULL is not a good thing in an indexed column; it's just better than a space.)
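If the goal is to walk the whole table in key order, a common pattern is keyset (seek) pagination on the unique index. This is only a sketch: it assumes Oracle 12c or later for FETCH FIRST, and :last_client / :last_account are bind variables holding the last key of the previous batch.
select *
from CLIENT_ACCOUNT
where CLIENT_ID > :last_client
   or (CLIENT_ID = :last_client and ACCOUNT_ID > :last_account)
order by CLIENT_ID, ACCOUNT_ID
fetch first 1000 rows only;
Each batch starts where the previous one left off, so the index on (CLIENT_ID, ACCOUNT_ID) can be used to seek directly to the start position instead of scanning past already-read rows.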

Creating a LINQ query which results in only 1 query execution

I have the following two tables:
* tableA
* tableB
TableA has a 1-to-many relationship with TableB, and therefore TableB has a column with a foreign key to TableA.
TableA will be used to create a C# class called ClassA, which has a bool property called HasRecords indicating whether it has related records in TableB.
I could create it like this:
TableA.Select(s => new ClassA {
    HasRecords = s.TableB.Any()
});
Will this create one query or multiple ones because of the Any() call? Is there another way to do this inline without triggering multiple query executions, or do I have to resort to a table-valued function?
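For reference, a LINQ provider such as Entity Framework will typically translate a projection like this into a single SQL statement with a correlated EXISTS, roughly along these lines. This is only a sketch of the idea: the key column names Id and TableAId are assumptions, and the exact SQL depends on the provider and version.
SELECT CASE WHEN EXISTS (
           SELECT 1 FROM TableB b WHERE b.TableAId = a.Id
       ) THEN 1 ELSE 0 END AS HasRecords
FROM TableA a;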

Unique constraint with null columns (Hibernate, PostgreSQL)

I have a class like Clazz:
@Table(
    name = "tablename",
    uniqueConstraints =
        @UniqueConstraint(
            name = "uniqueColumn_deleted_uk",
            columnNames = {"myuniquecolumn", "deleted"}
        )
)
public class Clazz {
    @Column(name = "deleted")
    private LocalDateTime deleted;
}
deleted is nullable. PostgreSQL creates a unique index like
CREATE UNIQUE INDEX uniqueColumn_date_uk ON public.tablename (short_code_3, deleted);
and it allows inserting duplicate myuniquecolumn values when deleted is NULL.
How can I prevent this?
I want no duplicates when deleted is null.
You should create two partial unique indexes:
create unique index on public.tablename (short_code_3, deleted) where deleted is not null;
create unique index on public.tablename (short_code_3) where deleted is null;
(I don't know how to do it in your ORM).
This is not possible with a plain unique constraint, because null never equals null.
Read more about null values in SQL: https://en.wikipedia.org/wiki/Null_(SQL)
If you want to have the deleted column in the unique index you must provide a default value for that column.
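A hedged sketch of that approach (the sentinel date is purely illustrative, and any existing NULLs would have to be backfilled before SET NOT NULL succeeds):
ALTER TABLE public.tablename
    ALTER COLUMN deleted SET DEFAULT '1970-01-01 00:00:00',
    ALTER COLUMN deleted SET NOT NULL;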
Two partial indexes like klin provided are best practice up to Postgres 14.
Postgres 15 adds NULLS NOT DISTINCT for this purpose:
CREATE UNIQUE INDEX foo_idx ON public.tbl (short_code_3, deleted) NULLS NOT DISTINCT;
See:
Create unique constraint with null columns
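If you prefer a declared constraint over an index, the same clause can be used there as well (assuming Postgres 15+; the constraint name is illustrative):
ALTER TABLE public.tablename
    ADD CONSTRAINT uniquecolumn_deleted_uk
    UNIQUE NULLS NOT DISTINCT (short_code_3, deleted);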

Why does SQLite not use an index for queries on my many-to-many relation table?

It's been a while since I've written code, and I've never used SQLite before, but many-to-many relationships are so fundamental that there must be a way to make them fast...
This is an abstracted version of my database:
CREATE TABLE a (_id INTEGER PRIMARY KEY, a1 TEXT NOT NULL);
CREATE TABLE b (_id INTEGER PRIMARY KEY, fk INTEGER NOT NULL REFERENCES a(_id));
CREATE TABLE d (_id INTEGER PRIMARY KEY, d1 TEXT NOT NULL);
CREATE TABLE c (_id INTEGER PRIMARY KEY, fk INTEGER NOT NULL REFERENCES d(_id));
CREATE TABLE b2c (fk_b NOT NULL REFERENCES b(_id), fk_c NOT NULL REFERENCES c(_id), CONSTRAINT PK_b2c_desc PRIMARY KEY (fk_b, fk_c DESC), CONSTRAINT PK_b2c_asc UNIQUE (fk_b, fk_c ASC));
CREATE INDEX a_a1 on a(a1);
CREATE INDEX a_id_and_a1 on a(_id, a1);
CREATE INDEX b_fk on b(fk);
CREATE INDEX b_id_and_fk on b(_id, fk);
CREATE INDEX c_id_and_fk on c(_id, fk);
CREATE INDEX c_fk on c(fk);
CREATE INDEX d_id_and_d1 on d(_id, d1);
CREATE INDEX d_d1 on d(d1);
I have put in every index I could think of, just to make sure (more than is reasonable, but not a problem, since the data is read-only). And yet on this query
SELECT count(*)
FROM a, b, b2c, c, d
WHERE a.a1 = 'A'
AND a._id = b.fk
AND b._id = b2c.fk_b
AND c._id = b2c.fk_c
AND d._id = c.fk
AND d.d1 = 'D';
the relation table b2c does not use any indexes:
0|0|2|SCAN TABLE b2c
0|1|1|SEARCH TABLE b USING INTEGER PRIMARY KEY (rowid=?)
0|2|0|SEARCH TABLE a USING INTEGER PRIMARY KEY (rowid=?)
0|3|3|SEARCH TABLE c USING INTEGER PRIMARY KEY (rowid=?)
0|4|4|SEARCH TABLE d USING INTEGER PRIMARY KEY (rowid=?)
The query is about two orders of magnitude too slow to be usable. Is there any way to make SQLite use an index on b2c?
Thanks!
In a nested loop join, the outermost table does not use an index for the join (because the database just goes through all rows anyway).
To be able to use an index for a join, the index and the other column must have the same affinity, which usually means that both columns must have the same type.
Change the types of the b2c columns to INTEGER.
If the lookups on a1 or d1 are very selective, using a or d as the outermost table might make sense, and would then allow an index to be used for the filter.
Try running ANALYZE.
If that does not help, you can force the join order with CROSS JOIN or INDEXED BY.
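For the type change suggested above, a simplified sketch of the junction table with explicit INTEGER affinity (so the indexed columns match the INTEGER primary keys they join against) might look like this; the ASC/DESC and redundant UNIQUE variants from the original DDL are omitted:
CREATE TABLE b2c (
    fk_b INTEGER NOT NULL REFERENCES b(_id),
    fk_c INTEGER NOT NULL REFERENCES c(_id),
    CONSTRAINT PK_b2c PRIMARY KEY (fk_b, fk_c)
);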

Get primary key column of views

Is there a way to retrieve the list of views along with the primary key column name, if the view is created with the primary key column of its underlying table?
E.g.:
Employee(ID PRIMARY KEY, FIRST NAME, LAST NAME, SALARY, DEPARTMENT)
The view derived from Employee table:
EMPLOYEEVIEW(ID, FIRST NAME, LAST NAME)
EMPLOYEEVIEW satisfies my constraint. I need to get these kinds of views.
The desired result is something like EMPLOYEEVIEW ID.
To fetch the primary key constraints of the tables in the current schema, you can use this query:
select *
from user_constraints
where constraint_type = 'P'
So to search your view for primary key columns, I'd use a query like this:
select *
from user_views v
join user_constraints c on upper(v.text) like '%'||c.table_name||'%'
where c.constraint_type = 'P'
and v.view_name = 'YOUR_VIEW_NAME'
Unfortunately, the text column in the user_views view has the horrible datatype LONG, so you will need to create your own function (or find one online) to convert the LONG to VARCHAR2, so you can use UPPER() and LIKE on it.
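A minimal sketch of such a helper (the function name and the 4000-character cap are illustrative; it assumes the view text fits into a PL/SQL LONG variable):
CREATE OR REPLACE FUNCTION view_text_vc(p_view_name IN VARCHAR2)
    RETURN VARCHAR2
IS
    v_text LONG;
BEGIN
    -- Inside PL/SQL a LONG variable behaves like a VARCHAR2(32760),
    -- so SUBSTR can be applied to it.
    SELECT text INTO v_text
    FROM user_views
    WHERE view_name = p_view_name;

    RETURN SUBSTR(v_text, 1, 4000);
END;
/
With something like that in place, the join condition above could use upper(view_text_vc(v.view_name)) instead of upper(v.text).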
