How can I create a double-sided composite primary key in Oracle?

I have a table that has a many-to-many relationship with itself. So I created another (second) table that stores the two ids of the composite primary key, both of which come from the original (first) table. But if the second table contains id1=1 and id2=2, then it should not also contain id1=2 and id2=1.
How can I do that? Should I write a trigger, or is there a simpler way in Oracle?
I use Oracle 11g and PL/SQL Developer.

You can try defining a unique function-based index that automatically indexes the pair in numerical order. This ensures that only one of the two combinations is ever allowed. Something like:
create unique index your_index on your_table (
  least(id1, id2),
  greatest(id1, id2)
);
If it matters, there is a slight difference between this approach and MT0's answer below, which uses a check constraint.
With the check-constraint approach, only (id1=1, id2=2) is valid.
With the function-based index approach, both (id1=1, id2=2) and (id1=2, id2=1) are valid, but they can't both be present in the table at the same time.
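For example (a hedged illustration using the your_table/your_index placeholders above), the second insert below fails because least/greatest map both rows to the same indexed key (1, 2):
insert into your_table (id1, id2) values (1, 2);
insert into your_table (id1, id2) values (2, 1); -- ORA-00001: unique constraint (YOUR_INDEX) violated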

CREATE TABLE table_name (
  id INT PRIMARY KEY
);
CREATE TABLE table_name_many_to_many (
  id1 INT REFERENCES table_name ( id ),
  id2 INT REFERENCES table_name ( id ),
  CONSTRAINT tnmtm__id1__id2__pk PRIMARY KEY ( id1, id2 ),
  CONSTRAINT tnmtm__id1__id2__chk CHECK ( id1 <= id2 )
);
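As a hedged illustration against this DDL: with the CHECK ( id1 <= id2 ) constraint the pair must be stored in sorted order, so (after creating parent rows 1 and 2 in table_name) an unsorted insert is rejected outright:
INSERT INTO table_name (id) VALUES (1);
INSERT INTO table_name (id) VALUES (2);
INSERT INTO table_name_many_to_many (id1, id2) VALUES (1, 2); -- OK
INSERT INTO table_name_many_to_many (id1, id2) VALUES (2, 1); -- ORA-02290: check constraint (TNMTM__ID1__ID2__CHK) violated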

Related

How to implement bidirectional relationship in Spring Spanner?

In code, I tried #Interleaved on the non-owning side of a 1-many relationship to get the child list. Could anyone help with the questions below:
How to implement a bidirectional relationship (e.g. get the parent from the child) for 1-1 and 1-many relationships?
Regarding the many-many relationship, what are the best practices to implement it, and how to implement a bidirectional relationship for it?
Thank you very much.
Cloud Spanner currently doesn't offer a way to enforce foreign-key constraints between non-interleaved tables. You will have to enforce such constraints in your application logic. You could use DML statements in Cloud Spanner (which come with the ability to read your writes within a Cloud Spanner transaction) to enforce these constraints at insert time by inserting into your tables as follows:
INSERT INTO Referenced (key1, value1) VALUES ('Referenced', 'Value1');
INSERT INTO Referencing (key2, value2, key1)
SELECT 'Referencing', 'Value2', key1
FROM Referenced
WHERE key1 = 'Referenced';
Running the two statements in a read-write transaction ensures that the PK-FK relationship between the Referenced and Referencing tables is maintained at insert time. You may have to similarly modify update requests/SQL UPDATE statements in your application logic to enforce the PK-FK constraint for updates.
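A minimal sketch (not from the original answer) of such an update-time guard, using the same Referenced/Referencing tables and a hypothetical new key value, run inside a read-write transaction so the existence check and the update are atomic:
UPDATE Referencing
SET key1 = 'NewReferenced'
WHERE key2 = 'Referencing'
  AND EXISTS (SELECT 1 FROM Referenced WHERE key1 = 'NewReferenced'); -- 'NewReferenced' is an illustrative key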
For a 1-many relationship using interleaved tables, the child row's primary key already contains the primary key of its parent, so it is trivial to get the parent row.
CREATE TABLE parent (
  parent_key INT64 NOT NULL,
  ...
) PRIMARY KEY (parent_key);
CREATE TABLE child (
  parent_key INT64 NOT NULL,
  child_key INT64 NOT NULL,
  ...
) PRIMARY KEY (parent_key, child_key),
  INTERLEAVE IN PARENT parent ON DELETE CASCADE;
If for some reason you do not have the key of the parent, and only the key of the child, then for efficiency you would need to create an index for the reverse lookup:
CREATE INDEX child_to_parent_index
ON child (
  child_key
);
and force use of that index when performing the query for the parent:
SELECT p.*
FROM parent AS p
JOIN child@{FORCE_INDEX=child_to_parent_index} AS c
  ON p.parent_key = c.parent_key
WHERE c.child_key = @CHILD_KEY_VALUE;
Many-many relationships would have to be implemented using a 'mapping' table linking table1 keys to table2 keys.
You will also need a top-level index for efficient reverse lookups, and use the FORCE_INDEX directive as above in your queries.
And as @adi mentioned, foreign key constraints would have to be enforced by the application.
CREATE TABLE table1 (
  table1_key INT64 NOT NULL,
  ...
) PRIMARY KEY (table1_key);
CREATE TABLE table2 (
  table2_key INT64 NOT NULL,
  ...
) PRIMARY KEY (table2_key);
CREATE TABLE table1_table2_map (
  table1_key INT64 NOT NULL,
  table2_key INT64 NOT NULL
) PRIMARY KEY (table1_key, table2_key);
CREATE INDEX table2_table1_map_index
ON table1_table2_map (
  table2_key
) STORING (
  table1_key
);
Your application would be responsible for keeping the referential integrity of the mapping table - deleting the mapping rows when rows in table1 or table2 are deleted.
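For example, a hedged sketch of that application-side cleanup (the @table1_key parameter name is illustrative), run in a single read-write transaction:
DELETE FROM table1_table2_map WHERE table1_key = @table1_key; -- remove the mapping rows first
DELETE FROM table1 WHERE table1_key = @table1_key;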
If you want to use interleaved tables and your application needs to perform bi-directional lookups, you may have to create two mapping tables - one as a child of each parent - so that finding the mappings from both directions is equally efficient.
CREATE TABLE table1 (
  table1_key INT64 NOT NULL,
  ...
) PRIMARY KEY (table1_key);
CREATE TABLE table2 (
  table2_key INT64 NOT NULL,
  ...
) PRIMARY KEY (table2_key);
CREATE TABLE table1_table2_map (
  table1_key INT64 NOT NULL,
  table2_key INT64 NOT NULL
) PRIMARY KEY (table1_key, table2_key),
  INTERLEAVE IN PARENT table1 ON DELETE CASCADE;
CREATE TABLE table2_table1_map (
  table2_key INT64 NOT NULL,
  table1_key INT64 NOT NULL
) PRIMARY KEY (table2_key, table1_key),
  INTERLEAVE IN PARENT table2 ON DELETE CASCADE;
Note that the application needs to keep both of these mapping tables up to date - i.e. when deleting a row from table1, the application has to get the referenced table2_key values and delete the mappings from table2_table1_map (and vice versa).
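A minimal sketch of that double maintenance (again with an illustrative @table1_key parameter), run in one read-write transaction; the ON DELETE CASCADE takes care of table1_table2_map when the table1 row goes away, but the reverse map has to be cleaned up explicitly:
DELETE FROM table2_table1_map WHERE table1_key = @table1_key; -- clean up the reverse map
DELETE FROM table1 WHERE table1_key = @table1_key;            -- cascades to table1_table2_map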

How to add a composite foreign key in Liquibase?

I've been struggling for some time now to figure out a way to create a composite foreign key in Liquibase.
I have a table A which has a composite PK, let's say (id1, id2). I'm trying to make another table B, in which A's PK is mapped as a FK.
I'm using Liquibase with YAML and something doesn't seem to add up.
I've tried adding the FK when creating the table (so in the column tag):
- column:
    name: id1_id2
    type: int
    constraints:
      nullable: false
      foreignKeyName: fk_id1_id2
      references: A(id1, id2)
Unfortunately this syntax returns an error:
Caused by: java.sql.SQLSyntaxErrorException: ORA-02256: number of referencing columns must match referenced columns
Another thing that I've tried is creating the table first, with the column for the desired FK, and then adding a FK constraint on that column. This doesn't throw any error, but it does nothing (the LB log also says "empty" in the description):
changes:
  - addForeignKeyContraint:
      baseColumnNames: id1, id2
      baseTableName: B
      constraintName: fb_id1_id2
      referencedColumnNames: id1, id2
      referencedTableName: A
Any help would be much appreciated.
Thanks
Have you tried addForeignKeyConstraint? Note the spelling - your second attempt uses addForeignKeyContraint, which would explain why it did nothing. Something like this:
- changeSet:
    id: 1
    author: you
    changes:
      - addForeignKeyConstraint:
          baseColumnNames: id1, id2
          baseTableName: tableB
          constraintName: FK_tableB_tableA
          referencedColumnNames: id1, id2
          referencedTableName: tableA
I don't use Liquibase, but here's how it is supposed to look as far as Oracle is concerned: if you want to create a composite foreign key (in the detail table), then it has to reference a composite primary key (in the master table).
Have a look at this example:
SQL> create table master
2 (id_1 number,
3 id_2 number,
4 constraint pk_mas primary key (id_1, id_2));
Table created.
SQL> create table detail
2 (id_det number constraint pk_det primary key,
3 --
4 id_1 number,
5 id_2 number,
6 constraint fk_det_mas foreign key (id_1, id_2) references master (id_1, id_2));
Table created.
SQL>
It just wouldn't work otherwise; that's why you got the error
ORA-02256: number of referencing columns must match referenced columns
because your detail table contained a single column (id1_id2) and tried to reference two columns in table A (id1, id2).
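For illustration (a hedged sketch reusing the names from the question), the constraint the original changeset effectively asked Oracle to create looks like this, and it is exactly what ORA-02256 rejects:
ALTER TABLE B ADD CONSTRAINT fk_id1_id2
  FOREIGN KEY (id1_id2) REFERENCES A (id1, id2); -- one referencing column, two referenced columns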

Partitioning a table referencing another

I have a target table T1 which doesn't have a date field. Its size is increasing rapidly, hence I need to add a date field and also partition this target table.
T1 has PRIMARY KEY (DOCID, LABID) and a FOREIGN KEY constraint on (DOCID) referencing T2.
Table T2 is also a complex table and has many rules in it.
T2 has PRIMARY KEY (DOCID).
My question is: since I need to partition T1, is it possible NOT to perform any step on T2 before T1 is partitioned? The DBA told me that I need to partition T2 before I touch T1.
There is no need to partition T2 before partitioning T1. Foreign key constraints do not care in the slightest bit about partitioning.
Best of luck.
You have, as proposed by others, two options. The first one is to add a redundant DATE column to the child table T1 and introduce range partitioning on this column.
The second option is to use reference partitioning. Below is the simplified DDL for both options.
Range partitioning on the child
create table T2_P2 /* parent */
( docid      number not null,
  trans_date date not null,
  pad        varchar2(100),
  CONSTRAINT t2_p2_pk PRIMARY KEY (docid)
);
create table T1_P2 /* child */
( docid      number not null,
  labid      number not null,
  trans_date date not null, /* redundant column */
  pad        varchar2(100),
  CONSTRAINT t1_p2_pk PRIMARY KEY (docid, labid),
  CONSTRAINT t1_p2_fk FOREIGN KEY (docid) REFERENCES T2_P2 (docid)
)
PARTITION BY RANGE (trans_date)
( PARTITION Q1_2016 VALUES LESS THAN (TO_DATE('01-APR-2016','DD-MON-YYYY'))
);
Reference partitioning
create table T2_RP /* parent */
( docid      number not null,
  trans_date date not null,
  pad        varchar2(100),
  CONSTRAINT t2_rp_pk PRIMARY KEY (docid)
)
PARTITION BY RANGE (trans_date)
( PARTITION Q1_2016 VALUES LESS THAN (TO_DATE('01-APR-2016','DD-MON-YYYY'))
);
create table T1_RP /* child */
( docid number not null,
  labid number not null,
  pad   varchar2(100),
  CONSTRAINT t1_rp_pk PRIMARY KEY (docid, labid),
  CONSTRAINT t1_rp_fk FOREIGN KEY (docid) REFERENCES T2_RP (docid)
)
PARTITION BY REFERENCE (t1_rp_fk);
Your question is basically whether the first option is possible, so the answer is YES.
To decide whether the first option is preferable, I'd suggest checking three criteria:
Migration
The first option requires a new DATE column in the child table that must be initialized during the migration (and of course correctly maintained by the application afterwards).
Lifecycle
It could be that the lifecycle of both tables is the same (e.g. both parent and child records are kept for 7 years). In that case it is preferable for both tables to be partitioned (on the same key).
Access
For queries such as the one below you will profit from reference partitioning (both tables are pruned, i.e. only the partitions containing the accessed date are queried).
select *
  from T2_RP t2
  join T1_RP t1 on t2.docid = t1.docid
 where t2.trans_date = to_date('01012016','ddmmyyyy');
With the first option you will (probably) end up with a full table scan on T2, and to get pruning on T1 you need to add the predicate t1.trans_date = t2.trans_date, as shown below.
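A hedged sketch against the T1_P2/T2_P2 tables above - repeating the date predicate on the child's redundant column lets the optimizer prune T1_P2 as well:
select *
  from T2_P2 t2
  join T1_P2 t1 on t2.docid = t1.docid
 where t2.trans_date = to_date('01012016','ddmmyyyy')
   and t1.trans_date = t2.trans_date;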
Having said that, I do not claim that reference partitioning is superior. But I think it's worth examining both options in your context and seeing which one is better.

Unique case-insensitive constraint in an Oracle database

I have a varchar column in my table for a URL value. I have to make it unique across records, case-insensitively.
I found two ways to achieve it.
Create a unique index on the field:
create unique index <index_name> on <tablename>(lower(<column_name>))
Add a unique constraint on the field:
ALTER TABLE person ADD CONSTRAINT person_name_unique
UNIQUE(LOWER(first_name),LOWER(last_name));
Which of the above choices is the more efficient one to adopt?
The more efficient approach is the first one. It's more efficient, though, only because the latter syntax doesn't work: you cannot, unfortunately, create a function-based unique constraint in the same way that you can create a unique index.
A unique constraint doesn't work
SQL> create table person (
2 first_name varchar2(10),
3 last_name varchar2(10)
4 );
Table created.
SQL> ALTER TABLE person ADD CONSTRAINT person_name_unique
2 UNIQUE(LOWER(first_name),LOWER(last_name));
UNIQUE(LOWER(first_name),LOWER(last_name))
*
ERROR at line 2:
ORA-00904: : invalid identifier
A unique function-based index, however, does work
SQL> create unique index idx_uniq_name
2 on person( lower(first_name), lower(last_name) );
Index created.
Option 1 is possible and raises an error for duplicates.
Option 2 is not possible (a function cannot be used in a unique constraint).

Delete Cascade with Script

I have 3 tables, which are not created with the ON DELETE CASCADE option, nor is it an option to create them that way.
I may need to delete from all three tables in succession. Is there a way to do this using only the promotion_id as a key? Because I need to delete in reverse order, the promotion_id is gone by the time I get to the dependent tables.
I am thinking that the only way to do this is to SELECT the keys of the 3 tables using a JOIN, and then use them individually. But it would be nice if there was a pure SQL solution to it.
I am using JDBC, Spring, and Oracle. Thanks.
create table test_rates (
  rate_id varchar2(10) primary key,
  rate    number
);
create table test_offers (
  offer_id varchar2(10) primary key,
  rate_id  varchar2(10),
  foreign key (rate_id) references test_rates (rate_id)
);
create table test_promotions (
  promotion_id varchar2(10) primary key,
  offer_id     varchar2(10),
  foreign key (offer_id) references test_offers (offer_id)
);
insert into test_rates (rate_id, rate) values (1, 199);
insert into test_offers (offer_id, rate_id) values (11, 1);
insert into test_promotions (promotion_id, offer_id) values (21, 11);
commit;
delete from test_promotions where promotion_id = 21;
delete from test_offers where offer_id in (select offer_id from test_promotions where promotion_id = 21); -- the key is gone by now
In the common case, when there are N promotions (N>1) for a single offer, it wouldn't make sense to delete the offer when only one promotion is deleted: you would end up with orphaned promotions.
If you want to delete a rate, it would make sense to start with deleting all child promotions, then all child offers, and then delete the rate. In that case, the rate_id can be used all along the way, as sketched below.
If you delete a child record, there's no need to delete the parent record, unless of course that is a requirement, in which case start by looking up the parent id and see above.
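A hedged sketch of that top-down delete against the test tables from the question, keyed off rate_id only (here rate_id = 1):
delete from test_promotions
 where offer_id in (select offer_id from test_offers where rate_id = 1);
delete from test_offers where rate_id = 1;
delete from test_rates  where rate_id = 1;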
