Insert content of table variable into table - oracle

I have the following table, two types based on it, and a function that reads from this table:
CREATE TABLE myTable (
ID RAW(16) NULL,
NAME NVARCHAR2(200) NULL,
ENTITYID RAW(16) NOT NULL
);
CREATE TYPE myRowType AS OBJECT (
NAME NVARCHAR2(200),
ENTITYID RAW(16)
);
CREATE TYPE myTableType IS TABLE OF myRowType;
CREATE FUNCTION myFunction(...) RETURN myTableType ...
As you can see, the type myRowType is similar to myTable, but not identical.
My goal is to insert rows into myTable based on the results of myFunction.
The naive approach would be to just write:
INSERT INTO myTable(ID, NAME, ENTITYID)
SELECT sys_guid(), NAME, ENTITYID
FROM TABLE(myFunction(...));
But since myFunction reads from myTable, this leads to the following error:
ORA-04091: table myTable is mutating, trigger/function may not see it
So I have to split the myFunction call from the insert statement. I tried it like this:
DECLARE
tbl myTableType;
BEGIN
SELECT myRowType(x.NAME, x.ENTITYID)
BULK COLLECT INTO tbl
FROM TABLE(myFunction(...)) x;
INSERT INTO myTable
(ID, NAME, ENTITYID)
SELECT sys_guid(), x.NAME, x.ENTITYID
FROM tbl x;
END;
But here, Oracle doesn't seem to understand the FROM tbl clause. It shows the error
ORA-00942: table or view does not exist
How can I insert the rows in tbl into myTable?

Since you can't use a locally declared nested table variable as an argument to the TABLE function, maybe you would consider using a FORALL bulk insert instead? I see you are using Oracle 11g, so you will be able to access the fields of myRowType inside the FORALL. You would then replace the INSERT in your PL/SQL block with this:
FORALL v_i IN tbl.FIRST..tbl.LAST
INSERT INTO myTable VALUES (sys_guid(), tbl(v_i).name, tbl(v_i).entityid);
I recommend this great article by Tim Hall: BULK COLLECT & FORALL
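Putting it together with the BULK COLLECT from your block, the whole thing would look roughly like this (a sketch: the myFunction arguments stay elided as in your question, and 1 .. tbl.COUNT is used so an empty result simply inserts nothing):
DECLARE
  tbl myTableType;
BEGIN
  SELECT myRowType(x.NAME, x.ENTITYID)
  BULK COLLECT INTO tbl
  FROM TABLE(myFunction(...)) x;

  -- FORALL sends the whole collection to the SQL engine in one round trip
  FORALL v_i IN 1 .. tbl.COUNT
    INSERT INTO myTable (ID, NAME, ENTITYID)
    VALUES (sys_guid(), tbl(v_i).NAME, tbl(v_i).ENTITYID);
END;
/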

Related

How can I handle uniqueness in this situation?

I have a table like this:
create table my_table
(
type1 varchar2(10 char),
type2 varchar2(10 char)
);
I want uniqueness rules like this:
If the type1 column has the value 'GENERIC', then only the type2 column must be unique across the table. For example:
if type1 is 'GENERIC' and type2 is 'value_x', then no other row may have a type2 value equal to 'value_x'.
Otherwise uniqueness applies to both columns together, i.e. rows must be unique by (type1, type2). (The first rule still applies, of course.)
I tried to do it with a trigger:
CREATE OR REPLACE trigger my_trigger
BEFORE INSERT OR UPDATE
ON my_table
FOR EACH ROW
DECLARE
lvn_count NUMBER :=0;
lvn_count2 NUMBER :=0;
errormessage clob;
MUST_ACCUR_ONE EXCEPTION;
-- PRAGMA AUTONOMOUS_TRANSACTION; -- without this it gives a mutating table error, but I can't use it because it would conflict with simultaneous connections
BEGIN
IF :NEW.type1 = 'GENERIC' THEN
SELECT count(1) INTO lvn_count FROM my_table
WHERE type2= :NEW.type2;
ELSE
SELECT count(1) INTO lvn_count2 FROM my_table
WHERE type1= :NEW.type1 and type2= :NEW.type2;
END IF;
IF (lvn_count >= 1 or lvn_count2 >= 1) THEN
RAISE MUST_ACCUR_ONE;
END IF;
END;
But it gives the mutating table error without the pragma, and I do not want to use the pragma because of conflicts between simultaneous connections. (The error occurs because the trigger queries the same table it is defined on.)
I also tried to do it with a unique index, but I couldn't manage it.
CREATE UNIQUE INDEX my_table_unique_ix
ON my_table (case when type1= 'GENERIC' then 'some_logic_here' else type1 end, type2); -- I know this does not make sense, but maybe there is something different I can use here.
Examples:
Example 1:
insert into my_table (type1,type2) values ('a','b'); -- OK, no problem
insert into my_table (type1,type2) values ('a','c'); -- OK, no problem
insert into my_table (type1,type2) values ('c','b'); -- OK, no problem
insert into my_table (type1,type2) values ('GENERIC','b'); -- should be an error because 'b' already exists (I only look at the second column because the first column value is 'GENERIC')
Example 2:
insert into my_table (type1,type2) values ('GENERIC','b'); -- OK, no problem
insert into my_table (type1,type2) values ('a','c'); -- OK, no problem
insert into my_table (type1,type2) values ('d','c'); -- OK, no problem
insert into my_table (type1,type2) values ('d','b'); -- should be an error because the second column must not equal a type2 value from a row whose type1 value is 'GENERIC'
What you're trying to do is not really straightforward in Oracle. One possible (although somewhat cumbersome) approach is to use a combination of
an additional materialized view with REFRESH ON COMMIT
a windowing function to compute the number of distinct type1 values per type2 group
a windowing function to compute the number of GENERIC rows per type2 group
a check constraint to ensure that each group either has only one distinct type1 value or contains no GENERIC row at all
This should work:
create materialized view mv_my_table
refresh on commit
as
select
type1,
type2,
count(distinct type1) over (partition by type2) as distinct_type1_cnt,
count(case when type1 = 'GENERIC' then 1 else null end)
over (partition by type2) as generic_cnt
from my_table;
alter table mv_my_table add constraint chk_type1
CHECK (distinct_Type1_cnt = 1 or generic_cnt = 0);
Now, INSERTing a duplicate won't fail immediately, but the subsequent COMMIT will fail because it triggers the materialized view refresh, and that refresh violates the check constraint.
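For illustration, re-running the question's Example 1 against this setup would go roughly like this (a sketch; the exact error stack varies by Oracle version, and SCOTT.CHK_TYPE1 is just a placeholder owner/constraint name):
insert into my_table (type1,type2) values ('a','b');       -- succeeds
insert into my_table (type1,type2) values ('GENERIC','b'); -- also succeeds, for now
commit;
-- ORA-12008: error in materialized view refresh path
-- ORA-02290: check constraint (SCOTT.CHK_TYPE1) violated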
Disadvantages
duplicate INSERTs won't fail immediately (making debugging more painful)
depending on the size of your table, the MView refresh might slow down COMMITs considerably
Links
For a more detailed discussion of this approach, see AskTom on cross-row constraints
Try it like this:
CREATE TABLE my_table (
type1 VARCHAR2(10 CHAR),
type2 VARCHAR2(10 CHAR),
type1_unique VARCHAR2(10 CHAR) GENERATED ALWAYS AS ( NULLIF(type1, 'GENERIC') ) VIRTUAL
);
ALTER TABLE my_table ADD (CONSTRAINT my_table_unique_ix UNIQUE (type1_unique, type2) USING INDEX);
Or an index like this should also work:
CREATE UNIQUE INDEX my_table_unique_ix ON my_table (NULLIF(type1, 'GENERIC'), type2);
Or doing it in your style (you just need NULL instead of 'some_logic_here'):
CREATE UNIQUE INDEX my_table_unique_ix ON my_table (case when type1 = 'GENERIC' then null else type1 end, type2);
Unless I'm missing something obvious, the logic in the answer from @Frank Schmitt can also be implemented using a statement-level trigger. It is a lot simpler to implement and does not have the disadvantages that Frank mentions.
create or replace TRIGGER my_table_t
AFTER INSERT OR UPDATE OR DELETE
ON my_table
DECLARE
l_dummy NUMBER;
MUST_ACCUR_ONE EXCEPTION;
BEGIN
WITH constraint_violated AS
(
select
type1,
type2,
count(distinct type1) over (partition by type2) as distinct_type1_cnt,
count(case when type1 = 'GENERIC' then 1 else null end)
over (partition by type2) as generic_cnt
from my_table
)
SELECT 1 INTO l_dummy
FROM constraint_violated
WHERE NOT (distinct_type1_cnt = 1 or generic_cnt = 0) FETCH FIRST 1 ROWS ONLY;
RAISE MUST_ACCUR_ONE;
EXCEPTION WHEN NO_DATA_FOUND THEN
NULL;
END;
/
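As a quick illustration (a sketch, not a real transcript): with this trigger in place, the question's Example 1 behaves as required, because MUST_ACCUR_ONE propagates out of the trigger as an unhandled user-defined exception and aborts the offending statement.
insert into my_table (type1,type2) values ('a','b');       -- succeeds
insert into my_table (type1,type2) values ('GENERIC','b'); -- fails: the trigger raises MUST_ACCUR_ONE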

How to use one SQL statement to insert data into two tables?

I have two tables, and they are connected by one field: B_ID of table A and id of table B.
I want to use SQL to insert data into these two tables.
How do I write the insert SQL?
1. The id in table B is auto-incremented.
2. In a clumsy way, I could insert data into table B first, then select the id from table B, and then add that id to table A as message_id.
You can't populate both tables with a single INSERT statement here. Just insert the data into table B first and then into table A. You can use the RETURNING clause to get the generated ID value and get rid of the additional select statement between the inserts.
See: https://oracle-base.com/articles/misc/dml-returning-into-clause
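A minimal sketch of that approach, assuming hypothetical tables table_b (with an automatically populated id) and table_a (with a message_id column); adjust the names and column lists to your schema:
DECLARE
  v_id table_b.id%TYPE;
BEGIN
  -- insert the parent row and capture the generated id in one statement
  INSERT INTO table_b (some_column)
  VALUES ('some value')
  RETURNING id INTO v_id;

  -- use the captured id for the dependent row
  INSERT INTO table_a (message_id, other_column)
  VALUES (v_id, 'some other value');
END;
/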
Have you heard about AFTER INSERT triggers? I think that is what you are looking for.
Something like this might do what you want:
CREATE OR REPLACE TRIGGER TableB_after_insert
AFTER INSERT
ON TableB
FOR EACH ROW
BEGIN
  -- :NEW.id already holds the id of the TableB row, assuming the
  -- auto-increment value is assigned at insert time (identity column
  -- or a BEFORE INSERT sequence trigger), so there is no need to
  -- select it back from TableB (which would raise a mutating table
  -- error in a row-level trigger anyway).
  -- The TableA column list is illustrative; adjust it to your schema.
  INSERT INTO TableA (message_id)
  VALUES (:NEW.id);
END;
/
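With this trigger in place, a plain INSERT INTO TableB creates the matching TableA row automatically, so the application only ever issues one insert statement.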

SELECT from a bulk collection

Is it possible to select from a bulk collection?
Something along these lines:
DECLARE
CURSOR customer_cur IS
SELECT CustomerId,
CustomerName
FROM Customers
WHERE CustomerAreaCode = '576';
TYPE customer_table IS TABLE OF customer_cur%ROWTYPE;
my_customers customer_table;
BEGIN
OPEN customer_cur;
FETCH customer_cur
BULK COLLECT INTO my_customers;
-- This is what I would like to do
SELECT CustomerName
FROM my_customers
WHERE CustomerId IN (1, 2, 3);
END;
I don't seem to be able to select from the my_customers table.
Yes, you can. Declare schema-level types as follows:
create or replace type rec_customer_cur
as
object (
customerid integer, -- change to the actual type of customers.customerid
customername varchar2(100) -- change to the actual type of customers.customername
);
/
create or replace type customer_table
as
table of rec_customer_cur;
/
Then, in your PL/SQL code, you can declare
CURSOR customer_cur IS
SELECT new rec_customer_cur(CustomerId, CustomerName)
FROM Customers
WHERE CustomerAreaCode = '576';
... and then use ...
SELECT CustomerName
INTO whatever
FROM table(my_customers)
WHERE CustomerId IN (1, 2, 3);
This is because schema-level types can be used in SQL context.
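Put together, the adjusted version of the block from the question would look roughly like this (a sketch; the SELECT ... INTO assumes exactly one matching row, so BULK COLLECT into another collection if several customers can match):
DECLARE
  CURSOR customer_cur IS
    SELECT new rec_customer_cur(CustomerId, CustomerName)
    FROM Customers
    WHERE CustomerAreaCode = '576';
  my_customers customer_table;
  l_name VARCHAR2(100);
BEGIN
  OPEN customer_cur;
  FETCH customer_cur BULK COLLECT INTO my_customers;
  CLOSE customer_cur;

  -- TABLE() works here because customer_table is a schema-level type
  SELECT CustomerName
  INTO l_name
  FROM TABLE(my_customers)
  WHERE CustomerId IN (1, 2, 3);
END;
/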
If you also want to display the data set returned by the select, just use a REF CURSOR as an OUT parameter.
A SELECT ... FROM needs a real database object as its table name; the original block throws an error because the collection variable is not a database table.
To return the data set, use a SYS_REFCURSOR OUT parameter and OPEN cur FOR SELECT ....
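A minimal sketch of the ref-cursor approach (the procedure and parameter names are made up for illustration):
CREATE OR REPLACE PROCEDURE get_customers (p_result OUT SYS_REFCURSOR) AS
BEGIN
  -- the caller fetches from p_result and sees the full result set
  OPEN p_result FOR
    SELECT CustomerId, CustomerName
    FROM Customers
    WHERE CustomerAreaCode = '576';
END;
/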

Oracle: insert from type table

I want to insert from a table type into a table.
Is there a way to do this with a bulk operation? And can I change the table type's content a little along the way?
Just like here, but the other way around:
How to insert data into a PL/SQL table type rather than PL/SQL table?
Assuming that you have something like
CREATE TYPE my_nested_table_type
AS TABLE OF <<something>>;
DECLARE
l_nt my_nested_table_type;
BEGIN
<<something that populates l_nt>>
then the way to do a bulk insert of the data from the collection into a heap-organized table would be to use a FORALL
FORALL i in 1..l_nt.count
INSERT INTO some_table( <<list of columns>> )
VALUES( l_nt(i).col1, l_nt(i).col2, ... , l_nt(i).colN );
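As for changing the content a little on the way in: you can tweak the collection with a plain loop before the FORALL (a sketch, assuming a col1 attribute as in the INSERT above):
FOR i IN 1 .. l_nt.COUNT LOOP
  -- example tweak: normalise col1 before the bulk insert
  l_nt(i).col1 := UPPER(l_nt(i).col1);
END LOOP;
-- ...then run the FORALL insert shown above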

Setting the primary key of a object type table in Oracle

I have two Oracle questions.
How can I set the primary key of a table when the table is made up of an object type? e.g.
CREATE TABLE object_names OF object_type
I have created a Varray type,
CREATE TYPE MULTI_TAG AS VARRAY(10) OF VARCHAR(10);
but when I try to do
SELECT p.tags.count FROM pg_photos p;
I get an invalid identifier error on the "count" part. p.tags is a MULTI_TAG, how can I get the number of elements in the MULTI_TAG?
First of all, I wouldn't recommend storing data in object tables. Objects are a great programmatic tool, but querying object tables leads to complicated SQL. I would advise storing your data in a standard relational model and using the objects in your procedures.
Now to answer your questions:
A primary key should be immutable, so most of the time an Object type is inappropriate for a primary key. You should define a surrogate key to reference your object.
You will have to convert the varray into a table to be able to query it from SQL
For example:
SQL> CREATE TYPE MULTI_TAG AS VARRAY(10) OF VARCHAR(10);
2 /
Type created
SQL> CREATE TABLE pg_photos (ID number, tags multi_tag);
Table created
SQL> INSERT INTO pg_photos VALUES (1, multi_tag('a','b','c'));
1 row inserted
SQL> INSERT INTO pg_photos VALUES (2, multi_tag('e','f','g'));
1 row inserted
SQL> SELECT p.id, COUNT(*)
2 FROM pg_photos p
3 CROSS JOIN TABLE(p.tags)
4 GROUP BY p.id;
ID COUNT(*)
---------- ----------
1 3
2 3
1)
A primary key is a constraint; to add constraints on object tables, check this link (a short sketch also follows below):
http://download-west.oracle.com/docs/cd/B28359_01/appdev.111/b28371/adobjdes.htm#i452285
2)
The COUNT method can't be used in a SQL statement (reference link in the comments).
So in my case I had to do
SELECT p.pid AS pid, count(*) AS num_tags FROM pg_photos p, TABLE(p.tags) t2 GROUP BY p.pid;
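Regarding the primary key question, here is a minimal sketch of a key on an object table, reusing the names from the question (the id and name attributes are made up for illustration):
CREATE TYPE object_type AS OBJECT (
  id   NUMBER,
  name VARCHAR2(50)
);
/
-- constraints on an object table are declared on its attributes,
-- just like columns of a relational table
CREATE TABLE object_names OF object_type (
  id CONSTRAINT object_names_pk PRIMARY KEY
);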
