I have 3 tables, TBL_A, TBL_B and TBL_C.
TBL_A has an ID (let's call it ID_A) that's the PK of A and an FK in B and C. ID_A is populated by a BEFORE INSERT trigger that auto-increments it from a sequence.
The problem:
I need to write an INSERT ALL that looks roughly like this:
INSERT ALL
INTO TBL_A(FIELD1, FIELD2) VALUES('VALUE1', 'VALUE2') -- as I said before, the trigger inserts the ID using MY_SEQ.NEXTVAL
INTO TBL_B(ID_A, FIELD) VALUES (MY_SEQ.CURRVAL, 'VALUE')
INTO TBL_C(ID_A, FIELD) VALUES (MY_SEQ.CURRVAL, 'VALUE')
SELECT * FROM DUAL
But for some reason it says that "sequence MY_SEQ.CURRVAL is not yet defined in this session". What can I do to solve it? I can't find a way around it.
I need to keep the trigger because that insert can be huge.
Please help me, and sorry about my English.
This has worked fine for me:
INSERT ALL INTO TBL_A(FIELD1, FIELD2) VALUES('VALUE1', 'VALUE2')
INTO TBL_B(ID_A, FIELD) VALUES (MY_SEQ.CURRVAL, 'VALUE')
INTO TBL_C(ID_A, FIELD) VALUES (MY_SEQ.CURRVAL, 'VALUE')
SELECT * FROM DUAL;
Indeed, this only works once MY_SEQ.NEXTVAL has been called at least once in the session. After that, all other runs work fine without calling NEXTVAL first; the sequence just has to be properly initialized in the session.
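A minimal way to do that initialization once per session, before running the INSERT ALL above (assuming the sequence is named MY_SEQ as in the question):

-- Reference the sequence once so that CURRVAL becomes defined for this session
SELECT MY_SEQ.NEXTVAL FROM DUAL;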
INSERT ALL
INTO TBL_A(ID, FIELD1, FIELD2) VALUES(id, val1, val2)
INTO TBL_B(ID_A, FIELD) VALUES(id, val)
INTO TBL_C(ID_A, FIELD) VALUES(id, val)
SELECT
MY_SEQ.NEXTVAL as id, 'VALUE' as val, 'VALUE1' val1, 'VALUE2' val2
FROM DUAL
You can't use a sequence in the subquery of a multi-table insert, and you shouldn't use it in the VALUES clause like that because it can give unpredictable results.
You might consider populating the data into a global temporary table with the sequence and then doing your multitable insert using the global temporary table. Of course this would require modifying the trigger.
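A rough sketch of that idea, with placeholder table, column, and type names (the trigger on TBL_A would also need changing so that it keeps an explicitly supplied ID instead of overwriting it):

-- Staging table (placeholder names/types); created once, not per run
CREATE GLOBAL TEMPORARY TABLE stage_a (
  id_a   NUMBER,
  field1 VARCHAR2(100),
  field2 VARCHAR2(100),
  field  VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- Stage the rows, drawing the IDs from the sequence here rather than in the trigger
INSERT INTO stage_a (id_a, field1, field2, field)
SELECT my_seq.NEXTVAL, 'VALUE1', 'VALUE2', 'VALUE'
FROM dual;

-- Multi-table insert driven by the staged rows; no sequence reference needed here
INSERT ALL
  INTO tbl_a (id_a, field1, field2) VALUES (id_a, field1, field2)
  INTO tbl_b (id_a, field)          VALUES (id_a, field)
  INTO tbl_c (id_a, field)          VALUES (id_a, field)
SELECT id_a, field1, field2, field FROM stage_a;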
I'm facing a problem when I try to use the LAG function on a CLOB column.
So let's assume we have a table
create table test (
id number primary key,
not_clob varchar2(255),
this_is_clob clob
);
insert into test values (1, 'test1', to_clob('clob1'));
insert into test values (2, 'test2', to_clob('clob2'));
DECLARE
x CLOB := 'C';
BEGIN
FOR i in 1..32767
LOOP
x := x||'C';
END LOOP;
INSERT INTO test(id,not_clob,this_is_clob) values(3,'test3',x);
END;
/
commit;
Now let's do a select using non-clob columns
select id, lag(not_clob) over (order by id) from test;
It works fine as expected, but when I try the same with the CLOB column
select id, lag(this_is_clob) over (order by id) from test;
I get
ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 16
Can you tell me what the solution to this problem is? I couldn't find anything about it.
The documentation says the argument of an analytic function can be any datatype, but it seems an unrestricted CLOB is not supported.
However, there is a workaround:
select id, lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id)
from test;
This is not the whole CLOB but 4k should be good enough in many cases.
I'm still wondering what the proper way to overcome the problem is, though.
Is upgrading to 12c an option? The problem has nothing to do with CLOBs as such; it's the fact that Oracle has a hard 4000-character limit for strings in SQL. In 12c we have the option to use extended data types (provided we can persuade our DBAs to turn it on!). Find out more.
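For reference, enabling extended data types is a DBA-level, one-way change; roughly, for a non-CDB it looks like the following (a sketch only; check the 12c documentation for the exact procedure for your configuration):

-- Database must be in UPGRADE mode; MAX_STRING_SIZE cannot be switched back to STANDARD
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER SYSTEM SET max_string_size = EXTENDED;
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE
STARTUP

Once enabled, VARCHAR2 values in SQL can be up to 32767 bytes, so workarounds such as DBMS_LOB.SUBSTR or TO_CHAR should no longer be capped at 4000 characters.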
Some features may not work properly in SQL when using CLOBs (like DISTINCT, ORDER BY, GROUP BY, etc.). It looks like LAG is also one of them, but I couldn't find that anywhere in the docs.
If the values in your CLOB column are always less than 4000 characters, you may use TO_CHAR:
select id, lag( TO_CHAR(this_is_clob)) over (order by id) from test;
OR
convert it into an equivalent self join on the previous id (which may not be as efficient as LAG):
SELECT a.id,
       b.this_is_clob AS lagging
FROM (SELECT id, LAG(id) OVER (ORDER BY id) AS prev_id FROM test) a
LEFT JOIN test b ON b.id = a.prev_id;
I know this is an old question, but I think I found an answer that eliminates the need to restrict the CLOB length and wanted to share it. Using a CTE and a recursive subquery, we can replicate the LAG functionality with CLOB columns.
First, let's take a look at my "original" query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
)
SELECT tt.order_by_col,
tt.clob_col,
LAG(tt.clob_col) OVER (ORDER BY tt.order_by_col)
FROM test_table tt;
As expected, I get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Now, let's look at the modified query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
),
initial_pull AS
(
SELECT tt.order_by_col,
LAG(tt.order_by_col) OVER (ORDER BY tt.order_by_col) AS PREV_ROW,
tt.clob_col
FROM test_table tt
),
recursive_subquery (order_by_col, prev_row, clob_col, prev_clob_col) AS
(
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, NULL
FROM initial_pull ip
WHERE ip.prev_row IS NULL
UNION ALL
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, rs.clob_col
FROM initial_pull ip
INNER JOIN recursive_subquery rs ON ip.prev_row = rs.order_by_col
)
SELECT rs.order_by_col, rs.clob_col, rs.prev_clob_col
FROM recursive_subquery rs;
So here is how it works.
I create TEST_TABLE; this is really only for the example, as you should already have this table somewhere in your schema.
I create a CTE of the data I want to pull, plus a LAG function on the primary key (or a unique column) of the table, partitioned and ordered the same way I would have in my original query.
I create a recursive subquery using the initial row as the root and descending row by row, joining on the lagged column, and returning both the CLOB column from the current row and the CLOB column from its parent row.
I'm using Oracle 11gR2 and I have a problem when I try to insert a lot of rows into a table.
Here is my table:
CREATE TABLE ref_bic (
id_bic NUMBER(9),
country_id NUMBER(9) NOT NULL,
bank_code VARCHAR(5),
bic_code VARCHAR(20),
bank_name VARCHAR(150) NOT NULL,
CONSTRAINT pk_ref_bic PRIMARY KEY (id_bic)
);
And here is an example of what I'm inserting:
INSERT INTO REF_BIC (country_id, bank_code, bic_code, bank_name) VALUES (123, '123456', '12345', 'SOME BANK NAME');
Note that id_bic is self-generated.
Now here is my problem: I have more than 30k rows like this one to insert into my database the first time I create it, and every time it takes more than 30 minutes to insert all the data.
I have heard that I could use PARALLEL and APPEND to make the insert faster, and that the only requirement is to run
ALTER SESSION FORCE PARALLEL DML;
I tried, but it doesn't seem to work:
INSERT /*+ APPEND PARALLEL(REF_BIC) */ INTO REF_BIC (country_id, bank_code, bic_code, bank_name) VALUES (123, '123456', '12345', 'SOME BANK NAME');
The hint is not interpreted, and even if I take off the /* */ it just gives me an error.
Now, it seems like a parallel insert needs to select from a subquery, like
INSERT /*+ APPEND PARALLEL(REF_BIC) */ INTO REF_BIC SELECT * FROM SOME_TABLE;
But I can't use a subquery, since I am creating my database for the first time and it is totally empty.
So here are my questions :
Does a parallel insert work without a subquery?
And if not, how can I insert my 30k+ rows more quickly?
I have more than 30k rows like this one to insert in my database
Use a text editor/Excel to construct a single query with all of them joined by UNION ALL; I bet it will be much faster (with or without the parallel hint):
insert into REF_BIC (country_id, bank_code, bic_code, bank_name)
select 123, '123456', '12345', 'SOME BANK NAME' from dual union all
select 124, '123457', '12348', 'SECOND BANK NAME' from dual union all
..
..
First of all, use matching types: remove the single quotes around the bank_code and bic_code values.
Just turn your insert statements into an anonymous block. Before the first one, put the line
BEGIN
After the last one, put the lines
commit;
end;
I had a similar situation with about 12K insert records; after sandwiching them within an anonymous block, it finished within seconds.
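For instance, the wrapped script would look roughly like this (the row values are just the placeholders from the question):

BEGIN
  INSERT INTO ref_bic (country_id, bank_code, bic_code, bank_name) VALUES (123, '123456', '12345', 'SOME BANK NAME');
  INSERT INTO ref_bic (country_id, bank_code, bic_code, bank_name) VALUES (124, '123457', '12348', 'SECOND BANK NAME');
  -- ... remaining insert statements ...
  COMMIT;
END;
/

The speed-up mostly comes from sending one block to the server instead of doing one round trip per statement.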
I have an insert statement that takes all its values from local variables or literal constants:
INSERT INTO SampleTestLimits
(AuditNumber
,LimitNumber
,ComponentRow
,ComponentColumn
--etc
)
SELECT 1
,varLimitNumber
,varComponentRow
,varComponentColumn
--etc
;
but the problem is that I get an error "Missing FROM".
I guessed that this is because there is no table associated with the Select, and I tried ending the query with
FROM DUAL;
but that doesn't work (possibly because DUAL is a single-row, single-column pseudo-table, or so I understand).
I can do this quite easily in SQL Server, but how can I achieve what I want in Oracle?
TIA.
If you want to insert data from local variables you should use the VALUES clause:
INSERT INTO SampleTestLimits
(AuditNumber
,LimitNumber
,ComponentRow
,ComponentColumn
--etc
)
VALUES
(1
,varLimitNumber
,varComponentRow
,varComponentColumn
--etc
);
This is so simple it has probably already been asked, but I couldn't find it (if that's the case I'm sorry for asking).
I would like to insert an empty row into a table so I can pick up its ID (primary key, generated by an insert trigger) through an ExecuteScalar. Data is added to it at a later time in my code.
My question is this: is there a specific insert syntax to create an empty record, or must I go with the regular insert syntax such as "INSERT INTO table (list all the columns) values (null for every column)"?
Thanks for the answer.
UPDATE: In Oracle, ExecuteScalar on INSERT only returns 0. The final answer is a combination of what was posted below. First you need to declare a parameter, and pick it up with RETURNING.
INSERT INTO TABLENAME (ID) VALUES (DEFAULT) RETURNING ID INTO :parameterName
Check out this link for more info.
You would not have to specify every single column, but you may not be able to create an "empty" record. Check for NOT NULL constraints on the table. If there are none (not counting the primary key constraint), then you only need to supply one column, like this:
insert into my_table ( some_column )
values ( null );
Do you know about the RETURNING clause? You can return that PK back to your calling application when you do the INSERT.
insert into my_table ( some_column )
values ( 'blah' )
returning my_table_id into <your_variable>;
I would question the approach though. Why create an empty row? That would (or could) mean there are no constraints on that table, a bad thing if you want good, clean data.
Basically, in order to insert a row where the values of all columns are NULL except the primary key column's, you could execute a simple insert statement:
insert into your_table(PK_col_name)
values(1); -- 1 for instance or null
The BEFORE INSERT trigger, which is responsible for populating the primary key column, will override the value in the VALUES clause of the insert statement, leaving you with an empty record except for the PK value.
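For reference, such a BEFORE INSERT trigger usually looks something like this (the table, column, and sequence names here are hypothetical):

CREATE OR REPLACE TRIGGER your_table_bi
BEFORE INSERT ON your_table
FOR EACH ROW
BEGIN
  -- Always take the next sequence value, ignoring anything supplied for the PK
  SELECT your_seq.NEXTVAL INTO :new.pk_col FROM dual;
END;
/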
In Oracle, given a simple data table:
create table data (
id VARCHAR2(255),
key VARCHAR2(255),
value VARCHAR2(511));
suppose I want to "insert or update" a value. I have something like:
merge into data using dual on
(id='someid' and key='testKey')
when matched then
update set value = 'someValue'
when not matched then
insert (id, key, value) values ('someid', 'testKey', 'someValue');
Is there a better way than this? This command seems to have the following drawbacks:
Every literal needs to be typed twice (or added twice via parameter setting)
The "using dual" syntax seems hacky
If this is the best way, is there any way around having to set each parameter twice in JDBC?
I don't consider using dual to be a hack. To get rid of binding/typing twice, I would do something like:
merge into data
using (
select
'someid' id,
'testKey' key,
'someValue' value
from
dual
) val on (
data.id=val.id
and data.key=val.key
)
when matched then
update set data.value = val.value
when not matched then
insert (id, key, value) values (val.id, val.key, val.value);
I would hide the MERGE inside a PL/SQL API and then call that via JDBC:
data_pkg.merge_data ('someid', 'testKey', 'someValue');
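A minimal sketch of what that API could look like (the package and parameter names are just illustrative):

CREATE OR REPLACE PACKAGE data_pkg AS
  PROCEDURE merge_data (p_id    IN data.id%TYPE,
                        p_key   IN data.key%TYPE,
                        p_value IN data.value%TYPE);
END data_pkg;
/

CREATE OR REPLACE PACKAGE BODY data_pkg AS
  PROCEDURE merge_data (p_id    IN data.id%TYPE,
                        p_key   IN data.key%TYPE,
                        p_value IN data.value%TYPE) IS
  BEGIN
    -- Same MERGE as above, but each value is bound exactly once as a parameter
    MERGE INTO data d
    USING (SELECT p_id AS id, p_key AS key, p_value AS value FROM dual) val
       ON (d.id = val.id AND d.key = val.key)
     WHEN MATCHED THEN UPDATE SET d.value = val.value
     WHEN NOT MATCHED THEN INSERT (id, key, value) VALUES (val.id, val.key, val.value);
  END merge_data;
END data_pkg;
/

From JDBC you would then bind each value only once, for example with a CallableStatement executing begin data_pkg.merge_data(?, ?, ?); end;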
As an alternative to MERGE, the API could do:
begin
insert into data (...) values (...);
exception
when dup_val_on_index then
update data
set ...
where ...;
end;
I prefer to try the update before the insert to save having to check for an exception.
update data set ...=... where ...=...;
if sql%notfound then
insert into data (...) values (...);
end if;
Even now that we have the MERGE statement, I still tend to do single-row updates this way; it just seems a more natural syntax. Of course, MERGE really comes into its own when dealing with larger data sets.