In Snowflake I am trying to insert updated records into a table. Then I want to identify the records that were just inserted as the most recent records and save that as the final table output in a new column called ACTIVE, which will be either true or false. I am having an issue incorporating some sort of updated-table step into my current query. I need everything to be contained in the same query rather than broken up into separate parts.
I have my table as follows
CREATE TABLE IF NOT EXISTS MY_TABLE
(
LINK_ID BINARY NOT NULL,
LOAD TIMESTAMP NOT NULL,
SOURCE STRING NOT NULL,
SOURCE_DATE TIMESTAMP NOT NULL,
ORDER BIGINT NOT NULL,
ID BINARY NOT NULL,
ATTRIBUTE_ID BINARY NOT NULL
);
I have records being inserted in this way:
INSERT ALL
WHEN HAS_DATA AND ID_SEQ_NUM > 1 AND (SELECT COUNT(1) FROM MY_TABLE WHERE ID = KEY) = 0 THEN
INTO MY_TABLE VALUES (
LINK_KEY,
TIME,
DATASET_NAME,
DATASET_DATE,
ORDER_NUMBER,
O_KEY,
OA_KEY
)
SELECT *
FROM TEST_TABLE;
I would like my final table from this to be the output as
SELECT *, ORDER = MAX(ORDER) OVER (PARTITION BY ID) AS ACTIVE
FROM MY_TABLE;
This is so I can identify the most recent record per ID group as ACTIVE/TRUE and the previous records within that ID group as INACTIVE/FALSE
I tried to use an insert overwrite method like this
INSERT ALL
WHEN HAS_DATA AND ID_SEQ_NUM > 1 AND (SELECT COUNT(1) FROM MY_TABLE WHERE ID = KEY) = 0 THEN
INTO MY_TABLE VALUES (
LINK_KEY,
TIME,
DATASET_NAME,
DATASET_DATE,
ORDER_NUMBER,
O_KEY,
OA_KEY
)
INSERT OVERWRITE INTO MY_TABLE
SELECT *, RSRC_OFFSET = MAX(RSRC_OFFSET) OVER (PARTITION BY ID) AS ACTIVE
FROM L_OPTION_OPTION_ALLOCATION_TEST
SELECT *
FROM MY_TABLE;
However, it seems INSERT OVERWRITE doesn't work this way (and I am not sure whether I can just add a new column to the table like this?). Is there a way I can incorporate it into this query, or a different way to update the table with this new ACTIVE column within the query itself?
Also I am using INSERT ALL here because I actually have multiple different tables I am inserting into at once, but this is the current table that I am trying to modify.
You can use the overwrite option with conditional multi-table inserts.
Starting with your current statement:
INSERT ALL
WHEN HAS_DATA AND ID_SEQ_NUM > 1 AND (SELECT COUNT(1) FROM MY_TABLE WHERE ID = KEY) = 0 THEN
INTO MY_TABLE VALUES (
LINK_KEY,
TIME,
DATASET_NAME,
DATASET_DATE,
ORDER_NUMBER,
O_KEY,
OA_KEY
)
SELECT *
FROM TEST_TABLE;
Add the overwrite option immediately after the insert command:
INSERT OVERWRITE ALL
WHEN HAS_DATA AND ID_SEQ_NUM > 1 AND (SELECT COUNT(1) FROM MY_TABLE WHERE ID = KEY) = 0 THEN
INTO MY_TABLE VALUES (
LINK_KEY,
TIME,
DATASET_NAME,
DATASET_DATE,
ORDER_NUMBER,
O_KEY,
OA_KEY
)
SELECT *
FROM TEST_TABLE;
Note that this will truncate and insert ALL tables in the multi-table insert. There is not a way to be selective about which tables get truncated and inserted and which don't.
https://docs.snowflake.com/en/sql-reference/sql/insert-multi-table.html#optional-parameters
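If the ACTIVE flag only needs to reflect the latest row per ID, it does not have to be stored at load time at all. A minimal sketch (not part of the answer above; the view name is made up, and the ORDER column is quoted because it is a reserved word) that computes the flag at query time, leaving the multi-table insert untouched:
CREATE OR REPLACE VIEW MY_TABLE_ACTIVE AS
-- TRUE for the most recent record per ID, FALSE for every older one
SELECT t.*,
       "ORDER" = MAX("ORDER") OVER (PARTITION BY ID) AS ACTIVE
FROM MY_TABLE t;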
I have the following MERGE statement. Is there a way to convert this into an UPDATE statement without using MERGE?
MERGE INTO tab1
USING (SELECT tab1.col1, tab2.col2
FROM tab1, tab2
WHERE tab1.col1 = tab2.col1) tab3
ON (tab1.col1 = tab3.col1)
WHEN MATCHED THEN UPDATE SET col2 = tab3.col2
What you are asking about is called "update through join", and contrary to a widely held belief, it is possible in Oracle. But there is a catch.
Obviously, the update - no matter how you attempt to perform it - is not well defined unless column col1 is unique in table tab2. That column is used for lookup in the update process; if its values are not unique, the update will be ambiguous. I ignore here idiotic retorts such as "uniqueness is needed only for those values also found in tab1.col1", or "there is no ambiguity as long as all values in tab2.col2 are equal when the corresponding values in tab2.col1 are equal".
The "catch" is this. The uniqueness of tab2.col1 may be a matter of data (you know it when you inspect the data), or a matter of metadata (there is a unique constraint, or a unique index, or a PK constraint, etc., on tab2.col1, which the parser can inspect without ever looking at the actual data).
merge will work even when uniqueness is only known by inspecting the data. It will still throw an error if uniqueness is violated - but that will be a runtime error (only after the data in tab2 is accessed from disk). By contrast, updating through a join requires the same uniqueness to be known ahead of time, through the metadata (or in other ways: for example if the second rowset - not a table but the table-like result of a query - is the result of an aggregation grouping on the join column; then the uniqueness is guaranteed by the definition of "aggregation").
Here is a brief example to show the difference.
Test data:
create table tab1 (col1 number, col2 number);
insert into tab1 (col1, col2) values (1, 3);
create table tab2 (col1 number, col2 number);
insert into tab2 (col1, col2) values (1, 6);
commit;
merge statement (with check at the end):
merge into tab1
using(
select tab1.col1,
tab2.col2
from tab1,tab2
where tab1.col1 = tab2.col1) tab3
on(tab1.col1 = tab3.col1)
when matched then
update
set col2 = tab3.col2;
1 row merged.
select * from tab1;
COL1 COL2
---------- ----------
1 6
Now let's restore table tab1 to its original data for the next test(s):
rollback;
select * from tab1;
COL1 COL2
---------- ----------
1 3
Update through join - with no uniqueness guaranteed in the metadata (will result in error):
update
( select t1.col2 as t1_c2, t2.col2 as t2_c2
from tab1 t1 join tab2 t2 on t1.col1 = t2.col1
)
set t1_c2 = t2_c2;
Error report -
SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table
01779. 00000 - "cannot modify a column which maps to a non key-preserved table"
*Cause: An attempt was made to insert or update columns of a join view which
map to a non-key-preserved table.
*Action: Modify the underlying base tables directly.
Now let's add a unique constraint on the lookup column:
alter table tab2 modify (col1 unique);
Table TAB2 altered.
and try the update again (with the same update statement), plus verification:
update
( select t1.col2 as t1_c2, t2.col2 as t2_c2
from tab1 t1 join tab2 t2 on t1.col1 = t2.col1
)
set t1_c2 = t2_c2;
1 row updated.
select * from tab1;
COL1 COL2
---------- ----------
1 6
So - you can do it, if you use the correct syntax (as I have shown here) AND - very important - you have a unique or PK constraint or a unique index on column tab2.col1.
I have a table named T1 with a few columns. One of these columns is INSERT_DATE, with datatype TIMESTAMP(3). I want to update the INSERT_DATE column with the value 'ABC' in all rows.
CREATE TABLE T1(NAME VARCHAR2(5), INSERT_DATE TIMESTAMP(3));
INSERT INTO T1 VALUES('NAVIN',CURRENT_TIMESTAMP);
INSERT INTO T1 VALUES('KAVIN',CURRENT_TIMESTAMP);
INSERT INTO T1 VALUES('TAVIN',CURRENT_TIMESTAMP);
I queried it in Oracle like this:
UPDATE T1
SET INSERT_DATE = 'ABC';
COMMIT;
It is not getting updated. Is there anything I need to add to the code?
I want to write a query which finds the difference between two tables and writes updated or new data into a third table. My two tables have identical column names. The third table, which captures changes, has an extra column called COMMENT. I would like the comment to indicate whether the row is new or updated.
**TABLE1 (BACKUP)**
KEY,FIRST_NAME,LAST_NAME,CITY
1,RAM,KUMAR,INDIA
2,TOM,MOODY,ENGLAND
3,MOHAMMAD,HAFEEZ,PAKISTAN
4,MONIKA,SAM,USA
5,MIKE,PALEDINO,USA
**TABLE2 (CURRENT)**
KEY,FIRST_NAME,LAST_NAME,CITY
1,RAM,KUMAR,USA
2,TOM,MOODY,ENGLAND
3,MOHAMMAD,HAFEEZ,PAKISTAN
4,MONIKA,SAM,INDIA
5,MIKE,PALEDINO,USA
6,MAHELA,JAYA,SL
**TABLE3 (DIFFERENCE FROM TABLE2 TO TABLE1)**
KEY,FIRST_NAME,LAST_NAME,CITY,COMMENT
1,RAM,KUMAR,USA,UPDATE
4,MONIKA,SAM,INDIA,UPDATE
6,MAHELA,JAYA,SL,INSERT
table scripts
DROP TABLE TABLE1;
DROP TABLE TABLE2;
DROP TABLE TABLE3;
CREATE TABLE TABLE1
(
KEY NUMBER,
FIRST_NAME VARCHAR2(100),
LAST_NAME VARCHAR2(100),
CITY VARCHAR2(50)
);
/
CREATE TABLE TABLE2
(
KEY NUMBER,
FIRST_NAME VARCHAR2(100),
LAST_NAME VARCHAR2(100),
CITY VARCHAR2(50)
);
/
CREATE TABLE TABLE3
(
KEY NUMBER,
FIRST_NAME VARCHAR2(100),
LAST_NAME VARCHAR2(100),
CITY VARCHAR2(50),
COMMENTS VARCHAR2(200)
);
/
INSERT ALL
INTO TABLE1
VALUES(1,'RAM','KUMAR','INDIA')
INTO TABLE1 VALUES(2,'TOM','MOODY','ENGLAND')
INTO TABLE1 VALUES(3,'MOHAMMAD','HAFEEZ','PAKISTAN')
INTO TABLE1 VALUES(4,'MONIKA','SAM','USA')
INTO TABLE1 VALUES(5,'MIKE','PALEDINO','USA')
SELECT 1 FROM DUAL;
/
INSERT ALL
INTO TABLE2
VALUES(1,'RAM','KUMAR','USA')
INTO TABLE2 VALUES(2,'TOM','MOODY','ENGLAND')
INTO TABLE2 VALUES(3,'MOHAMMAD','HAFEEZ','PAKISTAN')
INTO TABLE2 VALUES(4,'MONIKA','SAM','INDIA')
INTO TABLE2 VALUES(5,'MIKE','PALEDINO','USA')
INTO TABLE2 VALUES(6,'MAHELA','JAYA','SL')
SELECT 1 FROM DUAL;
I was using the MERGE statement to accomplish the same, but I have hit a roadblock: it's throwing an error "SQL Error: ORA-00905: missing keyword
00905. 00000 - "missing keyword"". I don't understand where the error is. Please help.
INSERT INTO TABLE3
SELECT KEY,FIRST_NAME,LAST_NAME,CITY,NULL AS COMMENTS FROM TABLE2
MINUS
SELECT KEY,FIRST_NAME,LAST_NAME,CITY,NULL AS COMMENTS FROM TABLE1
;
MERGE INTO TABLE3 A
USING TABLE1 B
ON (A.KEY=B.KEY)
WHEN MATCHED THEN
UPDATE SET A.COMMENTS='UPDATED'
WHEN NOT MATCHED THEN
UPDATE SET A.COMMENTS='INSERTED';
There is no such WHEN NOT MATCHED THEN UPDATE clause; you should use WHEN NOT MATCHED THEN INSERT. Refer to MERGE for details.
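Given the workflow above (the changed rows are already in TABLE3 from the MINUS insert, and only the comment needs to be filled in), a plain UPDATE may also be simpler than MERGE here. A sketch only, using the comment values from the desired TABLE3 output:
-- label the rows just inserted by the MINUS query (their COMMENTS is still NULL)
UPDATE TABLE3 A
SET A.COMMENTS = CASE
                   WHEN EXISTS (SELECT 1 FROM TABLE1 B WHERE B.KEY = A.KEY)
                   THEN 'UPDATE'  -- the key already existed in the backup table
                   ELSE 'INSERT'  -- the key is new in the current table
                 END
WHERE A.COMMENTS IS NULL;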
A few assumptions made about the data:
An INSERT event is a record in table2 (current data) whose key has no match in the original backup table, table1.
An UPDATE event is a record whose KEY exists in both table1 and table2 but whose field values differ.
Records which did not change between tables are not recorded in table3.
Example Query: Check for Updates
SELECT UPD_QUERY.NEW_CITY, 'UPDATED' as COMMENTS
FROM (SELECT CASE WHEN REPLACE(CURR.CITY, BKUP.CITY,'') IS NOT NULL THEN CURR.CITY
ELSE NULL END as NEW_CITY
FROM table1 BKUP, table2 CURR
WHERE BKUP.KEY = CURR.KEY) UPD_QUERY
WHERE UPD_QUERY.NEW_CITY is NOT NULL;
You can repeat this comparison method for the other fields:
SELECT UPD_QUERY.*
FROM (SELECT CURR.KEY,
CASE WHEN REPLACE(CURR.FIRST_NAME, BKUP.FIRST_NAME,'') IS NOT NULL
THEN CURR.FIRST_NAME
ELSE NULL END as FIRST_NAME,
CASE WHEN REPLACE(CURR.LAST_NAME, BKUP.LAST_NAME,'') IS NOT NULL
THEN CURR.LAST_NAME
ELSE NULL END as LAST_NAME,
CASE WHEN REPLACE(CURR.CITY, BKUP.CITY,'') IS NOT NULL
THEN CURR.CITY
ELSE NULL END as CITY
FROM table1 BKUP, table2 CURR
WHERE BKUP.KEY = CURR.KEY) UPD_QUERY
WHERE COALESCE(UPD_QUERY.FIRST_NAME, UPD_QUERY.LAST_NAME, UPD_QUERY.CITY)
is NOT NULL;
NOTE: This could get unwieldy very quickly if many columns are compared, since the target table design (table3) requires recording not only that a change happened, but also the field and its new value.
Example Query: Look for Newly Added Records
SELECT CURR.*, 'INSERTED' as COMMENTS
FROM table2 CURR, table1 BKUP
WHERE CURR.KEY = BKUP.KEY(+)
AND BKUP.KEY is NULL;
Basically MERGE forces the operation: MATCHED=UPDATE (or DELETE), NOT MATCHED = INSERT. It's in the docs.
You can do what you want, but you need two insert statements with different set operators.
For UPDATED:
Insert into table3
table1 INTERSECT table2
For INSERTED:
Insert into table3
table2 MINUS table1
I need to update a query so that it checks that a duplicate entry does not exist before insertion. In MySQL I can just use INSERT IGNORE so that if a duplicate record is found it just skips the insert, but I can't seem to find an equivalent option for Oracle. Any suggestions?
If you're on 11g you can use the hint IGNORE_ROW_ON_DUPKEY_INDEX:
SQL> create table my_table(a number, constraint my_table_pk primary key (a));
Table created.
SQL> insert /*+ ignore_row_on_dupkey_index(my_table, my_table_pk) */
2 into my_table
3 select 1 from dual
4 union all
5 select 1 from dual;
1 row created.
Check out the MERGE statement. This should do what you want - it's the WHEN NOT MATCHED clause that will do this.
Due to Oracle's lack of support for a true VALUES() clause, the syntax for a single record with fixed values is pretty clumsy though:
MERGE INTO your_table yt
USING (
SELECT 42 as the_pk_value,
'some_value' as some_column
FROM dual
) t on (yt.pk = t.the_pk_value)
WHEN NOT MATCHED THEN
INSERT (pk, the_column)
VALUES (t.the_pk_value, t.some_column);
A different approach (if you are e.g. doing bulk loading from a different table) is to use the "Error logging" facility of Oracle. The statement would look like this:
INSERT INTO your_table (col1, col2, col3)
SELECT c1, c2, c3
FROM staging_table
LOG ERRORS INTO errlog ('some comment') REJECT LIMIT UNLIMITED;
Afterwards, all rows that would have thrown an error are available in the errlog table. You need to create that errlog table (or whatever name you choose) manually, using DBMS_ERRLOG.CREATE_ERROR_LOG, before running the insert.
See the manual for details
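A minimal sketch of that setup call, with table and log names simply following the example above:
BEGIN
  -- creates the ERRLOG table that the LOG ERRORS clause above writes to
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name     => 'YOUR_TABLE',
                               err_log_table_name => 'ERRLOG');
END;
/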
I don't think there is, but to save time you can attempt the insert and ignore the inevitable error:
begin
insert into table_a( col1, col2, col3 )
values ( 1, 2, 3 );
exception when dup_val_on_index then
null;
end;
/
This will only ignore exceptions raised specifically by duplicate primary key or unique key constraints; everything else will be raised as normal.
If you don't want to do this then you have to select from the table first, which isn't really that efficient.
Another variant
Insert into my_table (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, group g
where NOT EXISTS (select 1
from my_table a
where a.student_id = p.studentid
and a.group_id = g.groupid)
or you could do
Insert into my_table (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, group g
MINUS
select student_id, group_id
from my_table
A simple solution
insert into t1
select * from t2
where not exists
(select 1 from t1 where t1.id = t2.id)
This one isn't mine, but it came in really handy when using SQL*Loader:
create a view that points to your table:
CREATE OR REPLACE VIEW test_view
AS SELECT * FROM test_tab
create the trigger:
CREATE OR REPLACE TRIGGER test_trig
INSTEAD OF INSERT ON test_view
FOR EACH ROW
BEGIN
INSERT INTO test_tab VALUES
(:NEW.id, :NEW.name);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN NULL;
END test_trig;
and in the ctl file, insert into the view instead:
OPTIONS(ERRORS=0)
LOAD DATA
INFILE 'file_with_duplicates.csv'
INTO TABLE test_view
FIELDS TERMINATED BY ','
(id, field1)
How about simply adding a unique index on whatever fields you need to check for dupes? It saves a read check.
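For example (a sketch; the table and column names are just placeholders), with such an index in place a duplicate insert simply fails, and the hint or exception-handling approaches above can then skip it:
-- rejects any second row with the same (student_id, group_id) pair
CREATE UNIQUE INDEX my_table_uk ON my_table (student_id, group_id);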
yet another "where not exists"-variant using dual...
insert into t1(id, unique_name)
select t1_seq.nextval, 'Franz-Xaver' from dual
where not exists (select 1 from t1 where unique_name = 'Franz-Xaver');