Could you please tell me how to compare the differences between a table and my select query and insert those results into a separate table? My plan is to create one base table (named RESULT) using a select statement and populate it with the current result set. Then, the next day, I would like to create a procedure that compares the same select with the RESULT table and inserts the differences into another table called DIFFERENCES.
Any ideas?
Thanks!
You can create the RESULT_TABLE using CTAS as follows:
CREATE TABLE RESULT_TABLE
AS SELECT ... -- YOUR QUERY
Then you can use the following procedure which calculates the difference between your query and data from RESULT_TABLE:
CREATE OR REPLACE PROCEDURE FIND_DIFF
AS
BEGIN
INSERT INTO DIFFERENCES
--data present in the query but not in RESULT_TABLE
(SELECT ... -- YOUR QUERY
MINUS
SELECT * FROM RESULT_TABLE)
UNION
--data present in the RESULT_TABLE but not in the query
(SELECT * FROM RESULT_TABLE
MINUS
SELECT ... );-- YOUR QUERY
END;
/
I have used UNION to combine the two MINUS queries (each taking the difference in the opposite direction), so that deleted data also ends up in the DIFFERENCES table. If that is not required, remove the query before or after the UNION according to your requirement.
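As a concrete sketch, suppose your query selects emp_id, emp_name and salary from a hypothetical employees table (the names are placeholders, and DIFFERENCES is assumed to already exist with matching columns):
CREATE OR REPLACE PROCEDURE FIND_DIFF
AS
BEGIN
  INSERT INTO DIFFERENCES (emp_id, emp_name, salary)
  SELECT * FROM (
    -- rows the query returns today but that are missing from RESULT_TABLE
    (SELECT emp_id, emp_name, salary FROM employees
     MINUS
     SELECT emp_id, emp_name, salary FROM RESULT_TABLE)
    UNION
    -- rows in RESULT_TABLE that the query no longer returns
    (SELECT emp_id, emp_name, salary FROM RESULT_TABLE
     MINUS
     SELECT emp_id, emp_name, salary FROM employees)
  );
END;
/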
-- Create a table with results from the query, and ID as primary key
create table result_t as
select id, col_1, col_2, col_3
from <some-query>;
-- Create a table with new rows, deleted rows or updated rows
create table differences_t as
select id
-- Old values
,b.col_1 as old_col_1
,b.col_2 as old_col_2
,b.col_3 as old_col_3
-- New values
,a.col_1 as new_col_1
,a.col_2 as new_col_2
,a.col_3 as new_col_3
-- Execute the query once again
from <some-query> a
-- Outer join to also detect new/deleted rows
full join result_t b using(id)
-- Null aware comparison
where decode(a.col_1, b.col_1, 1, 0) = 0
or decode(a.col_2, b.col_2, 1, 0) = 0
or decode(a.col_3, b.col_3, 1, 0) = 0;
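If this comparison needs to run daily, the same join can be wrapped in a procedure that records the differences and then refreshes the snapshot. A sketch, assuming result_t and differences_t already exist as created above, and with a placeholder view called some_source standing in for your query:
create or replace procedure refresh_differences
as
begin
  -- record today's new, deleted and changed rows
  insert into differences_t (id, old_col_1, old_col_2, old_col_3,
                             new_col_1, new_col_2, new_col_3)
  select coalesce(a.id, b.id),
         b.col_1, b.col_2, b.col_3,
         a.col_1, a.col_2, a.col_3
  from (select id, col_1, col_2, col_3 from some_source) a  -- your query
  full join result_t b on a.id = b.id
  where decode(a.col_1, b.col_1, 1, 0) = 0
     or decode(a.col_2, b.col_2, 1, 0) = 0
     or decode(a.col_3, b.col_3, 1, 0) = 0;

  -- replace the snapshot so tomorrow's run compares against today's result
  delete from result_t;
  insert into result_t (id, col_1, col_2, col_3)
  select id, col_1, col_2, col_3 from some_source;  -- your query
end;
/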
I'm playing around with array support in Oracle and hit a roadblock regarding array access within a SQL query. I'm using the following schema:
create type smallintarray as varray(10) of number(3,0);
create table tbl (
id number(19,0) not null,
the_array smallintarray,
primary key (id)
);
What I would like to do is get the id and the first element of the array, i.e. the one at index 1. In PostgreSQL I could write select id, the_array[1] from tbl t, but I don't see how I could do that with Oracle. I read that array access by index is only possible in PL/SQL, which would be fine if I could return a "decorated cursor" to achieve the same result through JDBC, but I don't know whether that's possible.
DECLARE
c1 SYS_REFCURSOR;
varr smallintarray;
BEGIN
OPEN c1 FOR SELECT t.id, t.THE_ARRAY from tbl t;
-- SELECT t.THE_ARRAY INTO varr FROM table_with_enum_arrays2 t;
-- return a "decorated cursor" with varr(1) at select item position 1
dbms_sql.return_result(c1);
END;
You can do this in plain SQL; it's not pretty, but it does work. You would prefer that Oracle had syntax to hide this from the programmer (and perhaps it does, at least in the most recent versions; I am still stuck at 12.2).
select t.id, q.array_element
from tbl t cross apply
( select column_value as array_element,
rownum as ord
from table(the_array)
) q
where ord = 1
;
EDIT: If the order in which the table operator generates the elements is a concern, you could do something like this (in Oracle 12.1 and higher; in earlier versions the function can't be part of the query itself, but it can be defined on its own):
with
function select_element(arr smallintarray, i integer)
return number
as
begin
return arr(i);
end;
select id, select_element(the_array, 1) as the_array_1
from tbl
/
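On versions before 12.1 (or if you'd rather reuse the function elsewhere), roughly the same thing can be defined as a standalone function; a sketch under that assumption:
create or replace function select_element(arr smallintarray, i integer)
return number
as
begin
  return arr(i);
end;
/

select id, select_element(the_array, 1) as the_array_1
from tbl;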
First of all, please don't do that in production. Use tables instead of storing arrays within a table.
The answer to your question is to use the column as a table source:
SELECT t.id, ta.*
from tbl t,
table(t.THE_ARRAY) ta
order by column_value
-- offset 1 row -- in case you sometimes need to skip a row
fetch first 1 row only;
UPD: as for ordering the array, I can only say that playing with the asc/desc parameters gave me the results I expected: the output was ordered ascending or descending accordingly.
UPD2: I also found a useful link describing performance issues that might occur.
I am using Toad for Oracle 12c. I need to copy a table and its data (40M rows) from one schema to another (prod to test). However, there is a unique key (not the PK for this table) on a column called record_id, which holds data like 3.000*******19E15. About 2M rows have the same number (I believe because the values are very large), even though they are unique in prod. When I try to copy, it violates the unique key on that column. I am using Toad's "export data to another schema" function to copy the data.
When I execute these queries in prod:
select count(*) from table_name
OR
select count(distinct record_id) from table_name
both queries give exactly the same count.
I don't have DBA permission. How do I copy all the data without violating the unique key on the table?
Thanks in advance!
You can use an upsert for a conditional INSERT or UPDATE, or you can write a small procedure for this.
You may also consider using NOT EXISTS, but your data is big and it might not be resource efficient:
insert into prod_tab
select * from other_tab t1 where NOT exists (
select 1 from prod_tab t2 where t1.id = t2.id
);
In Oracle you can use a MERGE query for that.
The following query proceeds as follows for each source row:
if the source record_id does not yet exist in the target table, a new record is inserted
else, the existing record is updated with source values
For the sake of the example, I assumed that there are two other columns in the table: column1 and column2.
MERGE INTO target_table t1
USING (SELECT * FROM source_table) t2
ON (t1.record_id = t2.record_id)
WHEN MATCHED THEN UPDATE SET
t1.column1 = t2.column1,
t1.column2 = t2.column2
WHEN NOT MATCHED THEN INSERT
(record_id, column1, column2) VALUES (t2.record_id, t2.column1, t2.column2)
I have a procedure with a few select statements (from different tables), and the output of these select statements is loaded into a temp table. All the records loaded into this temp table are displayed as the output. Now I have a requirement that my procedure should not use this temp table.
Can you please let me know the options of achieving it?
Assuming that the SELECT queries have the same number of columns and matching datatypes, your best approach could be to use UNION ALL and a ref cursor to display the output. Hope the snippet below helps.
--You can try using nested table types here instead of temp tables, or simply UNION ALL
--Hope the example below helps.
DECLARE
p_lst sys_refcursor;
BEGIN
--Assuming that all the SELECT statements have the same number of columns and matching datatypes
OPEN p_lst FOR
(SELECT 'AV',1 FROM DUAL
UNION ALL
SELECT 'SH',2 FROM DUAL
UNION ALL
SELECT 'TK',3 FROM DUAL
);
END;
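If the output has to come from a procedure rather than an anonymous block, the same ref cursor can be exposed as an OUT parameter. A minimal sketch with placeholder names; the caller then fetches from the cursor (for example with variable/print in SQL*Plus, or as a REF CURSOR output parameter from JDBC):
CREATE OR REPLACE PROCEDURE get_combined_rows (p_result OUT SYS_REFCURSOR)
AS
BEGIN
  OPEN p_result FOR
    SELECT 'AV' AS code, 1 AS val FROM DUAL
    UNION ALL
    SELECT 'SH', 2 FROM DUAL
    UNION ALL
    SELECT 'TK', 3 FROM DUAL;
END;
/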
Assuming MySQL...
You could do something like
UPDATE [table1] AS t1
INNER JOIN [table2] AS t2
ON t1.[col1] = t2.[col1]
SET t1.[col2] = t2.[col2];
Let's say I have a table, and my table has these values (which are varchar):
values
a
o
g
t
And I insert a new value called V:
values
V
a
o
g
t
Is there a way or a query that can tell what the last value inserted into the column was? The desired query would be something like: select * from dual where row_num = count(*) -- just an example, and the result would be V
Rows in a table have no inherent order. rownum is a pseudocolumn that's part of the select so it isn't useful here. There is no way to tell where in the storage a new row will physically be placed, so you can't rely on rowid, for example.
The only way to do this reliably is to have a timestamp column (maybe set by a trigger so you don't have to worry about it). That would let you order the rows by timestamp and find the row with the highest (most recent) timestamp.
You are still restricted by the precision of the timestamp, as I discovered creating a SQL Fiddle demo; without forcing a small gap between the inserts the timestamps were all the same, but then it only seems to support timestamp(3). That probably won't be a significant issue in the real world, unless you're doing bulk inserts, but then the last row inserted is still a bit of an arbitrary concept.
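For completeness, a sketch of the timestamp variant (the table, column and trigger names here are arbitrary):
create table t_ts (data varchar2(10), created_at timestamp);

create trigger bi_t_ts
before insert on t_ts
for each row
begin
  :new.created_at := systimestamp;
end;
/

-- most recently inserted value
select data from (
  select data, row_number() over (order by created_at desc) as rn
  from t_ts
)
where rn = 1;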
As quite correctly pointed out in the comments, if the actual time doesn't need to be known, a numeric field populated by a sequence would be more reliable and performant; there is another SQL Fiddle demo here, and this is the gist:
create table t42(data varchar2(10), id number);
create sequence seq_t42;
create trigger bi_t42
before insert on t42
for each row
begin
:new.id := seq_t42.nextval;
end;
/
insert into t42(data) values ('a');
insert into t42(data) values ('o');
insert into t42(data) values ('g');
insert into t42(data) values ('t');
insert into t42(data) values ('V');
select data from (
select data, row_number() over (order by id desc) as rn
from t42
)
where rn = 1;
I want to create a stored procedure which updates or inserts into a table, depending on whether the current row already exists in the table.
This is what I have come up with so far:
PROCEDURE SP_UPDATE_EMPLOYEE
(
SSN VARCHAR2,
NAME VARCHAR2
)
AS
BEGIN
IF EXISTS(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN)
--what ? just carry on to else
ELSE
INSERT INTO pb_mifid (ssn, NAME)
VALUES (SSN, NAME);
END;
Is this the way to achieve this?
This is quite a common pattern. Depending on what version of Oracle you are running, you could use the merge statement (I am not sure what version it appeared in).
create table test_merge (id integer, c2 varchar2(255));
create unique index test_merge_idx1 on test_merge(id);
merge into test_merge t
using (select 1 id, 'foobar' c2 from dual) s
on (t.id = s.id)
when matched then update set c2 = s.c2
when not matched then insert (id, c2)
values (s.id, s.c2);
Merge is intended to merge data from a source table, but you can fake it for individual rows by selecting the data from dual.
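Applied to the procedure in the question, it might look like the sketch below. I'm assuming the lookup and the insert are meant to target the same table (the question checks tblEMPLOYEE but inserts into pb_mifid), and the parameters are renamed p_ssn/p_name so they don't collide with the column names:
CREATE OR REPLACE PROCEDURE sp_update_employee (
  p_ssn  VARCHAR2,
  p_name VARCHAR2
)
AS
BEGIN
  -- insert the row if the ssn is new, otherwise refresh the name
  MERGE INTO pb_mifid t
  USING (SELECT p_ssn AS ssn, p_name AS name FROM dual) s
  ON (t.ssn = s.ssn)
  WHEN MATCHED THEN UPDATE SET t.name = s.name
  WHEN NOT MATCHED THEN INSERT (ssn, name)
  VALUES (s.ssn, s.name);
END;
/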
If you cannot use merge, then optimize for the most common case. Will the proc usually not find a record and need to insert it, or will it usually need to update an existing record?
If inserting will be most common, code such as the following is probably best:
begin
  insert into t (columns)
  values (...);
exception
  when dup_val_on_index then
    update t set cols = values;
end;
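As a concrete sketch of this insert-first pattern (the accounts table and its columns are hypothetical):
CREATE OR REPLACE PROCEDURE upsert_account (
  p_id      NUMBER,
  p_balance NUMBER
)
AS
BEGIN
  INSERT INTO accounts (id, balance)
  VALUES (p_id, p_balance);
EXCEPTION
  WHEN dup_val_on_index THEN
    UPDATE accounts
    SET balance = p_balance
    WHERE id = p_id;
END;
/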
If update is the most common, then turn the procedure around:
begin
  update t set cols = values;
  if sql%rowcount = 0 then
    -- nothing was updated, so the record doesn't exist; insert it.
    insert into t (columns)
    values (...);
  end if;
end;
You should not issue a select to check for the row and make the decision based on the result - that means you will always need to run two SQL statements, when you can get away with one most of the time (or always if you use merge). The less SQL statements you use, the better your code will perform.
BEGIN
INSERT INTO pb_mifid (ssn, NAME)
select SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN);
END;
UPDATE:
Attention: you should name your parameter p_ssn (to distinguish it from the column SSN), and the query becomes:
INSERT INTO pb_mifid (ssn, NAME)
select P_SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = P_SSN);
because this always exists:
SELECT * FROM tblEMPLOYEE a where a.ssn = SSN
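Putting both points together, the full procedure might look like this sketch (p_name is renamed for the same reason as p_ssn):
CREATE OR REPLACE PROCEDURE sp_update_employee (
  p_ssn  VARCHAR2,
  p_name VARCHAR2
)
AS
BEGIN
  INSERT INTO pb_mifid (ssn, name)
  SELECT p_ssn, p_name
  FROM dual
  WHERE NOT EXISTS (SELECT 1 FROM tblEMPLOYEE a WHERE a.ssn = p_ssn);
END;
/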