Inserting Record Type - oracle

I have a source table, and a history table. They are identical, except the history table has a notes column as the last column.
The table is large (55 columns), and I do not want to list all the columns. We do not want to use a trigger for this, but rather create the history entry in the code itself.
If I simply do an INSERT INTO <history> SELECT * FROM <source> WHERE...... I will get "not enough values".
I'm hoping to do something of this nature: (Note this is just an anonymous block)
DECLARE
  v_old_rec company_table%ROWTYPE;
BEGIN
  SELECT * INTO v_old_rec
  FROM company_table
  WHERE company_id = 32789;

  INSERT INTO company_table_hist
  VALUES v_old_rec || ',MONTHLY UPDATE';
END;
Is anything like this possible, so I do not have to list all 55 columns?
Thank you.

It's quite simple actually:
INSERT INTO company_table_hist
SELECT t.*, 'MONTHLY UPDATE'
FROM company_table t
WHERE company_id = 32789;
Of course, the note text ('MONTHLY UPDATE' here) must be bound somehow rather than hard-coded.
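For instance, a minimal sketch of the bound version inside PL/SQL (assuming the history table's extra column is named notes, as the question describes):
declare
  v_note company_table_hist.notes%type := 'MONTHLY UPDATE';
begin
  insert into company_table_hist
  select t.*, v_note
  from company_table t
  where t.company_id = 32789;
end;
/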


Inserting Row Number based on existing value in the table

I have a requirement that I need to insert row number in a table based on value already present in the table. For example, the max row_nbr record in the current table is something like this:
+----------+----------+------------+---------+
| FST_NAME | LST_NAME | STATE_CODE | ROW_NBR |
+----------+----------+------------+---------+
| John     | Doe      | 13         | 123     |
+----------+----------+------------+---------+
Now, I need to insert more records, with given FST_NAME and LST_NAME values. ROW_NBR needs to be generated while inserting the data into the table, with values auto-incrementing from 123.
I can't use a sequence, as my loading process is not the only process that inserts data into this table. And I can't use a cursor either, because with the high volume of data the TEMP space fills up quickly. I'm inserting data as given below:
insert into final_table
  (fst_name, lst_name, state_code)
select * from staging_table
where state_code = 13;
Any ideas how to implement this?
It sounds like other processes are finding the current maximum row_nbr value and incrementing it as they do single-row inserts in a cursor loop.
You could do something functionally similar, either finding the maximum in advance and incrementing it (if you're already running this in a PL/SQL block):
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, variable_holding_maximum + rownum
from staging_table st
where st.state_code=13;
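For completeness, a sketch of the PL/SQL wrapper that would populate that variable first (the variable name is just the placeholder from the query above):
declare
  variable_holding_maximum final_table.row_nbr%type;
begin
  select nvl(max(row_nbr), 0) into variable_holding_maximum from final_table;

  insert into final_table (fst_name, lst_name, state_code, row_nbr)
  select st.*, variable_holding_maximum + rownum
  from staging_table st
  where st.state_code = 13;
end;
/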
or by querying the table as part of the query, which doesn't need PL/SQL:
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, (select max(row_nbr) from final_table) + rownum
from staging_table st
where st.state_code=13;
db<>fiddle
But neither of these is a good solution, because nothing prevents clashes when different processes and sessions try to insert at the same time; the cursor loop approach has the same problem, unless perhaps it catches unique constraint errors and re-attempts with a new value.
It would be better to use a sequence, which would act as an auto-increment column; but you said you can't change the table structure, and you need to let the other processes continue to work without modification. You can still do that with a sequence-and-trigger approach, having the trigger always set the row_nbr value from the sequence, regardless of whether the insert statement supplied a value.
If you create a sequence that starts just above the current maximum, with something like:
create sequence final_seq start with <current max + 1>
or without manually finding it:
declare
  start_with pls_integer;
begin
  select nvl(max(row_nbr), 0) + 1 into start_with from final_table;
  execute immediate 'create sequence final_seq start with ' || start_with;
end;
/
then your trigger could just be:
create trigger final_trig
before insert on final_table
for each row
begin
  :new.row_nbr := final_seq.nextval;
end;
/
Then your insert ... select statement doesn't need to supply, or even think about, the row_nbr value, so you can leave it as you have it now (though I'd avoid select * even in that construct, and list the staging table columns explicitly); and any existing inserts that do supply row_nbr don't need to be modified, as the value they supply will just be overwritten from the sequence.
db<>fiddle showing inserts with and without row_nbr specified.
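For example, a sketch of that insert with the columns listed explicitly (assuming the staging table uses the same three column names):
insert into final_table (fst_name, lst_name, state_code)
select st.fst_name, st.lst_name, st.state_code
from staging_table st
where st.state_code = 13;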

How do you Save calculated columns from one table to another table using Oracle APEX?

As the question states, I'm trying to save calculated data, the result of a select statement, to another table. In the image, the column with the green outline is a database column, and the columns with the red outline are calculated based on it. I want to save the red-outlined columns to another table where the column names would be the same.
This looks like a classic report. Is it? If so, it is the result of a select statement. As it calculates all the values you're interested in, you'd use it in an insert statement. For example, you could create a button, and a process that fires when that button is pressed. It would then
insert into target_table (emp_id, salary, house_rent, ...)
select emp_id, ... whatever you select in report's query
from ...
However: data changes. What will you do when something that is used to calculate those values changes? Will you delete those rows and insert new ones? Update the existing values? Add yet another row?
If you'd update existing values, consider using merge, as it is capable of inserting rows (in the when not matched clause, in your case) as well as updating rows (in when matched). That would look like this:
merge into target_table t
using (select emp_id, ... whatever you select in report's query
from ...
) x
on (t.emp_id = x.emp_id)
when matched then update set
t.salary = x.salary,
t.house_rent = x.house_rent,
...
when not matched then insert (emp_id, salary, house_rent, ...)
values (x.emp_id, x.salary, x.house_rent, ...);
You can use the INSERT INTO ... SELECT statement; plenty of examples are available on Google:
INSERT INTO another_table (
  emp_id,
  col1,
  col2
)
SELECT emp_id,
       calculated_col1,
       calculated_col2
FROM first_table;

Auditing a table with many columns without Fine Grained Auditing

I have to create a trigger for a table with many columns, and I want to know if there is any possibility of avoiding the column name after :new and :old. Instead of using the column name explicitly, I want to use an element from a collection containing the column names of the target table (the table on which the trigger is set).
Line 25 is the one with the binding error:
DBMS_OUTPUT.PUT_LINE('Updating customer id'||col_name(i)||to_char(:new.col_name(i)));
Below you can see my trigger:
CREATE OR REPLACE TRIGGER TEST_TRG
BEFORE INSERT OR UPDATE ON ITEMS
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
DECLARE
  TYPE col_list IS TABLE OF VARCHAR2(60);
  col_name col_list := col_list();
  total    INTEGER;
  counter  INTEGER := 0;
BEGIN
  SELECT COUNT(*)
  INTO total
  FROM user_tab_columns
  WHERE table_name = 'ITEMS';

  FOR rec IN (SELECT column_name
              FROM user_tab_columns
              WHERE table_name = 'ITEMS')
  LOOP
    col_name.extend;
    counter := counter + 1;
    col_name(counter) := rec.column_name;
    dbms_output.put_line(col_name(counter));
  END LOOP;

  dbms_output.put_line(TO_CHAR(total));

  FOR i IN 1 .. col_name.count
  LOOP
    IF UPDATING(col_name(i)) THEN
      DBMS_OUTPUT.PUT_LINE('Updating customer id'||col_name(i)||to_char(:new.col_name(i)));
    END IF;
  END LOOP;
END;
After digging more I have found that it is not possible to dynamically reference the :new.column_name or :old.column_name values in a trigger. Because of this I will use my code only for INSERT (it does not have an old value :-() and I will write some code in Java to generate the UPDATE statements.
I must refine my previous answer based on what Justin Cave said and on my own findings. We can create a dynamic list of values triggered by INSERTING and UPDATING, based on the referencing clause (old and new). For example, I created two collections of type nested table of varchars. One collection contains all the column names, as strings, that I will use for auditing, and the other contains the values for those columns via the binding reference (e.g. :new.). After the INSERTING predicate I created an index-by collection (an associative array) of strings whose key is taken from the list of column names and whose value is taken from the list of values for those columns referenced by :new. Thanks to the index-by collection you have a fully working dynamic list at your disposal. Good luck :-)
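As a rough sketch of that idea (the column names here are hypothetical, and the :new references still have to be written out once per column, which is the part that can be code-generated), an index-by collection keyed by column name might look like this:
create or replace trigger items_audit_trg
before insert or update on items
for each row
declare
  -- associative array: column name -> stringified :new value
  type val_map is table of varchar2(4000) index by varchar2(60);
  vals val_map;
  col  varchar2(60);
begin
  -- hypothetical columns; this static list is the generated part
  vals('ITEM_ID')   := to_char(:new.item_id);
  vals('ITEM_NAME') := :new.item_name;

  col := vals.first;
  while col is not null loop
    if inserting or updating(col) then
      dbms_output.put_line('Setting ' || col || ' to ' || vals(col));
    end if;
    col := vals.next(col);
  end loop;
end;
/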

Oracle order by varchar2 with numeric values in it

I have an issue where an Oracle DB column (say REF_NO) is VARCHAR2 and carries values similar to the ones below.
If I do an ORDER BY REF_NO I get this:
LET-2-1
LET-2-10
LET-2-11
LET-2-2
LET-2-3
Which makes sense because the values are being treated as characters. I have been asked to change this so that the returned results are ordered like this:
LET-2-1
LET-2-2
LET-2-3
LET-2-10
LET-2-11
I cannot guarantee the format of these values either, so I cannot really see how I can use regex or substring, as it's a completely free text entry for users. The example above just happens to be what the requested data looks like; other data could be completely different.
I cannot see how this is possible, so was hoping for some suggestions.
Additional information
To add to the complexity, here are some more examples from other customers:
Customer 1: OB 12, WE-11, WAN-001
Customer 2: P4, D1, W9
Customer 3: NTT-33A, RLC-33L, ARR-129B
Here are the steps
create table test01 (c varchar2(30));
insert into test01 values ('LET-2-1');
insert into test01 values ('LET-2-10');
insert into test01 values ('LET-2-2');
Query
select * from test01
order by to_number(SUBSTR(C,INSTR(C,'-',1)+1,INSTR(C,'-',1,2)-INSTR(C,'-',1)-1)),
to_number(SUBSTR(C,INSTR(C,'-',-1)+1));
Assuming all the values have the structure text-number-number, in the above query I extract the numbers and convert them with to_number in the order by clause.
select * from TABLE
ORDER BY LENGTH(REF_NO), REF_NO;
First you need to order by length, and then by REF_NO.
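If the format really can vary, one option (a sketch, assuming at most two numeric runs per value, as in the examples, and a hypothetical table name) is to split off the non-numeric prefix and the numeric parts with REGEXP_SUBSTR; to_number of a missing part yields null, which sorts last:
select ref_no
from your_table
order by regexp_substr(ref_no, '^\D*'),
         to_number(regexp_substr(ref_no, '\d+', 1, 1)),
         to_number(regexp_substr(ref_no, '\d+', 1, 2)),
         ref_no;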

Pattern to substitute for MERGE INTO Oracle syntax when not allowed

I have an application that uses the Oracle MERGE INTO... DML statement to update table A to correspond with some of the changes in another table B (table A is a summary of selected parts of table B, along with some other info). In a typical merge operation, 5-6 rows (out of tens of thousands) might be inserted into table B and 2-3 rows updated.
It turns out that the application is to be deployed in an environment that has a security policy on the target tables. The MERGE INTO... statement can't be used with these tables (ORA-28132: Merge into syntax does not support security policies)
So we have to change the MERGE INTO... logic to use regular inserts and updates instead. Is this a problem anyone else has run into? Is there a best-practice pattern for converting the WHEN MATCHED/WHEN NOT MATCHED logic in the merge statement into INSERT and UPDATE statements? The merge is within a stored procedure, so it's fine for the solution to use PL/SQL in addition to the DML if that is required.
Another way to do this (other than MERGE) would be to use two SQL statements: one for the insert and one for the update. The WHEN MATCHED and WHEN NOT MATCHED logic can be handled using joins or an IN clause.
If you decide to take the approach below, it is better to run the update first (since it only runs for the matching records) and then insert the non-matching records. The data sets would be the same either way; it just updates fewer records in this order.
Also, like the MERGE, this update statement updates the name column even if the names in source and target match. If you don't want that, add that condition to the where clause as well.
create table src_table(
id number primary key,
name varchar2(20) not null
);
create table tgt_table(
id number primary key,
name varchar2(20) not null
);
insert into src_table values (1, 'abc');
insert into src_table values (2, 'def');
insert into src_table values (3, 'ghi');
insert into tgt_table values (1, 'abc');
insert into tgt_table values (2,'xyz');
SQL> select * from Src_Table;

        ID NAME
---------- --------------------
         1 abc
         2 def
         3 ghi

SQL> select * from Tgt_Table;

        ID NAME
---------- --------------------
         2 xyz
         1 abc
Update tgt_Table tgt
set Tgt.Name = (select Src.Name
                from Src_Table Src
                where Src.id = Tgt.id)
where exists (select 1
              from Src_Table Src
              where Src.id = Tgt.id); -- guard so unmatched target rows are not set to null
2 rows updated. -- Notice that ID 1 is updated even though the value did not change
select * from Tgt_Table;

        ID NAME
---------- --------------------
         2 def
         1 abc
insert into tgt_Table
select src.*
from Src_Table src,
tgt_Table tgt
where src.id = tgt.id(+)
and tgt.id is null;
1 row created.
SQL> select * from tgt_Table;

        ID NAME
---------- --------------------
         2 def
         1 abc
         3 ghi
commit;
There could be better ways to do this, but this seems simple and SQL-oriented. If the data set is large, then a PL/SQL solution won't be as performant.
There are at least two options I can think of aside from digging into the security policy, which I don't know much about.
Process the records to merge row by row. Attempt the update; if it fails to update then insert, or vice versa, depending on whether you expect most records to need updating or inserting (i.e. optimize for the most common case, to reduce the number of SQL statements fired), e.g.:
begin
  -- illustrative id/name columns, as in the example tables above
  for row in (select id, name from source_table) loop
    update table_to_be_merged t
    set t.name = row.name
    where t.id = row.id;
    if sql%rowcount = 0 then -- no row matched, so need to insert
      insert into table_to_be_merged (id, name)
      values (row.id, row.name);
    end if;
  end loop;
end;
Another option may be to bulk collect the records you want to merge into an array, and then attempt to bulk insert them, catching all the primary key exceptions (I cannot recall the exact syntax for this right now, but you can get a bulk insert to place all the rows that fail to insert into another array and then process them).
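The syntax being reached for there is FORALL ... SAVE EXCEPTIONS. A minimal sketch, reusing the id/name example tables from the first answer (insert everything, then update the rows whose insert failed on the primary key):
declare
  type src_tab is table of src_table%rowtype;
  src_rows src_tab;
  dml_errors exception;
  pragma exception_init(dml_errors, -24381); -- ORA-24381: error(s) in array DML
begin
  select * bulk collect into src_rows from src_table;
  begin
    forall i in 1 .. src_rows.count save exceptions
      insert into tgt_table values src_rows(i);
  exception
    when dml_errors then
      -- each failed index is reported; update those rows instead
      for j in 1 .. sql%bulk_exceptions.count loop
        update tgt_table t
        set t.name = src_rows(sql%bulk_exceptions(j).error_index).name
        where t.id = src_rows(sql%bulk_exceptions(j).error_index).id;
      end loop;
  end;
end;
/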
Logically a merge statement has to check for the presence of each record behind the scenes anyway, and I think it is processed quite similarly to the code I posted above. However, merge will always be more efficient than coding it in PL/SQL, as it is only one SQL call instead of many.
