I have a procedure that inserts a city name (and also creates the next id). How do I prevent inserting a city name into the table if that name already exists? Thanks!
Here is my code:
create or replace PROCEDURE PlaceName(
    town IN City.Name%TYPE)
AS
    id_C City.Id_City%TYPE;
BEGIN
    SELECT NVL(MAX(c.Id_City) + 1, 1) INTO id_C
    FROM City c;

    INSERT INTO City
    VALUES (id_C, town);
END;
I agree with Ben that there should be a UNIQUE constraint on the table (assume that's a valid constraint), but this can be done much simpler with a MERGE statement:
MERGE INTO city c
USING ( SELECT 1 FROM dual )
ON ( c.name = town )
WHEN NOT MATCHED THEN INSERT ( id, name )
VALUES ( my_sequence.NEXTVAL, town );
The USING clause is not really needed here, but it's mandatory for a merge statement.
No, don't insert only if it doesn't exist. This requires two operations. You have to check whether it exists and then you have to insert the record.
The correct way to do this is to create a unique constraint on your table. You can do this inline as stated in the documentation, or if your table already exists you can ALTER it to add the constraint:
ALTER TABLE table_name
add CONSTRAINT constraint_name UNIQUE (name);
You then catch the exception raised when inserting a city that already exists and then do whatever you wish with the information gained.
You're also incrementing your ID incorrectly. You should be using a SEQUENCE, which saves you another SELECT.
CREATE SEQUENCE city_seq
START WITH <current max ID>
INCREMENT BY 1;
Your procedure then becomes:
create or replace procedure PlaceName (
town in city.name%type ) is
begin
insert into city
values(city_seq.nextval, town);
-- Catch the raised exception if the city already exists.
exception when dup_val_on_index then
<do something>;
end;
Look up the Oracle MERGE command.
insert into City
select seq.nextval, town
from dual where not exists (select 1 from City where name = town);
I strongly recommend using sequences for artificial keys.
"select nvl(max(...)) + 1" is very bad: two concurrent sessions can read the same maximum at the same time and then try to insert the same key.
If you need an explanation, ask me :)
I have two tables in the database; the first one is Person and the second is Pilot, as follows:
Person Table:
CREATE TABLE person(
person_id NUMBER PRIMARY KEY,
last_name VARCHAR2(30) NOT NULL,
first_name VARCHAR2(30) NOT NULL,
hire_date VARCHAR2(30) NOT NULL,
job_type CHAR NOT NULL,
job_status CHAR NOT NULL
);
/
INSERT INTO person VALUES (1000, 'Smith', 'Ryan', '04-MAY-90','F', 'I');
INSERT INTO person VALUES (1170, 'Brown', 'Dean', '01-DEC-92','P', 'A');
INSERT INTO person VALUES (2010, 'Fisher', 'Jane', '12-FEB-95','F', 'I');
INSERT INTO person VALUES (2080, 'Brewster', 'Andre', '28-JUL-98', 'F', 'A');
INSERT INTO person VALUES (3190, 'Clark', 'Dan', '04-APR-01','P', 'A');
INSERT INTO person VALUES (3500, 'Jackson', 'Tyler', '01-NOV-05', 'F', 'A');
INSERT INTO person VALUES (4000, 'Miller', 'Mary', '11-JAN-08', 'F', 'A');
INSERT INTO person VALUES (4100, 'Jackson', 'Peter', '08-AUG-11', 'P','I');
INSERT INTO person VALUES (4200, 'Smith', 'Ryan', '08-DEC-12', 'F','A');
COMMIT;
/
Pilot Table:
CREATE TABLE pilot(
person_id NUMBER PRIMARY KEY,
pilot_type VARCHAR2(100) NOT NULL,
CONSTRAINT fk_person_pilot FOREIGN KEY (person_id)
REFERENCES person(person_id)
);
/
INSERT INTO pilot VALUES (1170, 'Commercial pilot');
INSERT INTO pilot VALUES (2010, 'Airline transport pilot');
INSERT INTO pilot VALUES (3500, 'Airline transport pilot');
COMMIT;
/
I'm asked to write a PL/SQL block of code that accepts a last name from the user and returns the result as follows:
1) if the last name is not in the table, it returns all the rows in the table.
2) if the last name is in the table, it shows all of the employee's information.
So far I'm doing well with the code, but I got stuck in the case where there are two employees with the same last name. Here is the cursor that I wrote:
cursor person_info is
select last_name, first_name, hire_date, job_type, job_status, nvl(pilot_type, '-----------')
from person
full outer join pilot
on person.person_id = pilot.person_id
where upper(last_name) = upper(v_last_name)
group by last_name, first_name, hire_date, job_type, job_status, pilot_type
order by last_name, first_name, hire_date asc;
Logically, there are three cases to be covered:
The first case, when the entered last name is not in the table: I return all the rows in the table, and that's done.
The second case, when there is only one employee with the entered last name, is done as well. The last case is when more than one employee has the same last name, for example 'Jackson' or 'Smith'; in this case my program crashes and gives me the error that my SELECT INTO statement returns more than one row.
select person_id
into v_n
from person
where upper(last_name) = upper(v_last_name);
if v_n = 1 then
open person_info;
fetch person_info into v_last_name, v_first_name, v_hire_date, v_job_type, v_job_status, v_pilot_type;
Can someone guide me on how to fetch the data correctly? I'm not allowed to create any temporary tables or views.
I'm sorry for making the question longer than it should be, but I was trying to be as clear as possible in explaining the problem.
Thank you in advance.
If the error is
"ORA-01422 exact fetch returns more than requested number of rows" then I think your answer is here https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:981494932508
If you EXPECT the query to return more than one row, you would code:
for x in ( select * from t where ... )
loop
-- process the X record here
end loop;
Your immediate issue is that you're selecting the matching person_id into a variable and then seeing if that specific ID is 1. You don't have an actual ID 1 anyway, so that check would never match; but it is the query matching multiple rows that causes the error, as you can't put two matching IDs into a single scalar variable.
The way you've structured it looks like you are trying to count how many matching rows there are, rather than looking for a specific ID:
select count(person_id)
into v_n
from person
where upper(last_name) = upper(v_last_name);
if v_n = 1 then
....
When you do have multiple matches, you will need to use the same mechanism to return all of them as you do when there are no matches and you return all employees. You may find the logic belongs in the cursor query rather than in PL/SQL. It depends on the details of the assignment, though, and how it expects you to return the data in both (or all three) scenarios.
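For illustration only, here is a minimal sketch of putting that logic in the cursor query. It assumes the output just goes to DBMS_OUTPUT, uses a LEFT JOIN from person to pilot instead of the full outer join, and treats v_last_name, v_count and the substitution variable as placeholders for however you actually capture the input:
declare
    -- '&enter_last_name' is just a SQL*Plus substitution stand-in for the user input
    v_last_name person.last_name%type := '&enter_last_name';
    v_count     pls_integer;
begin
    select count(*) into v_count
    from   person
    where  upper(last_name) = upper(v_last_name);

    -- One cursor covers all three cases: when nothing matched, the name filter
    -- is switched off and every row comes back; otherwise only the matches do.
    for r in ( select p.last_name, p.first_name, p.hire_date,
                      p.job_type, p.job_status,
                      nvl(pl.pilot_type, '-----------') as pilot_type
               from   person p
               left join pilot pl on pl.person_id = p.person_id
               where  v_count = 0
                  or  upper(p.last_name) = upper(v_last_name)
               order  by p.last_name, p.first_name, p.hire_date ) loop
        dbms_output.put_line(r.last_name || ', ' || r.first_name || ' - ' || r.pilot_type);
    end loop;
end;
/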
It's also possible you just aren't expected to hit this problem - it isn't clear if the assignment is finding all employees, or only those that are pilots. The issue still exists in general, but with the data you show there aren't any duplicate pilot last names. If you haven't learned about this kind of error yet perhaps you're getting a bit ahead of what your tutor expects.
I am using a PL/SQL procedure to insert values from XML into relational tables. The XML file resides in an XMLTYPE column.
The columns of the table containing the XML (OFFLINE_XML) are:
ID, XML_FILE, STATUS
There are two tables into which I want to insert the values, i.e. DEPARTMENT and SECTIONS.
The structure of DEPARTMENT is as follows:
ID, NAME
The structure of the SECTIONS table is:
ID, NAME, DEPARTMENT_ID
Now there is a third table (LIST_1) into which I want to insert the values which already exist in both of the above-mentioned tables.
The structure of LIST_1 is:
ID, DEPARTMENT_ID, DEPARTMENT_NAME, SECTIONS_ID, SECTIONS_NAME
The XML format is as follows:
<ROWSET>
<DEPARTMENT>
<DEPARTMENT_ID>DEP22681352268280797</DEPARTMENT_ID>
<DEPARTMENT_NAME>myDEPARTMENT</DEPARTMENT_NAME>
<SECTIONS_ID>6390135666643567</SECTIONS_ID>
<SECTIONS_NAME>mySection</SECTIONS_NAME>
</DEPARTMENT>
<DEPARTMENT>
<DEPARTMENT_ID>DEP255555555550797</DEPARTMENT_ID>
<DEPARTMENT_NAME>myDEPARTMENT2</DEPARTMENT_NAME>
<SECTIONS_ID>63901667779243567</SECTIONS_ID>
<SECTIONS_NAME>mySection2</SECTIONS_NAME>
</DEPARTMENT>
</ROWSET>
DECLARE
BEGIN
    insert all
      into department (id, name)
      values (unit_id, unit_name)
      into sections (id, name, department_id)
      values (sect_id, sect_name, department_id)
    select department.id as department_id
         , department.name as department_name
         , sect.id as sect_id
         , sect.name as sect_name
    from OFFLINE_XML
       , xmltable('/ROWSET/DEPARTMENT'
             passing OFFLINE_XML.xml_file
             columns
                 "ID" varchar2(20) path 'UNIT_ID'
               , "NAME" varchar2(20) path 'UNIT_NAME'
         ) department
       , xmltable('/ROWSET/DEPARTMENT'
             passing OFFLINE_XML.xml_file
             columns
                 "ID" varchar2(20) path 'SECTIONS_ID'
               , "NAME" varchar2(20) path 'SECTIONS_NAME'
         ) sect
    where status = 3;
EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
        dbms_output.put_line('Duplicate=' || department.id);
        --insert into LIST_1 values(ID, DEPARTMENT_ID, SECTIONS_ID, DEPARTMENT_NAME, SECTIONS_NAME);
END;
Now the problem is: how can I insert, or identify, the values which already exist (based on the primary key) in the DEPARTMENT and SECTIONS tables, and thereafter insert those existing values into the LIST_1 table?
------An updated effort --------------
I came up with another solution, but this again is giving me problems. In the procedure below, the cursor tends to repeat rows for every XQuery. I don't know how I am going to handle this issue.
DECLARE
department_id varchar2(20);
department_name varchar2(20);
sect_id varchar2(20);
sect_name varchar2(20);
sections_unit_id varchar2(20);
var number;
CURSOR C1 IS
select
sect.id as sect_id
, sect.name as sect_name
, sect.unit_id as sections_unit_id
from OFFLINE_XML
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'UNIT_ID'
, "NAME" varchar2(20) path 'UNIT_NAME'
) DEPARTMENT
, xmltable('/ROWSET/DEPARTMENT'
passing OFFLINE_XML.xml_file
columns
"ID" varchar2(20) path 'SECTIONS_ID'
, "NAME" varchar2(20) path 'SECTIONS_NAME'
, "DEPARTMENT_ID" varchar2(20) path 'DEPARTMENT_ID'
) sect
where status = 3;
BEGIN
FOR R_C1 IN C1 LOOP
BEGIN
var :=1;
--insert into sections_temp_1 (id, name)values ( R_C1.sect_id, R_C1.sect_name);
-- commit;
dbms_output.put_line('Duplicate='||var);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
dbms_output.put_line('Duplicate='||R_C1.sect_id);
END;
var:=var+1;
END LOOP;
END;
It seems that, first of all, you need a slightly more complicated XQuery to extract rows from the XMLType field.
There is no need to extract sections and departments separately and then try to match them back up.
Try this variant:
select
department_id,
department_name,
sections_id,
sections_name
from
OFFLINE_XML xml_list,
xmltable(
'
for $dept in $param/ROWSET/DEPARTMENT
return $dept
'
passing xml_list.xml_file as "param"
columns
"DEPARTMENT_ID" varchar2(100) path '//DEPARTMENT/DEPARTMENT_ID',
"DEPARTMENT_NAME" varchar2(4000) path '//DEPARTMENT/DEPARTMENT_NAME',
"SECTIONS_ID" varchar2(100) path '//DEPARTMENT/SECTIONS_ID',
"SECTIONS_NAME" varchar2(4000) path '//DEPARTMENT/SECTIONS_NAME'
) section_list
where
xml_list.Status = 3
SQL fiddle - 1
After that you have a dataset which can be outer-joined to the existing tables on their primary keys (or on something else, depending on the required logic) if you want to find out whether any values already exist:
select
offline_set.offline_xml_id,
offline_set.department_id,
offline_set.department_name,
offline_set.sections_id,
offline_set.sections_name,
nvl2(dept.id,'Y', 'N') is_dept_exists,
nvl2(sect.id,'Y', 'N') is_sect_exists
from
(
[... skipped text of previous query ...]
) offline_set,
department dept,
sections sect
where
dept.id (+) = offline_set.department_id
and
sect.id (+) = offline_set.sections_id
SQL fiddle - 2
Because I am actually unaware of the logic behind these requirements, I can't suggest any further processing instructions. But it seems that you are missing a reference to the OFFLINE_XML table in LIST_1, which is needed to identify the source of errors/duplicates.
The best way to do this would be with Oracle's built-in error logging. Use DBMS_ERRLOG.CREATE_ERROR_LOG() to generate a logging table for each target table (i.e. SECTIONS and DEPARTMENT in your case). Find out more.
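For example (a minimal sketch; by default DBMS_ERRLOG names the log ERR$_ plus the target table name, and gives it the ORA_ERR_* audit columns plus a VARCHAR2 copy of each target column):
begin
    dbms_errlog.create_error_log(dml_table_name => 'DEPARTMENT');  -- creates ERR$_DEPARTMENT
    dbms_errlog.create_error_log(dml_table_name => 'SECTIONS');    -- creates ERR$_SECTIONS
end;
/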
The syntax for using these tables with INSERT ALL is not intuitive but this is what to do:
insert all
into department (id, name)
values (unit_id, unit_name)
log errors into err$_department ('XML Load failure')
into sections (id, name, department_id)
values ( sect_id, sect_name, department_id)
log errors into err$_sections ('XML Load failure')
select department.id as department_id
....
You can put any (short-ish) string into the error log label, but make sure it's something which will help you locate the relevant records. You may wish to set the REJECT LIMIT to some value, depending on whether you wish to fail on one (or a couple of) errors, or process the whole XML and sort it out afterwards. Find out more.
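For instance, a single-table version of the load with the full clause might look like this (illustrative only, reusing the XMLTABLE query from the question; REJECT LIMIT UNLIMITED means the statement never aborts and every bad row goes to the log):
insert into department (id, name)
select d.id, d.name
from   offline_xml o,
       xmltable('/ROWSET/DEPARTMENT'
                passing o.xml_file
                columns "ID"   varchar2(20) path 'DEPARTMENT_ID',
                        "NAME" varchar2(20) path 'DEPARTMENT_NAME') d
where  o.status = 3
log errors into err$_department ('XML Load failure') reject limit unlimited;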
I suggest you use a separate log for each target table rather than one log for both, for two reasons:
In my experience, solutions which leverage Oracle's built-in features tend to scale better and be more robust than hand-rolled code.
It's a better fit for what might happen. You have three circumstances which might cause loading to hurl DUP_VAL_ON_INDEX:
Record has duplicate Department ID
Record has duplicate Section ID
Record has duplicate Department ID and duplicate Section ID
Separate tables make it easier to understand what's gone awry. This is a major boon when loading large amounts of data.
"i need to inform my user that this much of duplicate entries were
found in xml"
You can still do that with two error logs. Heck, you can even join the error logs into a view called LIST_1 if that is so very important to you.
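For what it's worth, such a view could be as simple as this (a sketch only; the ORA_ERR_* columns come from the generated log tables, and the column list here is illustrative rather than your exact LIST_1 layout):
create or replace view list_1 as
select 'DEPARTMENT'  as source_table,
       ora_err_tag$  as load_tag,
       id,
       name,
       ora_err_mesg$ as error_message
from   err$_department
union all
select 'SECTIONS',
       ora_err_tag$,
       id,
       name,
       ora_err_mesg$
from   err$_sections;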
I want to create a stored procedure which updates or inserts into a table based on whether the current row already exists in the table.
This is what I have come up with so far:
PROCEDURE SP_UPDATE_EMPLOYEE
(
SSN VARCHAR2,
NAME VARCHAR2
)
AS
BEGIN
IF EXISTS(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN)
--what ? just carry on to else
ELSE
INSERT INTO pb_mifid (ssn, NAME)
VALUES (SSN, NAME);
END;
Is this the way to achieve this?
This is quite a common pattern. Depending on what version of Oracle you are running, you could use the MERGE statement (it was introduced in Oracle 9i).
create table test_merge (id integer, c2 varchar2(255));
create unique index test_merge_idx1 on test_merge(id);
merge into test_merge t
using (select 1 id, 'foobar' c2 from dual) s
on (t.id = s.id)
when matched then update set c2 = s.c2
when not matched then insert (id, c2)
values (s.id, s.c2);
Merge is intended to merge data from a source table, but you can fake it for individual rows by selecting the data from dual.
If you cannot use merge, then optimize for the most common case. Will the proc usually not find a record and need to insert it, or will it usually need to update an existing record?
If inserting will be most common, code such as the following is probably best:
begin
    insert into t (columns)
    values ();
exception
    when dup_val_on_index then
        update t set cols = values;
end;
If update is the most common, then turn the procedure around:
begin
    update t set cols = values;
    if sql%rowcount = 0 then
        -- nothing was updated, so the record doesn't exist; insert it.
        insert into t (columns)
        values ();
    end if;
end;
You should not issue a select to check for the row and make the decision based on the result; that means you will always run two SQL statements, when you can get away with one most of the time (or always, if you use merge). The fewer SQL statements you use, the better your code will perform.
BEGIN
INSERT INTO pb_mifid (ssn, NAME)
select SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN);
END;
UPDATE:
Attention: you should name your parameter p_ssn (to distinguish it from the column SSN), and the query becomes:
INSERT INTO pb_mifid (ssn, NAME)
select P_SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = P_SSN);
because this always matches every row (the column is simply compared with itself):
SELECT * FROM tblEMPLOYEE a where a.ssn = SSN
In PL/SQL, I'm writing a stored procedure that uses a DB link:
CREATE OR REPLACE PROCEDURE Order_Migration(us_id IN NUMBER, date_id in DATE)
as
begin
INSERT INTO ORDERS(order_id, company_id)
SELECT ORDER_ID_SEQ.nextval, COMPANY_ID
FROM ORDERS#SOURCE
WHERE USER_ID = us_id AND DUE_DATE = date_ID;
end;
It takes all orders placed on a certain day by a certain user and inserts them into the new database. It calls a sequence to make sure there are no repeated PKs on the orders, and it works well.
However, I want the same procedure to do a second INSERT into another table that has order_id as a foreign key. So I need to add all the order_ids just created, along with the data from SOURCE that matches:
INSERT INTO ORDER_COMPLETION(order_id, completion_dt)
SELECT ????, completion_dt
FROM ORDER_COMPLETION#SOURCE
How can I keep track of which order_id that was just created matches up to the one whose data I need to pull from the source database?
I looked into making a temporary table, but you can't create those in a procedure.
Other info: I'll be calling this procedure from a C# app I'm writing
I'm not sure that I follow the question. If there is an ORDERS table and an ORDER_COMPLETION table in the remote database, wouldn't there be some key on the source system that related those two tables? If that key is the ORDER_ID, why would you want to re-assign that key in your procedure? Wouldn't you want to maintain the ORDER_ID from the source system?
If you do want to re-assign the ORDER_ID locally, I would tend to think that you'd want to do something like
CREATE OR REPLACE PROCEDURE order_migration( p_user_id IN orders.user_id%type,
p_due_date IN orders.due_date%type )
AS
TYPE order_rec IS RECORD( new_order_id NUMBER,
old_order_id NUMBER,
company_id NUMBER,
completion_dt DATE );
TYPE order_arr IS TABLE OF order_rec;
l_orders order_arr;
BEGIN
SELECT order_id_seq.nextval,
o.order_id,
o.company_id,
oc.completion_dt
BULK COLLECT INTO l_orders
FROM orders#source o,
order_completion#source oc
WHERE o.order_id = oc.order_id
AND o.user_id = p_user_id
AND o.due_date = p_due_date;
FORALL i IN l_orders.FIRST .. l_orders.LAST
INSERT INTO orders( order_id, company_id )
VALUES( l_orders(i).new_order_id, l_orders(i).company_id );
FORALL i IN l_orders.FIRST .. l_orders.LAST
INSERT INTO order_completion( order_id, completion_dt )
VALUES( l_orders(i).new_order_id, l_orders(i).completion_dt );
END;
You could also do a single FOR loop with two INSERT statements rather than two FORALL loops. And if you're pulling a lot of data each time, you probably want to pull the data in chunks from the remote system by adding a loop and a LIMIT to the BULK COLLECT.
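For illustration, the chunked variant might look roughly like this (a sketch only; the 500-row limit is arbitrary, and referencing NEXTVAL directly in a PL/SQL expression needs 11g or later, otherwise select it from dual):
CREATE OR REPLACE PROCEDURE order_migration( p_user_id  IN orders.user_id%TYPE,
                                             p_due_date IN orders.due_date%TYPE )
AS
    TYPE order_rec IS RECORD( new_order_id  orders.order_id%TYPE,
                              company_id    orders.company_id%TYPE,
                              completion_dt order_completion.completion_dt%TYPE );
    TYPE order_arr IS TABLE OF order_rec;
    l_orders order_arr;

    CURSOR c_src IS
        SELECT CAST(NULL AS NUMBER) AS new_order_id,   -- filled in after the fetch
               o.company_id,
               oc.completion_dt
        FROM   orders#source o
        JOIN   order_completion#source oc ON oc.order_id = o.order_id
        WHERE  o.user_id  = p_user_id
        AND    o.due_date = p_due_date;
BEGIN
    OPEN c_src;
    LOOP
        FETCH c_src BULK COLLECT INTO l_orders LIMIT 500;
        EXIT WHEN l_orders.COUNT = 0;

        -- Assign the new keys once so both inserts agree on them.
        FOR i IN 1 .. l_orders.COUNT LOOP
            l_orders(i).new_order_id := order_id_seq.NEXTVAL;
        END LOOP;

        FORALL i IN 1 .. l_orders.COUNT
            INSERT INTO orders( order_id, company_id )
            VALUES ( l_orders(i).new_order_id, l_orders(i).company_id );

        FORALL i IN 1 .. l_orders.COUNT
            INSERT INTO order_completion( order_id, completion_dt )
            VALUES ( l_orders(i).new_order_id, l_orders(i).completion_dt );
    END LOOP;
    CLOSE c_src;
END;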
There must be some link between the rows in ORDERS#SOURCE and ORDERS, and between ORDERS#SOURCE and ORDER_COMPLETION#SOURCE, so can you not use a join?
Something like:
INSERT INTO ORDER_COMPLETION(order_id, completion_dt)
SELECT o.order_id, ocs.completion_dt
FROM ORDER_COMPLETION#SOURCE ocs
JOIN ORDERS o ON o.xxx = ocs.xxx
I have a question regarding a unified insert query against tables with different data
structures (Oracle). Let me elaborate with an example:
tb_customers (
id NUMBER(3), name VARCHAR2(40), archive_id NUMBER(3)
)
tb_suppliers (
id NUMBER(3), name VARCHAR2(40), contact VARCHAR2(40), xxx, xxx,
archive_id NUMBER(3)
)
The only column that is present in all tables is [archive_id]. The plan is to create a new archive of the dataset by copying (duplicating) all records to a different database partition and incrementing the archive_id for those records accordingly. [archive_id] is always part of the primary key.
My problem is with select statements to do the actual duplication of the data. Because the columns are variable, I am struggling to come up with a unified select statement that will copy the data and update the archive_id.
One solution (that works), is to iterate over all the tables in a stored procedure and do a:
CREATE TABLE temp as (SELECT * from ORIGINAL_TABLE);
UPDATE temp SET archive_id=something;
INSERT INTO ORIGINAL_TABLE (select * from temp);
DROP TABLE temp;
I do not like this solution very much as the DDL commands muck up all restore points.
Does anyone else have any solution?
How about creating a global temporary table for each base table?
create global temporary table tb_customers$ as select * from tb_customers;
create global temporary table tb_suppliers$ as select * from tb_suppliers;
You don't need to create and drop these each time, just leave them as-is.
Your archive process is then a single transaction...
insert into tb_customers$ select * from tb_customers;
update tb_customers$ set archive_id = :v_new_archive_id;
insert into tb_customers select * from tb_customers$;
insert into tb_suppliers$ select * from tb_suppliers;
update tb_suppliers$ set archive_id = :v_new_archive_id;
insert into tb_suppliers select * from tb_suppliers$;
commit; -- this will clear the global temporary tables
Hope this helps.
I would suggest not having a single SQL statement for all tables, and just using an insert per table.
insert into tb_customers_2
select id, name, 'new_archive_id' from tb_customers;
insert into tb_suppliers_2
select id, name, contact, xxx, xxx, 'new_archive_id' from tb_suppliers;
Or, if you really need a single SQL statement for all of them, at least pre-create all the temp tables (as temp tables) and leave them in place for next time. Then just use dynamic SQL to refer to the temp table (see the sketch after the next block).
insert into ORIGINAL_TABLE_TEMP (SELECT * from ORIGINAL_TABLE);
UPDATE ORIGINAL_TABLE_TEMP SET archive_id=something;
INSERT INTO NEW_TABLE (select * from ORIGINAL_TABLE_TEMP);
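The dynamic version might then look roughly like this (a sketch only, assuming each pre-created temp table is named <base_table>_TEMP; the table list and new archive id shown here are hypothetical):
DECLARE
    TYPE name_list IS TABLE OF VARCHAR2(30);
    l_tables         name_list := name_list('TB_CUSTOMERS', 'TB_SUPPLIERS');
    l_new_archive_id NUMBER    := 2;
BEGIN
    FOR i IN 1 .. l_tables.COUNT LOOP
        EXECUTE IMMEDIATE 'INSERT INTO ' || l_tables(i) || '_TEMP SELECT * FROM ' || l_tables(i);
        EXECUTE IMMEDIATE 'UPDATE ' || l_tables(i) || '_TEMP SET archive_id = :new_id'
            USING l_new_archive_id;
        EXECUTE IMMEDIATE 'INSERT INTO ' || l_tables(i) || ' SELECT * FROM ' || l_tables(i) || '_TEMP';
        -- Not needed for ON COMMIT DELETE ROWS global temporary tables, but harmless.
        EXECUTE IMMEDIATE 'DELETE FROM ' || l_tables(i) || '_TEMP';
    END LOOP;
    COMMIT;
END;
/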