PL/SQL Stored Procedure create tables - oracle

I've been tasked with improving old PL/SQL and Oracle SQL legacy code. In all there are around 7000 lines of code! One aspect of the existing code that really surprises me is the previous coder needlessly created hundreds of lines of code by not writing any procedures or functions - instead the coder essentially repeats the same code throughout.
For example, in the existing code there are literally 40 or more repetitions of the following SQL:
CREATE TABLE tmp_clients
AS
SELECT * FROM live.clients;
CREATE TABLE tmp_customers
AS
SELECT * FROM live.customers;
CREATE TABLE tmp_suppliers
AS
SELECT * FROM live.suppliers WHERE type_id = 1;
and many, many more.....
I'm very new to writing in PL/SQL, though I have recently purchased the excellent book "Oracle PL/SQL programming" by Steven Feuerstein. However, as far as I can tell, I should be able to write a callable procedure such as:
procedure create_temp_table (new_table_nme in varchar2,
                             source_table  in varchar2)
IS
    s_query varchar2(100);
BEGIN
    s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
    execute immediate s_query;
EXCEPTION
    WHEN OTHERS THEN
        IF SQLCODE = -955 THEN
            NULL;
        ELSE
            RAISE;
        END IF;
END;
I would then simply call the procedure as follows:
create_temp_table('tmp.clients', 'live.clients');
create_temp_table('tmp.customers', 'live.customers');
Is my proposed approach reasonable given the problem as stated?
Are the datatypes in the procedure call reasonable, ie should varchar2(60) be used, or is it possible to force the 'source_table' parameter to be a table name in the schema? What happens if the table name is more than 60 characters?
I want to be able to pass a third non-required parameter in cases where the data has to be restricted in a trivial way, ie to deal with cases "WHERE type_id = 1". How do I modify the procedure to include a parameter that is only used occasionally and how would I modify the rest of the code. I would probably add some sort of IF/ELSE statement to check whether the third parameter was not NULL and then construct the s_query accordingly.
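Something like this rough, untested sketch is what I have in mind (the DEFAULT NULL parameter and the WHERE handling are my guesses at the idiom):
procedure create_temp_table (new_table_nme in varchar2,
                             source_table  in varchar2,
                             where_clause  in varchar2 default null)
IS
    s_query varchar2(1000);
BEGIN
    s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
    -- append a filter only when the optional parameter is supplied,
    -- e.g. create_temp_table('tmp_suppliers', 'live.suppliers', 'type_id = 1')
    IF where_clause IS NOT NULL THEN
        s_query := s_query || ' WHERE ' || where_clause;
    END IF;
    execute immediate s_query;
END;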
How would I check that the table has actually been created successfully?
I want to trap for two other exceptions, namely
The new table (eg 'tmp.clients') already exists; and
The source table doesn't exist.
Does the EXCEPTION as written handle these cases?
More generally, from where can I obtain the SQL error codes and their meanings?
Any suggested improvements to the code would be gratefully received.

You could get rid of a lot of code (gradually!) by using GLOBAL temporary tables.
Execute immediate is not bad practice, but if there are other options they should be used. Global temp tables are common where you want to extract and transform data but, once it is processed, don't need it anymore until the next load. Each session sees only the data it inserts, and far less redo is generated than for permanent tables. You can index the data for faster querying if required.
Something like this
-- Create table
create global temporary table GT_CLIENTS
(
    id                    NUMBER(10) not null,
    client_id             NUMBER(10) not null,
    modified_by_id        NUMBER(10),
    transaction_id        NUMBER(10),
    local_transaction_id  VARCHAR2(30) not null,
    last_modified_date_tz TIMESTAMP(6) WITH TIME ZONE not null
)
on commit preserve rows;
I recommend the on commit preserve rows option so that you can debug your procedure and see what went into the table.
Usage would be
INSERT INTO GT_CLIENTS
SELECT * FROM live.clients;

If this is the route you want to take to minimise changes, note that the error for "source table does not exist" is ORA-00942, and you will want to stop on it rather than continue, since your temp table would not have been created. Similarly, silently continuing on an "object already exists" error (ORA-00955) will be problematic, because you will not have reloaded the table with new data: the create failed, so the table still holds the data from the last run. So I would definitely do some more thinking about your exception handler.
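For instance, a rough, untested reworking of the procedure from your question: drop the old copy first, ignoring only ORA-00942 ("table or view does not exist") on the drop, so that every run rebuilds the data, and let everything else, including a missing source table, fail loudly:
procedure create_temp_table (new_table_nme in varchar2,
                             source_table  in varchar2)
IS
BEGIN
    BEGIN
        execute immediate 'DROP TABLE ' || new_table_nme;
    EXCEPTION
        WHEN OTHERS THEN
            IF SQLCODE != -942 THEN  -- ignore only "table does not exist"
                RAISE;
            END IF;
    END;
    -- deliberately no handler here: if the source table is missing (ORA-00942)
    -- or the create fails for any other reason, the error should propagate
    execute immediate 'CREATE TABLE ' || new_table_nme
                   || ' AS SELECT * FROM ' || source_table;
END;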
That said, I also concur that this is generally not the best way to do things. Creating and dropping objects in a multi-user environment is a disaster in the making, and seems a silly waste of resources when there are more appropriate options available.

Related

Problems inserting data into Oracle table with sequence column via SSIS

I am inserting data into an Oracle table that has a sequence set on one of its columns, say the Id column. I would like to know how to do data loads into such tables.
I followed the below link -
It's possible to use OleDbConnections with the Script Component?
and tried to create a function to get the .nextval from the Oracle table but I am getting the following error -
Error while trying to retrieve text for error ORA-01019
I realized that manually setting the value via the package (i.e. by using the Script task to enumerate the values) does not increment the sequence, and that is causing the problem. How do we deal with it? Any links that can help me solve it?
I am using SSIS 2014 but am not able to tag it as such due to a paucity of reputation points.
I created a workaround to cater to this problem. I created staging tables mirroring the destination tables but without the column that takes the sequence Id. After the data gets inserted, I call a SQL statement to move the data from the staging table into the main tables, using the .nextval function, and finally truncate/drop the staging table depending on the need. It would still be interesting to know how this same thing can be handled via script rather than with this workaround.
For instance something like below -
insert into table_main
select table_main_sequence_name.nextval
      ,s.*
from (
    select *
    from table_stg
) s;
ORA-01019 may be related to the fact that you have multiple Oracle clients installed. Please check whether the ORACLE_HOME variable points to just one client.
One workaround I'm thinking about is creating two stored routines for handling the sequence. One to get the value you start with:
create or replace function get_first_from_seq return number as
    seqid number;
begin
    select seq_name.nextval into seqid from dual;
    return seqid;
end;
/
Then do your incrementing in the script, and after that call a second procedure to push the sequence forward:
create or replace procedure setseq(val number) as
begin
execute immediate 'ALTER SEQUENCE seq_name INCREMENT BY ' || val;
end;
/
This is not a good approach, but maybe it will solve your problem.
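A hypothetical usage sketch of the two routines together, assuming the script loads n rows and nothing else touches the sequence in between. Note that ALTER SEQUENCE only changes the increment, so you still need one NEXTVAL fetch and then a reset of the increment back to 1:
declare
    start_id number;
    n        number := 100;  -- rows loaded by the script (hypothetical)
    dummy    number;
begin
    start_id := get_first_from_seq;  -- script assigns start_id .. start_id + n - 1
    setseq(n - 1);                   -- widen the increment
    select seq_name.nextval into dummy from dual;  -- jumps to start_id + n - 1
    setseq(1);                       -- restore the increment; next value is start_id + n
end;
/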

Hung up on For In Loop and variable for external table loader

I have to load a number of files every day into our database system. My solution was to use a Java procedure to generate a table of all the files in the directory folder and then loop through each of them with the external table loader. I'm running into two hang-ups with this:
Declare
what_to_load VARCHAR2(255);
CURSOR folder_contents
IS
select filename
from database.DIR_LIST
where filename like 'DCOpenOrders_%'
and filename like '%.csv';
BEGIN
DELETE FROM database.DIR_LIST;
database.GET_DIR_LIST( 'directory_path_files_are_in' );
FOR each_record IN folder_contents
LOOP
what_to_load := each_record.filename;
EXECUTE IMMEDIATE 'DROP table database.my_table';
execute immediate 'CREATE table database.my_table
(Region VARCHAR2(10),
District VARCHAR2(10),
Originating_Store VARCHAR2(80),
Order_Date VARCHAR2(30),
Ship_Location VARCHAR2(10),
Orig_Ord_No VARCHAR2(30),
Field_G VARCHAR2(30),
Line_No VARCHAR2(10),
POS_UPC VARCHAR2(30),
Item_Descr VARCHAR2(80),
Ord_Qty VARCHAR2(10),
Line_Status VARCHAR2(30),
Report_Date VARCHAR2(30),
Ship_Type VARCHAR2(30),
ERR_FLAG VARCHAR2(10),
ERR_LOG VARCHAR2(800)
)
ORGANIZATION EXTERNAL
( type oracle_loader
default directory WORK_DIR
access parameters
( records delimited by NEWLINE
skip 1
fields terminated by '',''
optionally enclosed by ''"''
missing FIELD VALUES are NULL)
location ('''||each_record.filename||''')
)
reject limit unlimited';
Execute Immediate 'Grant All on database.my_table to USER';
-- merge statement goes here
End Loop;
commit;
end;
Again, the idea is that every time this runs it will get the new list of CSV files into the dir_list table with the Java procedure get_dir_list, then for every file name, assign it to the variable and use the variable in the external table definition to load up the file.
I'm running into problems.
EDIT: OK, after making the corrections below to the cursor row identification, I now hit the point where the second pass through appears to go wrong, as if my cursor is missing: the loop runs just fine if the only action is a put_line, but with an execute immediate statement in there, such as the "Grant All", it completes one pass, then throws ORA-08103 at the top of the loop and refuses to go on.
I'm also aware of an Ask Tom thread on this (https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:37593123416931) that says to use the alter table command. However, when I try that, it doesn't accept my attempt:
execute immediate 'alter table database.my_table location('''||filename||''')';
throws out an error (plus I'd still need to get it to do another loop there to put the name of the current file into the external loader)
Any suggestions or help? I should note that we are on Windows, not Unix (since most solutions people offer in these places assume the latter), and I can't grab another program or module to do the job due to approval restrictions (since that seems to be another common solution).
Thanks!
For your first problem: your cursor loop variable is confusingly called filename, and you are referring to that record directly instead of to the column from the cursor. Changing the name slightly makes it a little clearer:
FOR filenames IN folder_contents
LOOP
what_to_load := filenames.filename;
The rest is less obvious, but it isn't going to be happy that you're dropping and recreating the table in the middle of a block that refers to it statically. You need to make all references dynamic:
execute immediate 'Grant All on database.my_table ...';
-- grant to who/what? and why?
And your merge will have to be dynamic too. At least unless you can get the alter table to work, but you haven't said what the problem is with that. Actually, from what you posted, that's the same cursor variable reference problem:
execute immediate 'alter table database.my_table location('''||filenames.filename||''')';
If you aren't dropping/creating the table in the block, and create it once statically and just alter it, then you can use a static merge - just the alter needs to be dynamic.
A simpler approach might be to create the external table once, with a specific fixed name; loop through the list of real files; and for each of those in turn, rename or copy that to the fixed file name and perform the merge. Each time you query the external table it rereads the file anyway, so changing its contents in the background is OK. Dropping/recreating or even altering the table then wouldn't be necessary.
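An untested sketch of that idea, where fixed_orders_ext is a hypothetical external table created once over the fixed file name current_load.csv, and the merge target, keys, and columns are placeholders:
declare
    cursor folder_contents is
        select filename
        from database.dir_list
        where filename like 'DCOpenOrders_%'
          and filename like '%.csv';
begin
    for rec in folder_contents loop
        -- overwrite the fixed file the external table points at
        utl_file.fcopy('WORK_DIR', rec.filename, 'WORK_DIR', 'current_load.csv');
        -- the external table rereads the file on every query,
        -- so a static merge picks up the new contents
        merge into database.open_orders t
        using (select * from database.fixed_orders_ext) s
        on (t.orig_ord_no = s.orig_ord_no and t.line_no = s.line_no)
        when matched then update set t.line_status = s.line_status
        when not matched then insert (orig_ord_no, line_no, line_status)
            values (s.orig_ord_no, s.line_no, s.line_status);
    end loop;
end;
/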
You could also, as that Ask Tom post mentions, supply all the file names to the external table at once, since they have the same structure, either with the drop/create or with the alter approach.
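For the all-files-at-once variant, something like this untested snippet (LISTAGG needs 11gR2 or later) could build the LOCATION clause from the directory listing:
declare
    loc_list varchar2(4000);
begin
    select listagg('''' || filename || '''', ', ') within group (order by filename)
    into loc_list
    from database.dir_list
    where filename like 'DCOpenOrders_%'
      and filename like '%.csv';
    execute immediate 'alter table database.my_table location (' || loc_list || ')';
end;
/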

Oracle Temporary Table to convert a Long Raw to Blob

Questions have been asked in the past that seem to address pieces of my full question, but I'm not finding a really good complete answer.
Here is the situation:
I'm importing data from an old, but operational and production, Oracle server.
One of the columns is created as LONG RAW.
I will not be able to convert the table to a BLOB.
I would like to use a global temporary table to pull out data each time I call to the server.
This feels like a good answer, from here: How to create a temporary table in Oracle
CREATE GLOBAL TEMPORARY TABLE newtable
ON COMMIT PRESERVE ROWS
AS SELECT
MYID, TO_LOB("LONGRAWDATA") "BLOBDATA"
FROM oldtable WHERE .....;
I do not want the table hanging around, and I'd only do a chunk of rows at a time, to pull out the old table in pieces, each time killing the table. Is it acceptable behavior to do the CREATE, then do the SELECT, then DROP?
Thanks..
--- EDIT ---
Just as a follow up, I decided to take an even different approach to this.
Branching the strong-oracle package, I was able to do what I originally hoped to do, which was to pull the data directly from the table without doing a conversion.
Here is the issue I've posted. If I am allowed to publish my code to a branch, I'll post a follow up here for completeness.
Oracle ODBC Driver Release 11.2.0.1.0 says that Prefetch for LONG RAW data types is supported, which is true.
One caveat is that LONG RAW can technically be up to 2GB in size. I had to set a hard max size of 10MB in the code, which is adequate for my use, so far at least. This probably could be a variable sent in to the connection.
This fix is a bit off original topic now however, but it might be useful to someone else.
With Oracle GTTs, it is not necessary to drop and create the table each time, and you don't need to worry about data "hanging around." In fact, it's inadvisable to drop and re-create. The structure itself persists, but the data in it does not: the data only persists within each session. You can test this by opening up two separate clients, loading data with one, and noticing that it's not there in the second client.
In effect, each time you open a session, it's like you are reading a completely different table, which was just truncated.
If you want to empty the table within your stored procedure, you can always truncate it. Within a stored proc, you will need to execute immediate if you do this.
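For example, inside a procedure (MY_GTT being a placeholder name):
execute immediate 'truncate table MY_GTT';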
This is really handy, but it also can make debugging a bear if you are implementing GTTs through code.
Out of curiosity, why a chunk at a time and not the entire dataset? What kind of data volumes are you talking about?
-- EDIT --
Per our comments conversation, this is very raw and untested, but I hope it will give you an idea what I mean:
CREATE OR REPLACE PROCEDURE LOAD_DATA
AS
    TOTAL_ROWS     number;
    TABLE_ROWS     number := 1;
    ROWS_AT_A_TIME number := 100;
BEGIN
    select count(*)
    into TOTAL_ROWS
    from oldtable;
    WHILE TABLE_ROWS <= TOTAL_ROWS
    LOOP
        execute immediate 'truncate table MY_TEMP_TABLE';
        -- page by rowid: a bare "ROWNUM BETWEEN x AND y" returns no rows once x > 1
        insert into MY_TEMP_TABLE
        SELECT MYID, TO_LOB(LONGRAWDATA) as BLOBDATA
        FROM oldtable
        WHERE rowid IN (
              select rid
              from (select rowid as rid, rownum as rn from oldtable)
              where rn between TABLE_ROWS and TABLE_ROWS + ROWS_AT_A_TIME - 1
        );
        insert into MY_REMOTE_TABLE@MY_REMOTE_SERVER
        select * from MY_TEMP_TABLE;
        commit;
        TABLE_ROWS := TABLE_ROWS + ROWS_AT_A_TIME;
    END LOOP;
    commit;
end LOAD_DATA;

Alternate method to global temp tables for Oracle Stored Procedure

I have read and understand that Oracle uses only global temp tables, unlike MS SQL Server, which allows local #temp tables. The situation I have would call for me to create hundreds of global temp tables in order to complete the DB conversion I am working on from MS SQL to Oracle. I want to know if there is another method out there, within an Oracle stored procedure, other than creating all of these tables, which would have to be maintained in the DB.
Thank You
" Most of the time the only thing the temp tables are used within a
stored proc and then truncated at the end. We do constant upgrades to
our applications and having them somewhat comparable ensures that when
a change is made in one version that it can be easily merged to the
other."
T-SQL Temp tables are essentially memory structures. They provide benefits in MSSQL which are less obvious in Oracle, because of differences in the two RDBMS architectures. So if you were looking to migrate then you would be well advised to take an approach more fitted to Oracle.
However, you have a different situation, and obviously keeping the two code bases in sync will make your life easier.
The closest thing to temporary tables as you want to use them are PL/SQL collections; specifically, nested tables.
There are a couple of ways of declaring these. The first is to use a SQL template - a cursor - and define a nested table type based on it. The second is to declare a record type and then define a nested table on that. In either case, populate the collection variable with a bulk operation.
declare
-- approach #1 - use a cursor
cursor c1 is
select *
from t23;
type nt1 is table of c1%rowtype;
recs1 nt1;
-- approach #1a - use a cursor with an explicit projection
cursor c1a is
select id, col_d, col_2
from t23;
type nt1a is table of c1a%rowtype;
recs1a nt1a;
-- approach #2 - use a PL/SQL record
type r2 is record (
my_id number
, some_date date
, a_string varchar2(30)
);
type nt2 is table of r2;
recs2 nt2;
begin
select *
bulk collect into recs1
from t23;
select id, col_d, col_2
bulk collect into recs2
from t23;
end;
/
Using a cursor offers the advantage of automatically reflecting changes in the underlying table(s), whereas the record offers stability in the face of such changes. It just depends what you want :)
There's a whole chapter in the PL/SQL reference manual. Read it to find out more.

Return REF CURSOR to procedure generated data

I need to write a sproc which performs some INSERTs on a table and compiles a list of "statuses" for each row, based on how well the INSERT went. Each row is inserted within a loop, and the loop iterates over a cursor that supplies some values for the INSERT statement. What I need to return is a resultset which looks like this:
FIELDS_FROM_ROW_BEING_INSERTED.., STATUS VARCHAR2
The STATUS is determined by how the INSERT went. For instance, if the INSERT caused a DUP_VAL_ON_INDEX exception indicating there was a duplicate row, I'd set the STATUS to "Dupe". If all went well, I'd set it to "SUCCESS" and proceed to the next row.
By the end of it all, I'd have a resultset of N rows, where N is the number of insert statements performed and each row contains some identifying info for the row being inserted, along with the "STATUS" of the insertion
Since there is no table in my DB to store the values I'd like to pass back to the user, I'm wondering how I can return the info. A temporary table? It seems that in Oracle temporary tables are "global", and I'm not sure I'd want a global table. Are there any temporary tables that get dropped after a session is done?
If you are using Oracle 10gR2 or later then you should check out DML error logging. This basically does what you want to achieve, that is, it allows us to execute all the DML in a batch process by recording any errors and pressing on with the statements.
The principle is that we create an error log table for each table we need to work with, using the built-in PL/SQL package DBMS_ERRLOG. A simple extension to the DML syntax then logs failed rows to the error log table. This approach doesn't create any more objects than your proposal, and has the merit of using standard Oracle functionality.
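A minimal sketch with hypothetical table names: create the error log once with DBMS_ERRLOG, then add the LOG ERRORS clause to the insert:
begin
    -- creates ERR$_TARGET_TABLE alongside the target
    dbms_errlog.create_error_log(dml_table_name => 'TARGET_TABLE');
end;
/
insert into target_table
select * from staging_rows
log errors into err$_target_table ('batch 1') reject limit unlimited;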
When working with bulk processing (that is, when using the FORALL syntax) we can trap exceptions using the built-in SQL%BULK_EXCEPTIONS collection. It is possible to combine bulk exceptions with DML error logging, but that may create problems in 11g.
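Again a rough sketch with placeholder names, showing SAVE EXCEPTIONS and the SQL%BULK_EXCEPTIONS collection:
declare
    type t_ids is table of target_table.id%type;
    l_ids       t_ids;
    bulk_errors exception;
    pragma exception_init(bulk_errors, -24381);  -- ORA-24381: error(s) in array DML
begin
    select id bulk collect into l_ids from staging_rows;
    begin
        forall i in 1 .. l_ids.count save exceptions
            insert into target_table (id) values (l_ids(i));
    exception
        when bulk_errors then
            for j in 1 .. sql%bulk_exceptions.count loop
                dbms_output.put_line('Row ' || sql%bulk_exceptions(j).error_index
                    || ' failed: ' || sqlerrm(-sql%bulk_exceptions(j).error_code));
            end loop;
    end;
end;
/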
"Global" in the case of temporary tables just means they are permanent, it's the data which is temporary.
I would define a record type that matches your cursor, plus the status field. Then define a table of that type.
TYPE t_record IS RECORD
(
field_1,
...
field_n,
status VARCHAR2(30)
);
TYPE t_table IS TABLE OF t_record;
FUNCTION insert_records
(
p_rows_to_insert IN SYS_REFCURSOR
)
RETURN t_table;
Even better would be to also define the inputs as a table type instead of a cursor.
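As a rough, untested sketch of how the body might set STATUS per row (target_table and its columns are placeholders; the identifying fields would be copied from the fetched row into the result record):
FUNCTION insert_records
(
    p_rows_to_insert IN SYS_REFCURSOR
)
RETURN t_table
IS
    l_row    target_table%ROWTYPE;
    l_result t_table := t_table();
BEGIN
    LOOP
        FETCH p_rows_to_insert INTO l_row;
        EXIT WHEN p_rows_to_insert%NOTFOUND;
        l_result.EXTEND;
        -- copy the identifying fields from l_row into l_result(l_result.LAST) here
        BEGIN
            INSERT INTO target_table VALUES l_row;
            l_result(l_result.LAST).status := 'SUCCESS';
        EXCEPTION
            WHEN DUP_VAL_ON_INDEX THEN
                l_result(l_result.LAST).status := 'Dupe';
        END;
    END LOOP;
    RETURN l_result;
END insert_records;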
