I have read and understand that Oracle offers only global temporary tables, unlike MS SQL, which allows local #temp tables. The situation I have would call for me to create hundreds of global temporary tables in order to complete the DB conversion I am working on from MS SQL to Oracle. I want to know if there is another method available, within an Oracle stored procedure, other than creating all of these tables, which would then have to be maintained in the DB.
Thank You
" Most of the time the only thing the temp tables are used within a
stored proc and then truncated at the end. We do constant upgrades to
our applications and having them somewhat comparable ensures that when
a change is made in one version that it can be easily merged to the
other."
T-SQL temp tables are essentially memory structures. They provide benefits in MSSQL that are less obvious in Oracle, because of differences between the two RDBMS architectures. So if you were simply migrating, you would be well advised to take an approach better fitted to Oracle.
However, you have a different situation, and obviously keeping the two code bases in sync will make your life easier.
The closest thing to temporary tables as you want to use them is the PL/SQL collection; specifically, the nested table.
There are a couple of ways of declaring these. The first is to use a SQL template - a cursor - and define a nested table type based on it. The second is to declare a record type and then define a nested table on that. In either case, populate the collection variable with a bulk operation.
declare
    -- approach #1 - use a cursor
    cursor c1 is
        select *
        from   t23;
    type nt1 is table of c1%rowtype;
    recs1 nt1;

    -- approach #1a - use a cursor with an explicit projection
    cursor c1a is
        select id, col_d, col_2
        from   t23;
    type nt1a is table of c1a%rowtype;
    recs1a nt1a;

    -- approach #2 - use a PL/SQL record
    type r2 is record (
        my_id     number
      , some_date date
      , a_string  varchar2(30)
    );
    type nt2 is table of r2;
    recs2 nt2;
begin
    -- populate the collections with bulk operations
    select *
    bulk collect into recs1
    from   t23;

    select id, col_d, col_2
    bulk collect into recs2
    from   t23;
end;
/
Using a cursor offers the advantage of automatically reflecting changes in the underlying table(s), while the record offers stability in the face of such changes. It just depends what you want :)
There's a whole chapter in the PL/SQL reference manual. Read it to find out more.
Related
First of all, I usually work with MSSQL. But I have a stored procedure in MSSQL which I now need to use in Oracle, and since I am absolutely new to Oracle I have no idea at all how to do it correctly.
I needed to use user-defined table types in my MS SQL stored procedure because I am using "logical" tables in it, which I also need to pass to a dynamic SQL statement within this procedure (using column names of "physical" tables as variables/parameters).
I've started to add the Oracle function to a package I made before for another function. It looks like this:
TYPE resultRec IS RECORD
(
[result columns]
);
TYPE resultTable IS TABLE OF resultRec;
Function MyFunctionName([A LOT OF PARAMETERS]) RETURN resultTable PIPELINED;
I also described the layout of the tables (the user-defined table types in MSSQL) that I want to use within this function in this package header.
So far so good, but now I don't really know where I have to declare my table variables or user-defined table types. I tried putting them in the package header, but if I try to use these tables in the package body, where my function is described, Oracle tells me that the table or view does not exist.
I also tried describing the tables within the package body or in the block of my function, which looks like this:
FUNCTION MyFunctionName
(
    [MyParameters]
)
RETURN resultTable PIPELINED
IS
    rec resultrec;
    TYPE tableVariableA IS TABLE OF tableRecA;
    TYPE tableVariableB IS TABLE OF tableRecB;
BEGIN
INSERT INTO tableVariableA
SELECT ColumnA, ColumnB FROM physicalTable WHERE[...];
[A LOT MORE TO DO...]
END;
But in this case Oracle also tells me that it doesn't know the table or view.
I tried a few more things, but in the end I wasn't able to tell Oracle which table it should use...
I would appreciate any hint that helps me understand how Oracle works in this case. Thanks a lot!
You can't insert into a collection (e.g. a PL/SQL table). You can use the BULK COLLECT syntax to populate the collection:
SELECT ColumnA, ColumnB
BULK COLLECT INTO tableVariableA
FROM physicalTable
WHERE [...];
However, you might want to check this is an appropriate approach, since SQL Server and Oracle differ quite a bit. You can't use PL/SQL tables in plain SQL (at least prior to 12c), even inside your procedure, so you might need a schema-level type rather than a PL/SQL type, but it depends what you will do next. You might not really want a collection at all. Trying to convert T-SQL straight to PL/SQL without understanding the differences could lead you down a wrong path - make sure you understand the actual requirement and then find the best Oracle mechanism for that.
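To make that concrete, here is a minimal sketch of how the function body could populate a local collection with BULK COLLECT and then pipe rows back. The parameter, the fields of tableRecA, and the mapping into resultrec are hypothetical stand-ins (resultTable and resultrec are assumed to be declared in the package spec as in the question):
FUNCTION MyFunctionName
(
    p_type_id IN NUMBER  -- hypothetical parameter
)
RETURN resultTable PIPELINED
IS
    TYPE tableRecA IS RECORD (ColumnA NUMBER, ColumnB VARCHAR2(30));  -- hypothetical layout
    TYPE tableVariableA IS TABLE OF tableRecA;
    recsA tableVariableA;
    rec   resultrec;
BEGIN
    -- fill the "logical table" in one pass; a collection cannot be an INSERT target
    SELECT ColumnA, ColumnB
    BULK COLLECT INTO recsA
    FROM physicalTable
    WHERE type_id = p_type_id;

    -- transform each row and pipe it back to the caller
    FOR i IN 1 .. recsA.COUNT LOOP
        rec.result_col := recsA(i).ColumnB;  -- hypothetical mapping into resultrec
        PIPE ROW (rec);
    END LOOP;

    RETURN;
END;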
I've been tasked with improving old PL/SQL and Oracle SQL legacy code. In all, there are around 7000 lines of code! One aspect of the existing code that really surprises me is that the previous coder needlessly created hundreds of lines of code by not writing any procedures or functions - instead, the coder essentially repeats the same code throughout.
For example, in the existing code there are literally 40 or more repetitions of the following SQL:
CREATE TABLE tmp_clients
AS
SELECT * FROM live.clients;
CREATE TABLE tmp_customers
AS
SELECT * FROM live.customers;
CREATE TABLE tmp_suppliers
AS
SELECT * FROM live.suppliers WHERE type_id = 1;
and many, many more.....
I'm very new to writing PL/SQL, though I have recently purchased the excellent book "Oracle PL/SQL Programming" by Steven Feuerstein. However, as far as I can tell, I should be able to write a callable procedure such as:
procedure create_temp_table (new_table_nme in varchar2(60)
                           , source_table  in varchar2(60))
IS
    s_query varchar2(100);
BEGIN
    s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
    execute immediate s_query;
EXCEPTION
    WHEN OTHERS THEN
        IF SQLCODE = -955 THEN
            NULL;
        ELSE
            RAISE;
        END IF;
END;
I would then simply call the procedure as follows:
create_temp_table('tmp.clients', 'live.clients');
create_temp_table('tmp.customers', 'live.customers');
Is my proposed approach reasonable given the problem as stated?
Are the datatypes in the procedure call reasonable, i.e. should varchar2(60) be used, or is it possible to force the 'source_table' parameter to be a table name in the schema? What happens if the table name is more than 60 characters?
I want to be able to pass a third, non-required parameter for cases where the data has to be restricted in a trivial way, e.g. to deal with cases like "WHERE type_id = 1". How do I modify the procedure to include a parameter that is only used occasionally, and how would I modify the rest of the code? I would probably add some sort of IF/ELSE statement to check whether the third parameter is NULL and construct s_query accordingly - a sketch of what I mean is shown below.
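Something like this is what I have in mind (untested; the optional where_clause parameter and the sizes are just my guesses):
procedure create_temp_table (new_table_nme in varchar2
                           , source_table  in varchar2
                           , where_clause  in varchar2 default null)
IS
    s_query varchar2(500);
BEGIN
    s_query := 'CREATE TABLE ' || new_table_nme
            || ' AS SELECT * FROM ' || source_table;
    -- append the restriction only when the caller supplies one
    IF where_clause IS NOT NULL THEN
        s_query := s_query || ' WHERE ' || where_clause;
    END IF;
    execute immediate s_query;
END;
which I would call as create_temp_table('tmp_suppliers', 'live.suppliers', 'type_id = 1');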
How would I check that the table has actually been created successfully?
I want to trap two other exceptions, namely:
The new table (e.g. 'tmp.clients') already exists; and
The source table doesn't exist.
Does the EXCEPTION block as written handle these cases?
More generally, where can I obtain the SQL error codes and their meanings?
Any suggested improvements to the code would be gratefully received.
You could get rid of a lot of code (gradually!) by using GLOBAL temporary tables.
Execute immediate is not bad practice, but if there are other options they should be used. Global temp tables are common where you want to extract and transform data but, once it is processed, don't need it any more until the next load. Each session can only see the data it inserts, and no redo logs are generated. You can index the data for faster querying if required.
Something like this:
-- Create table
create global temporary table GT_CLIENTS
(
id NUMBER(10) not null,
Client_id NUMBER(10) not null,
modified_by_id NUMBER(10),
transaction_id NUMBER(10),
local_transaction_id VARCHAR2(30) not null,
last_modified_date_tz TIMESTAMP(6) WITH TIME ZONE not null
)
on commit preserve rows;
I recommend the on commit preserve rows option so that you can debug your procedure and see what went into the table.
Usage would be:
INSERT INTO GT_CLIENTS
SELECT * FROM live.clients;
If this is the route you want to take to minimise changes, note that the error for "source table does not exist" is -942, and you will want to stop on it rather than continue, as your temp table would not have been created. Similarly, just continuing when you get an "object already exists" error will be problematic, because you will not have reloaded the table with new data - the create failed, so the table still holds the data from the last run. So I would definitely do some more thinking about your exception handler; one possible shape for it is sketched below.
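One such shape, as a sketch only (it reuses the names from the question's procedure and reloads on ORA-00955 instead of silently continuing):
procedure create_temp_table (new_table_nme in varchar2
                           , source_table  in varchar2)
IS
    s_query        varchar2(500);
    source_missing exception;
    name_in_use    exception;
    pragma exception_init(source_missing, -942);  -- ORA-00942: table or view does not exist
    pragma exception_init(name_in_use,    -955);  -- ORA-00955: name is already used by an existing object
BEGIN
    s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
    execute immediate s_query;
EXCEPTION
    WHEN name_in_use THEN
        -- the create failed, so the table still holds last run's data: reload it
        execute immediate 'TRUNCATE TABLE ' || new_table_nme;
        execute immediate 'INSERT INTO ' || new_table_nme || ' SELECT * FROM ' || source_table;
    WHEN source_missing THEN
        RAISE;  -- no source table: stop the load rather than continue
END;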
That said, I also concur that this is generally not the best way to do things. Creating and dropping objects in a multi-user environment is a disaster in the making, and it seems a silly waste of resources when more appropriate options are available.
I have been googling for a while to find an alternative approach for returning a result set other than returning a ref cursor, but have failed to find one. I have done most of my development in SQL Server, where we don't use cursors unless it is really necessary, but I understand that a ref cursor is a different thing. On top of that, when we return a ref cursor as an output from the database, doesn't it become a connected architecture? So, my dear geeks, can you answer/clear up my confusions as mentioned below:
I want to understand which is the better way of returning a result set to our application (a ref cursor, a SELECT statement with all the joins, or any other option)?
Is using a ref cursor a connected or a disconnected architecture?
Is using a SELECT SQL query better for a disconnected approach?
Thanks in advance.
As usual, it depends on your needs.
Regarding connected/disconnected architecture, it makes almost no difference. After your client application has received the RefCursor and all rows have been fetched (and, preferably, the cursor has been closed), you can disconnect from and reconnect to the database the same way as with a direct SELECT statement.
Consider the following pseudo-code examples:
SELECT EMP_NAME, DEPT_NAME
FROM EMP
JOIN DEPT ON EMP_DEPT_ID = DEPT_ID
WHERE DEPT_ID = :d;
vs.
CREATE FUNCTION GetEmps(deptId IN NUMBER) RETURN SYS_REFCURSOR IS
res SYS_REFCURSOR;
BEGIN
OPEN res FOR
SELECT EMP_NAME, DEPT_NAME
FROM EMP
JOIN DEPT ON EMP_DEPT_ID = DEPT_ID
WHERE DEPT_ID = deptId;
RETURN res;
END;
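For completeness, a PL/SQL caller could consume the RefCursor like this (a sketch; client drivers such as JDBC or ODP.NET expose the same cursor handle):
DECLARE
    c      SYS_REFCURSOR;
    v_emp  EMP.EMP_NAME%TYPE;
    v_dept DEPT.DEPT_NAME%TYPE;
BEGIN
    c := GetEmps(10);  -- 10 is an example department id
    LOOP
        FETCH c INTO v_emp, v_dept;
        EXIT WHEN c%NOTFOUND;
        dbms_output.put_line(v_emp || ' - ' || v_dept);
    END LOOP;
    CLOSE c;
END;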
My personal preference is RefCursors, because:
With a RefCursor, client and server are more separated from each other, i.e. the client just has to know the function name and which attributes it receives (EMP_NAME and DEPT_NAME), nothing else. It does not have to know the table names or any join conditions. Developers of client and server can work more independently of each other.
With a RefCursor you can implement fine-grained security methods. You only have to do grant execute on GetEmps to ... and the client gets EMP_NAME and DEPT_NAME - no more, no less!
With a direct SELECT you have to grant select on both tables, and then the client could also execute, for instance, SELECT SALARY, EMP_NAME FROM EMP (unless you use the quite expensive VPD feature of Oracle). If you like, you can log every single call of the function and add as many constraints as you like.
With a RefCursor the client application (and not the server) can decide how many rows it would like to fetch, in order to optimise response times.
The basic difference is that when you are uncertain about the SQL query you are going to execute at run time, you use a ref cursor, and you can use the same ref cursor for different SQL queries. For example, for the query
select name, id, dob from table where id = :v_id;
here v_id is passed at run time, so we can use a ref cursor; v_id varies at run time.
A plain SELECT statement, on the other hand, is used in a for-each loop when we have a small set of records. When we have a large set of records we can use BULK COLLECT with a ref cursor and limit the rows fetched, as sketched below.
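A sketch of that pattern (the table emp and the column dept_id are hypothetical):
DECLARE
    TYPE t_recs IS TABLE OF emp%ROWTYPE;
    l_recs t_recs;
    rc     SYS_REFCURSOR;
BEGIN
    -- the query text is decided at run time, so a ref cursor is used
    OPEN rc FOR 'select * from emp where dept_id = :v_id' USING 10;
    LOOP
        FETCH rc BULK COLLECT INTO l_recs LIMIT 100;  -- at most 100 rows per fetch
        EXIT WHEN l_recs.COUNT = 0;
        FOR i IN 1 .. l_recs.COUNT LOOP
            NULL;  -- process each row here
        END LOOP;
    END LOOP;
    CLOSE rc;
END;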
In Oracle, all DML statements are cursors. A cursor is simply a set of instructions used by the optimizer to get to the data.
In my Oracle-centric opinion, everything that is done in the database should be done in stored procedures, so that you have a central interface into the database - i.e. if you had two applications updating/retrieving data to/from the same database, you'd only need one set of stored procs on the database, rather than replicating the logic in each application.
With that in mind, ref cursors are the way to go.
However, if you're not going down that route (boo! *{;-) ) there is little difference between a ref cursor and a straight select statement in terms of getting the data to the front end. How you call for the data will be different, of course, but it's the same thing to the database.
All a ref cursor does is basically hold a pointer to a cursor.
It appears as though there's an implementation restriction that forbids the use of forall .. insert in Oracle when used over a database link. This is a simple example to demonstrate:
connect schema/password@db1
create table tmp_ben_test (
a number
, b number
, c date
, constraint pk_tmp_ben_test primary key (a, b)
);
Table created.
connect schema/password@db2
Connected.
declare
type r_test is record ( a number, b number, c date);
type t__test is table of r_test index by binary_integer;
t_test t__test;
cursor c_test is
select 1, level, sysdate
from dual
connect by level <= 10
;
begin
open c_test;
fetch c_test bulk collect into t_test;
forall i in t_test.first .. t_test.last
insert into tmp_ben_test@db1
values t_test(i)
;
close c_test;
end;
/
Very confusingly, this fails in 9i with the following error:
ERROR at line 1: ORA-01400: cannot insert NULL into
("SCHEMA"."TMP_BEN_TEST"."A") ORA-02063: preceding line from DB1
ORA-06512: at line 18
It was only after checking in 11g that I realised this was an implementation restriction.
ERROR at line 18: ORA-06550: line 18, column 4: PLS-00739: FORALL
INSERT/UPDATE/DELETE not supported on remote tables
The really obvious way round this is to change forall .. to:
for i in t_test.first .. t_test.last loop
insert into tmp_ben_test@db1
values t_test(i);
end loop;
but I'd rather keep it down to a single insert if at all possible. Tom Kyte suggests the use of a global temporary table. Inserting the data into a GTT and then sending it over a DB link seems like massive overkill for a set of data that is already in a user-defined type.
Just to clarify, this example is extremely simplistic compared to what is actually happening. There is no way we would be able to do a simple insert into, and there is no way all the operations could be done on a GTT. Large parts of the code have to be done with user-defined types.
Is there another, simpler or less DMLy, way around this restriction?
What restrictions do you face on the remote database? If you can create objects there, you have a workaround: on the remote database, create the collection type and a procedure that takes the collection as a parameter and executes the FORALL statement, for example:
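A sketch of that workaround (the package name remote_ins is made up, and it assumes you can create a package on db1, where tmp_ben_test lives):
-- on db1: the collection type and the FORALL sit next to the table
create or replace package remote_ins as
    type r_test is record (a number, b number, c date);
    type t__test is table of r_test index by binary_integer;
    procedure ins (p_rows in t__test);
end remote_ins;
/
create or replace package body remote_ins as
    procedure ins (p_rows in t__test) is
    begin
        forall i in p_rows.first .. p_rows.last
            insert into tmp_ben_test values p_rows(i);
    end ins;
end remote_ins;
/

-- on db2: bulk collect locally, then make a single remote call
declare
    t_test remote_ins.t__test@db1;
begin
    select 1, level, sysdate
    bulk collect into t_test
    from dual
    connect by level <= 10;

    remote_ins.ins@db1(t_test);
end;
/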
If you create the t__test/r_test types in db2 and then create a public synonym for them on db1, you should be able to call a procedure from db1 to db2, filling t_test and returning it to db1. Then you should be able to insert into your local table.
I'm assuming you would use packaged types and procedures in the real world, not anonymous blocks.
Also, it would not be the ideal solution for big data sets; there a GTT or similar would be better.
I have some trouble when trying to update a table by looping over a cursor which selects from a source table through a dblink.
I have two databases, DB1 and DB2.
They are two different database instances.
And I am using the following statement in DB1:
CURSOR TestCursor IS
SELECT a.*, 'A' TEST_COL_A, 'B' TEST_COL_B
FROM rpt.SOURCE@DB2 a;
BEGIN
For C1 in TestCursor loop
INSERT into RPT.TARGET
(
/*The company_name and cust_id are select from SOURCE table from DB2*/
COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B
)
values
(
C1.COMPANY_NAME, C1.CUST_ID, C1.TEST_COL_A , C1.TEST_COL_B
) ;
End loop;
/*Some code...*/
End;
Everything works fine until I add a column "NEW_COL" to the SOURCE table on DB2.
The inserted data then has the wrong values.
The value of TEST_COL_A, as I expect, should be 'A'.
However, it contains the value of NEW_COL, which I added to the SOURCE table.
And the value of TEST_COL_B contains 'A'.
Has anyone encountered the same issue?
It seems like Oracle caches the table columns when it compiles.
Is there any way to add a column to the source table without recompiling?
According to this:
Oracle Database does not manage dependencies among remote schema objects other than local-procedure-to-remote-procedure dependencies.
For example, assume that a local view is created and defined by a query that references a remote table. Also assume that a local procedure includes a SQL statement that references the same remote table. Later, the definition of the table is altered.
Therefore, the local view and procedure are never invalidated, even if the view or procedure is used after the table is altered, and even if the view or procedure now returns errors when used. In this case, the view or procedure must be altered manually so that errors are not returned. In such cases, lack of dependency management is preferable to unnecessary recompilations of dependent objects.
In this case you aren't quite seeing errors, but the cause is the same. You also wouldn't have a problem if you used explicit column names instead of *, which is usually safer anyway - see the cursor below. If you're using * you can't avoid recompiling (unless, I suppose, the * is the last item in the select list, in which case any extra columns on the end wouldn't cause a problem - as long as their names didn't clash).
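For instance, listing the columns explicitly would pin the projection even if columns are later added to the remote table (a sketch using the names from your cursor):
CURSOR TestCursor IS
SELECT a.COMPANY_NAME, a.CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
FROM rpt.SOURCE@DB2 a;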
I recommend that you use a single set-processing insert statement in DB1 rather than a row-at-a-time cursor FOR loop, for example:
INSERT into RPT.TARGET
select COMPANY_NAME, CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
FROM rpt.SOURCE@DB2
;
Rationale:
Set processing will almost always outperform row-at-a-time processing [which is really slow-at-a-time processing].
A set-processing insert is a scalable solution. If the application needs to scale to tens of thousands or millions of rows, the row-at-a-time solution will not scale.
Also, using the select * construct is dangerous, for the reason you encountered [and other similar reasons].