I was using flat stored procedures (flat meaning not contained in objects) in Oracle to update my tables. For example, I have a table Person with columns Id, FirstName, LastName, Address, Salary. I made a flat procedure Person_UpdFirstName with two parameters: Id and FirstName. Inside the procedure, I find the row in the Person table that matches the Id parameter and update FirstName with the FirstName parameter. Usual stuff, nothing new.
Now I am using Oracle objects. I have an object type PersonType, a UDT with the same fields as the columns in the Person table. I have moved all of the procedures related to the Person table inside the PersonType object; that is, instead of flat procedures I now use member procedures. None of the member procedures takes any parameters; they take their values from the fields of the object. For example, in place of the Person_UpdFirstName flat procedure I now have a member procedure UpdFirstName. It takes no parameters, uses the Id and FirstName fields of the object itself, and updates the Person table as before.
The problem is this: when I was using flat procedures, I passed parameters such as Id and FirstName, so even in a large system with hundreds of tables I could not make a mistake in calling a stored procedure, because the number and types of its parameters are fixed. Now that I am using objects, I have to remember which fields of the object must be filled in; there is no built-in check in the system. This is fine as long as the columns in the Person table are non-nullable, because an unset field would raise an exception anyway, but if the columns are nullable, or when I am comparing values, I can end up with lots of logical errors.
My question is: is there some built-in way to close this door to potential errors? I have some rough ideas, but I am not sure about them:
Some kind of partial objects. My member methods would be forced to take those partial objects as parameters. For example, I could have a partial object PersonUpdFirstNameType with only one field, FirstName, and my UpdFirstName member method would take it as a parameter. Of course it is cumbersome to make a separate partial type for each operation on a table, so I don't really like this solution.
I don't pass objects from C# to Oracle procedures; instead I pass variables as parameters and then manually build (or don't build) Oracle objects as needed.
I have found a way to map Oracle objects to C# classes. I don't need an ORM tool for this; I just add a few attributes to the C# classes and their fields and implement a few interfaces. So I can actually pass C# objects to Oracle procedures and use the "." syntax in the procedures to access the fields that contain the actual data.
I think the problem I am asking about is a general OOP problem, not specific to any particular language. In general: suppose you have a class C with fields F1, F2, F3, F4, F5 and methods M1, M2, M3. M1 operates on some of the fields, M2 on some others, and M3 on fields that may also be touched by M1 or M2. Client code creates objects of C and may fill in any number of the fields (including zero) before calling a method. What happens if the client calls a method before setting the fields that method needs? In C#, the compiler at least reports the use of unassigned local variables as a compile-time error (class fields are default-initialized instead, so an unset reference field fails fast with a runtime exception). There is no such support in the DBMS because of nullable fields. If, say, you are comparing the Id of a table row against the Id field of an object and you forgot to set the field, the row's Id is compared against NULL, no row matches, and no update happens (assuming the usual update-rows-matching-the-id operation).
I just want to know if there is some built-in check in the system to handle such cases.
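To make the silent-failure case above concrete, here is a minimal sketch (the table and column names are made up for illustration):

```sql
-- Hypothetical Person table; the object's Id field was never set, so it is NULL.
update person
   set first_name = 'John'
 where id = null;        -- a NULL comparison is never true: 0 rows updated, no error raised

-- The same thing happens with "where id = :unset_id" when the bind is NULL,
-- which is why an explicit guard is needed:
--   if self.id is null then
--     raise_application_error(-20001, 'Id attribute not set');
--   end if;
```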
I don't have any idea how to get an error at compile time, and I don't know of any other OO language that offers such a feature either (how could the compiler tell when or where the attribute was initialized?).
What you can do is raise exceptions at runtime (somewhat like NullPointerException or ArgumentNullException).
For example:
create or replace type person_o as object
(
  id    number,
  fname varchar2(32),
  lname varchar2(32),
  member procedure update_lname
);
/
create or replace type body person_o is
  member procedure update_lname is
  begin
    if self.lname is null then
      raise_application_error(-20000, 'null attribute');
    end if;
    update persons_table set last_name = self.lname where id = self.id;
    commit;
  end;
end;
/
I am new to Oracle PL/SQL. I found this package, which is called from a trigger, and I just cannot figure out what this simple-looking package code is doing.
It is called from a trigger as below:
IF INSERTING THEN
i := STATE_PKG_OVERRIDE_CN.AffectedRows.COUNT+1;
STATE_PKG_OVERRIDE_CN.AffectedRows(i).IDU := :new.IDU;
STATE_PKG_OVERRIDE_CN.AffectedRows(i).cn := :new.cn;
This is the package. Can somebody please explain the basics of what it is doing? Does it return a value? Change a value? What are AffectedRows, RIDARRAY and EMPTY?
create or replace PACKAGE STATE_PKG_OVERRIDE_CN
AS
TYPE rowid_cn IS RECORD
(
idu dirxml.USR.IDU%TYPE,
cn dirxml.USR.CN%TYPE
);
TYPE RIDARRAY IS TABLE OF rowid_cn INDEX BY BINARY_INTEGER;
AffectedRows RIDARRAY;
EMPTY RIDARRAY;
END;
I have googled EMPTY but found nothing; I believe it is creating a table of records. The trigger is passing in a value of cn or IDU, and I am familiar with those two values. But what is the package doing, or returning? I'm confused.
Cheers
This is a bespoke package belonging to your organisation. (That's why Google wasn't helpful for you.) We cannot tell you for sure what it does or how it's used. But we can guess.
The package has no procedures or functions, it just defines array variables of a bespoke type, which can be used by other program units such as triggers. The trigger you posted assigns values to the array AffectedRows. Presumably this trigger fires FOR EACH ROW. Likely there is another trigger on the same table firing AFTER STATEMENT which reads that array and does some processing, then assigns AffectedRows := EMPTY to reset the array.
The purpose of this infrastructure is to pass state across trigger actions. A common reason for doing this is to work around a mutating table exception. Such packages are risky because the state cannot be guaranteed; for instance, if an insert fails before the AFTER STATEMENT trigger fires, the AffectedRows array is not re-initialised, so subsequent processing will be incorrect (or will fail).
Since 11g Oracle provides compound triggers, which remove the need for this sort of package. Find out more in the Oracle documentation.
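As a rough sketch of the compound-trigger alternative (the table usr and its columns idu and cn are assumed here for illustration), the package-level state moves inside the trigger and is reset automatically for each triggering statement:

```sql
create or replace trigger trg_usr_compound
for insert on usr
compound trigger
  -- state declared here lives only for the duration of one statement
  type rowid_cn is record (idu usr.idu%type, cn usr.cn%type);
  type ridarray is table of rowid_cn index by binary_integer;
  affected_rows ridarray;
  i pls_integer;

  after each row is
  begin
    -- same bookkeeping the original row-level trigger did
    i := affected_rows.count + 1;
    affected_rows(i).idu := :new.idu;
    affected_rows(i).cn  := :new.cn;
  end after each row;

  after statement is
  begin
    for j in 1 .. affected_rows.count loop
      null;  -- process affected_rows(j) here
    end loop;
  end after statement;
end;
/
```

No EMPTY array is needed to reset state: the collection is discarded when the statement completes.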
To investigate further, first check USER_TRIGGERS for other triggers on the table that owns the trigger you mentioned. If that doesn't help, or you want to see whether other tables also use this package, run this query:
select *
from user_dependencies
where referenced_type = 'PACKAGE'
and referenced_name = 'STATE_PKG_OVERRIDE_CN'
Let's take it step by step. It begins with the package, which
declares a type ROWID_CN as a record that contains two values: IDU and CN;
the syntax then requires creating another type (RIDARRAY) based on the previously declared ROWID_CN;
AffectedRows and EMPTY are arrays of type RIDARRAY. Basically, you can imagine each of them as a table with two columns, IDU and CN.
Now, the trigger: the piece of code you posted says that those 3 lines are executed when someone inserts a row into the table the trigger is based on. For example:
create or replace trigger trg_biu_emp
before insert or update on emp
for each row
declare
i number;
begin
if inserting then ...
Apparently, there's some code that is executed when updating, or even deleting rows.
Anyway:
The i := ... line counts the number of elements in the affectedrows array (which is declared in the package) and adds 1. For example, if there were 3 elements, i would be 4.
The affectedrows(i).idu := :new.idu line enters a new row into the array at ordinal position i (4 in our example). As you're inserting a row into the table, the trigger knows the IDU column's :new value and puts it into the array. For example, if you used
insert into emp (idu, cn) values (100, 'A')
then affectedrows(4).idu = 100, while affectedrows(4).cn = 'A'.
EMPTY is probably used in a similar way, e.g. to reset the array. Google can't return anything useful because it is just a custom-made array.
It is my understanding that you cannot use a collection in a WHERE clause unless it is defined at the database level. I have a distinct dislike for random type definitions lying about a schema. It's a religious thing, so don't try to dissuade me.
Types contained within a package are cool because they are easily found and are related to the work at hand. So, having said that, I have a package that defines a structure (currently a table-type collection) that looks like:
TYPE WORD_LIST_ROW IS RECORD(
WORD VARCHAR(255));
TYPE WORD_LIST IS TABLE OF WORD_LIST_ROW;
There is a routine in the package that instantiates and populates an instance of this. It would be useful to be able to use the instantiated object, or some analog thereof, in a WHERE clause.
So, being the clever (or so I thought) programmer, I said: why don't I just create a pipelined function to make a table from the collection? Which I did, and it looks like:
FUNCTION WORD_LIST_TABLE(IN_WORD_LIST WORD_LIST) RETURN WORD_LIST PIPELINED
AS
OUT_WORD_LIST WORD_LIST := WORD_LIST();
BEGIN
FOR I IN 1 .. IN_WORD_LIST.COUNT
LOOP
PIPE ROW(IN_WORD_LIST(I));
END LOOP;
RETURN;
END WORD_LIST_TABLE;
Then, in another routine, I call the function that builds the collection; finally, I use the pipelined function that takes the collection as input in a cursor's WHERE clause, sort of like this:
cursor xyz
is
select * from x_stuff where fieldA in (select word from table(word_list_table(temp_word_list)));
In the loop for the cursor I get an Oracle error: ORA-21700: object does not exist or is marked for delete.
Is there any easy way to build an oracle object that can be used in an Oracle where clause? Basically what I would like to do is;
select * from whatever where fielda in myobject;
The solution is simple: declare the type at schema level using a CREATE TYPE statement, and you will be able to use your collections in SQL statements inside PL/SQL blocks.
If you have declared your TYPE inside a PL/SQL package you cannot use it in your queries inside PL/SQL blocks.
Also, keep in mind that as of Oracle 11.2 only varray and nested table collections can be used in queries; you cannot use associative arrays in queries. In 12c you don't have these restrictions.
For further reference go to Oracle Docs.
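A minimal sketch of the schema-level approach (the table some_table and column field_a are made up; word_list here replaces the package-level type):

```sql
-- Schema-level nested table type: visible to the SQL engine,
-- so no pipelined wrapper function is needed
create or replace type word_list is table of varchar2(255);
/

declare
  temp_word_list word_list := word_list('alpha', 'beta');
  cursor xyz is
    select *
      from some_table                  -- hypothetical table
     where field_a in (select column_value
                         from table(temp_word_list));
begin
  for r in xyz loop
    null;  -- process each matching row
  end loop;
end;
/
```

Note that for a collection of a scalar type, the element is exposed in SQL as the pseudo-column COLUMN_VALUE.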
I want to retrieve the type of the elements a varray stores, through a %TYPE-style attribute or any workaround.
For example, our type is defined like this:
CREATE TYPE "READINGS" AS VARRAY (200) OF NUMBER(21, 6);
(READINGS is a varray with elements of type NUMBER(21,6).)
READINGS is a column in a table INTERVALS. INTERVALS is a central table, and we have batch processes on INTERVALS which execute SQL stored procedures. In the stored procedures we have hard-coded variable declarations mapping to the READINGS varray's element type, which is NUMBER(21,6); for example, a stored procedure has variable declarations like
CONSUMPTION NUMBER(21, 6);
Whenever the varray definition is changed, or the varray is dropped and recreated with a different size and precision (e.g. NUMBER(21,6) is changed to NUMBER(25,9)), we need to change the variable declarations in all batch-process stored procedures.
All I am looking for is making the CONSUMPTION variable declaration refer to the element type of the varray. I want something like this:
CONSUMPTION INTERVALS.READINGS.COLUMN_TYPE%TYPE;
(i.e. some way to refer to the type of the elements stored by the varray)
Why are you creating a table with a VARRAY column in the first place? It would generally make far more sense to create a separate table for READINGS with a foreign key that lets you relate the rows back to the INTERVALS table. You could then easily enough declare columns of type READINGS.COLUMN_NAME%TYPE.
Collections are wildly useful in PL/SQL, but I've never seen a case where they improved on a standard normalized approach to data modeling. I have seen multiple cases where incorporating collections into the data model made the model harder to work with and the code harder to write and maintain.
If you don't want to fix the data model, you can
Declare a SUBTYPE or a packaged variable of type NUMBER(21, 6) that you use as the type for your variable declarations. You'll have to change this definition if and when you change the declaration of the VARRAY type.
Create an object type with a single attribute (a NUMBER(21,6)) and define the VARRAY based on that object type. Then you can declare instances of the object type in your code.
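The SUBTYPE option might be sketched like this (the package name readings_types is made up; the point is a single place to update if the varray is ever redefined):

```sql
create or replace package readings_types as
  -- single source of truth for the varray's element type;
  -- change it here when READINGS changes from NUMBER(21,6)
  subtype reading_t is number(21, 6);
end readings_types;
/

declare
  consumption readings_types.reading_t;  -- instead of a hard-coded NUMBER(21,6)
begin
  consumption := 123.456789;
end;
/
```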
This is not exactly what you asked for, but you can extract the element type's definition as a string, for further use in dynamic SQL:
SELECT
regexp_substr(text, 'VARRAY.*?OF\s+(.+?)(;|\s)*$', 1, 1, 'i', 1)
FROM user_source
WHERE name = 'READINGS'
I have a stored procedure written in Oracle 9i that returns a cursor with different columns depending on a parameter. It's something like this:
CREATE OR REPLACE PROCEDURE ASCHEMA.SP_TWOCURSORS
(
aParam NUMBER,
P_RETURN OUT SYS_REFCURSOR
)
IS
BEGIN
IF aParam = 1 THEN
  OPEN P_RETURN FOR
    SELECT
      a.column1,  -- number
      a.column2   -- varchar2
    FROM
      table1 a;
ELSE
  OPEN P_RETURN FOR
    SELECT
      b.column1,  -- varchar2
      b.column2,  -- number
      b.column3   -- number
    FROM
      table1 b;
END IF;
END;
I have to consume this procedure in PowerBuilder and pass the returned data to DataWindow1 or DataWindow2, depending on which cursor is returned; these datawindows are filled at runtime by the execution of other procedures coming from another source. I can't modify the database objects (e.g. split the sp in two), just the PowerBuilder code. My problem is how to handle this scenario in an elegant way. I have some ideas but don't know if they will work:
Create a DataWindow object that handles every column involved in both cursors returned from the sp, then copy each row to the expected DataWindow.
Create a DataStore and pass the sp with the Create method, then copy the rows in the expected DataWindow.
Execute the procedure dynamically, fetch every row and add each result into a new row of the expected DataWindow.
I haven't tried the first one because there are many columns and it would take a long time. The second looks good, but I don't know how to handle a DataStore with no DataWindow object, or whether that is even possible (1). The third is my last resort. I want to ask before I start implementing because I'm new to PowerBuilder, and even if I won't work on this for long I want to do it the right way.
Thanks for the help.
(1) I have found this article about using custom DataStores, but I don't know whether I can use only one DataStore or should use two. Also, for the Oracle connection I don't use SQLCA but another transaction object, so I don't know how to handle that part.
Keep It Simple.
You know the details of the stored proc. If you are calling this sp from PB, you already know its aParam before the call. Why not define two datawindows, one for each version of the results?
Each DW would have a retrieval argument (the one that is passed to the stored proc) and would get its result from the sp.
At runtime, depending on the retrieval argument and before retrieving the values, assign the corresponding dataobject to the datawindow control: either the DW that suits aParam = 1 or the DW that suits the else part.
I'm working in two different Oracle schemas on two different instances of Oracle. I've defined several types and type collections to transfer data between these schemas. The problem I'm running into is that even though the types have exactly the same definitions (the same scripts were used to create both sets), Oracle sees them as different objects that are not interchangeable.
I thought about casting the incoming remote type object as the same local type but I get an error about referencing types across dblinks.
Essentially, I'm doing the following:
DECLARE
MyType LocalType; -- note, same definition as the RemoteType (same script)
BEGIN
REMOTE_SCHEMA.PACKAGE.PROCEDURE#DBLINK( MyType ); -- MyType is an OUT param
LOCAL_SCHEMA.PACKAGE.PROCEDURE( MyType ); -- IN param
END;
That fails because the remote procedure call can't understand MyType, since Oracle treats LocalType and RemoteType as different object types.
I tried DECLARING MyType as follows as well:
MyType REMOTE_SCHEMA.RemoteType#DBLINK;
but I get another error about referencing types across dblinks. CASTing between the types doesn't work either, because in order to cast I need to reference the remote type across the dblink: same issue, same error. I've also tried using SYS.ANYDATA as the object that crosses between the two instances, but it gets a similar error.
Any ideas?
UPDATE:
Tried declaring the object type on both sides of the DBLINK using the same OID (retrieved manually using SYS_OP_GUID()), but Oracle still "sees" the two objects as different and throws a "wrong number or types of arguments" error.
I have read the Oracle Documentation and it is not very difficult.
You need to add an OID to your type definitions in both databases.
You can use a GUID as OID.
SELECT SYS_OP_GUID() FROM DUAL;
SYS_OP_GUID()
--------------------------------
AE34B912631948F0B274D778A29F6C8C
Now create your UDT in both databases with the SAME OID.
create type testlinktype oid 'AE34B912631948F0B274D778A29F6C8C' as object
( v1 varchar2(10) , v2 varchar2(20) );
/
Now create a table:
create table testlink
( name testlinktype);
insert into testlink values (testlinktype ('RC','AB'));
commit;
Now you can select from the table via the dblink in the other database:
select * from testlink#to_ora10;
NAME(V1, V2)
--------------------------
TESTLINKTYPE('RC', 'AB')
If you get error ORA-21700 when you try to select via the dblink the first time, just reconnect.
I think the underlying issue is that Oracle doesn't know how to automatically serialize/deserialize your custom type over the wire, so to speak.
Your best bet is probably to pass an XML (or other) representation over the link.
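A minimal sketch of that workaround, assuming a hypothetical remote procedure that accepts a CLOB instead of the UDT (the procedure and dblink names are made up; only scalar types cross the link, so the OID-matching problem disappears):

```sql
declare
  payload clob;
begin
  -- serialize the local object's data to an XML document;
  -- XMLFOREST builds one element per value
  select xmlelement("person",
           xmlforest('RC' as "v1", 'AB' as "v2")).getclobval()
    into payload
    from dual;

  -- hypothetical remote procedure taking a CLOB parameter;
  -- the remote side parses the XML and rebuilds its own type
  remote_pkg.accept_person@some_dblink(payload);
end;
/
```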