In Oracle, I have a table users. Within this table, for every user, I want to be able to store an array of attributes of a user, that is, key-value pairs.
I could create an extra table with three columns (a foreign key pointing to the users table, the key, and the value), but I would prefer to keep things simple and not create an additional table for that.
I know that Oracle has varrays, created like this:
CREATE OR REPLACE TYPE example AS VARRAY(20) OF NVARCHAR2(500);
How do I do the same thing, but with a tuple (NVARCHAR2(50), NVARCHAR2(200)) instead of a single NVARCHAR2?
All the resources about Oracle tuples point to either a grouping of columns to be used in a WHERE ... IN clause, or Java-related documentation.
What if you use record ... and not tuple?
DECLARE
  TYPE t_myrecord IS RECORD (field1 VARCHAR2(50), field2 VARCHAR2(200));
  TYPE t_myarray  IS VARRAY(20) OF t_myrecord;
  a_myrecord t_myrecord;
  a_myarray  t_myarray := t_myarray();
BEGIN
  a_myrecord.field1 := 'a';
  a_myrecord.field2 := 'b';
  a_myarray.extend(19);        -- make room for elements
  a_myarray(1) := a_myrecord;  -- store the record in the first slot
END;
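Note that a record-based VARRAY like this lives only in PL/SQL. If the goal from the question is to store the key-value pairs in a column of the users table, the tuple has to be a SQL object type instead; a minimal sketch, where the type names user_attr / user_attr_list and the column name attrs are just illustrative:
CREATE OR REPLACE TYPE user_attr AS OBJECT (
  attr_key   NVARCHAR2(50),
  attr_value NVARCHAR2(200)
);
/
CREATE OR REPLACE TYPE user_attr_list AS VARRAY(20) OF user_attr;
/
-- hypothetical VARRAY column added to the existing users table
ALTER TABLE users ADD (attrs user_attr_list);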
Why did I ask this question?
I have a table whose key consists of a lot of fields. Every time I write a join on it, I miss a field. So I defined a pipelined function that takes the key as an argument, to be sure that I get exactly one element when doing the join.
But the query now takes more time. Table a has an index on some of the fields, but the table type used by the pipelined function does not. I would like to know if it is possible to create an index on some fields of the table%ROWTYPE type.
code:
create table a (a1 integer);

create or replace package p_a
as
  type t_a is table of a%rowtype;
  function f (i_a1 integer) return t_a pipelined;
end;
/

create or replace package body p_a
as
  cursor c_a (i_a1 integer)
  return a%rowtype
  is
    select t.*
    from   a t
    where  t.a1 = i_a1;

  function f (i_a1 integer)
    return t_a
    pipelined
  is
  begin
    for c in c_a (i_a1)
    loop
      pipe row (c);
    end loop;
    return;
  end;
end;
/
with b as( select 1 b1 from dual) select * from b cross apply (table(p_a.f(b.b1)));
the question
I've tried to index the table type by a field of the table, like this:
create table a (a1 integer);

create package p_a2
as
  type t_a is table of a%rowtype index by a.a1%type;
  function f (i_a1 integer) return t_a pipelined;
end;
/
PLS-00315: Implementation restriction: unsupported table index type
Is what I want to do possible? If not, how can I solve the performance problem mentioned in the introduction?
A TYPE is NOT a table and cannot be indexed.
When you do:
create package p_a
as
type t_a iS TABLE of a%ROWTYPE;
end;
/
You are defining a type, and the type is a collection data type; an instance of that type is NOT a physical table but is more like an in-memory array.
When you create a PIPELINED function:
function f(i_a1 integer) return t_a pipelined;
It does NOT return a table; it returns the collection data type.
When you do:
type t_a iS TABLE of a%ROWTYPE index by a.a1%type;
You are NOT creating an index on a table; you are changing to a different collection data type that is an associative array (like a JavaScript object or a Python dictionary) that stores key-value pairs.
An associative array is a PL/SQL data type and (with limited exceptions in later versions for insert, update and delete statements) cannot be used in SQL statements.
When you do:
SELECT * FROM TABLE(SYS.ODCIVARCHAR2LIST('a', 'b', 'c'));
or:
SELECT * FROM TABLE(p_a.f(1));
Then you are passing a collection data type to an SQL statement and the table collection expression TABLE() is treating the collection expression as if it was a table. It is still NOT a table.
If you want to use an index on the table then use the table directly (without a cursor or a pipelined function):
WITH b (b1) AS (
  SELECT 1 FROM DUAL
)
SELECT *
FROM   b
       CROSS APPLY (
         SELECT a.*
         FROM   a
         WHERE  a.a1 = b.b1
       );
I think the first line of your question says it all: "key that have a lot of field". If I understand correctly, the table has a primary key that consists of a large number of columns and because of that writing queries becomes a challenge.
It sounds like you're trying to do something pretty complex that should not be an issue at all.
Take a step back and ask yourself: does this need to be the primary key of the table? Or can you use a surrogate key (identity column, sequence), make that the primary key, and just create a unique index on the set of fields that currently make up the primary key? It will (1) simplify your data model and (2) make writing the queries a lot easier.
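A minimal sketch of that surrogate-key idea, with purely illustrative column names (12c identity syntax; on older versions use a sequence):
create table a (
  a_id integer generated always as identity primary key,  -- surrogate key
  k1   integer,
  k2   integer,
  k3   varchar2(10),
  a1   integer,
  constraint a_natural_key unique (k1, k2, k3)             -- the former multi-column key
);
Joins from other tables then only ever need the single a_id column.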
I am new to PL/SQL programming. I want to create an in-memory data structure which holds a list of records like this:
name, city, phone
john, New York, +1-88686
john, London, +44-5343
john, Hong Kong, +33-6556565
I want to do something like this
create table EmployeeTab(
name varchar2(20),
city varchar2(20),
phone varchar(20)
);
But, I could not find the correct syntax. I am using Oracle 11g.
If you want, you can use a RECORD type.
Example:
type EmployeeTab is record
(name varchar(20),
city varchar(20),
phone varchar(20));
v_EmployeeTab EmployeeTab;
Then you can save your data in this way:
v_EmployeeTab.name := '...';
v_EmployeeTab.city := '...';
v_EmployeeTab.phone := '...';
I suggest you visit:
https://www.tutorialspoint.com/plsql/plsql_records.htm
You can use an index-by table.
If you want an in-memory data structure for the rows of a particular table, you define it like below:
TYPE <typename> IS TABLE OF <tablename>%rowtype INDEX BY binary_integer;
<variablename> <typename>; -- declare a variable of that type
If you want an index-by table over an existing record type, use the record type name directly (%ROWTYPE applies to tables, views and cursors, not to records);
if you want an index-by table for a cursor, you can use cursorname%rowtype.
An index-by table is also useful for passing a set of values to another procedure.
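Putting the record answer and the index-by answer together, here is a minimal self-contained sketch of the in-memory list asked for in the question (the type and variable names are just illustrative):
DECLARE
  TYPE emp_rec IS RECORD (
    name  VARCHAR2(20),
    city  VARCHAR2(20),
    phone VARCHAR2(20)
  );
  TYPE emp_tab IS TABLE OF emp_rec INDEX BY PLS_INTEGER;  -- associative array
  l_emps emp_tab;
  r      emp_rec;
BEGIN
  r.name := 'john'; r.city := 'New York';  r.phone := '+1-88686';    l_emps(1) := r;
  r.name := 'john'; r.city := 'London';    r.phone := '+44-5343';    l_emps(2) := r;
  r.name := 'john'; r.city := 'Hong Kong'; r.phone := '+33-6556565'; l_emps(3) := r;
  FOR i IN 1 .. l_emps.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(l_emps(i).name || ', ' || l_emps(i).city || ', ' || l_emps(i).phone);
  END LOOP;
END;
/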
I have to create a trigger for a table with many columns, and I want to know if there is any way to avoid using the column name after :new and :old. Instead of using the column name explicitly, I want to use an element from a collection that holds the column names of the target table (the table on which the trigger is set).
Line 25 is the one with the binding error:
DBMS_OUTPUT.PUT_LINE('Updating customer id'||col_name(i)||to_char(:new.col_name(i)));
Below you can see my trigger:
CREATE OR REPLACE TRIGGER TEST_TRG
BEFORE INSERT OR UPDATE ON ITEMS
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
DECLARE
  TYPE col_list IS TABLE OF VARCHAR2(60);
  col_name col_list := col_list();
  total    INTEGER;
  counter  INTEGER := 0;
BEGIN
SELECT COUNT(*)
INTO total
FROM user_tab_columns
WHERE table_name = 'ITEMS';
FOR rec IN
(SELECT column_name FROM user_tab_columns WHERE table_name = 'ITEMS'
)
LOOP
col_name.extend;
counter :=counter+1;
col_name(counter) := rec.column_name;
dbms_output.put_line(col_name(counter));
END LOOP;
dbms_output.put_line(TO_CHAR(total));
FOR i IN 1 .. col_name.count
LOOP
IF UPDATING(col_name(i)) THEN
DBMS_OUTPUT.PUT_LINE('Updating customer id'||col_name(i)||to_char(:new.col_name(i)));
END IF;
END LOOP;
END;
After digging more, I have found that it is not possible to dynamically reference the :new.column_name or :old.column_name values in a trigger. Because of this, I will use my code only for INSERT (there is no old value there :-( ) and will write some Java code to generate the UPDATE statements.
I must refine my previous answer based on what Justin Cave said and on my own findings. We can create a dynamic list of values for INSERTING and UPDATING, based on the referencing clause (old and new). For example, I created two nested-table collections of VARCHAR2: one contains the names of all the columns I want to audit, as strings, and the other contains the values of those columns taken through the referencing clause (e.g. :new.). After the INSERTING predicate I built an index-by collection (an associative array) of strings whose key comes from the list of column names and whose value comes from the list of :new values for those columns. Thanks to the index-by collection you have a fully working dynamic list at your disposal. Good luck :-)
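A condensed sketch of that idea, for a hypothetical version of ITEMS that has just the columns ID and NAME (the column names, and the matching :new references, still have to be written out once):
CREATE OR REPLACE TRIGGER items_audit_trg
BEFORE INSERT OR UPDATE ON items
FOR EACH ROW
DECLARE
  TYPE name_list IS TABLE OF VARCHAR2(60);
  TYPE value_map IS TABLE OF VARCHAR2(4000) INDEX BY VARCHAR2(60);
  col_names  name_list := name_list('ID', 'NAME');  -- columns to audit (hypothetical)
  col_values name_list;
  new_vals   value_map;
BEGIN
  -- values listed in the same order as col_names
  col_values := name_list(TO_CHAR(:new.id), :new.name);
  IF INSERTING OR UPDATING THEN
    FOR i IN 1 .. col_names.COUNT LOOP
      new_vals(col_names(i)) := col_values(i);      -- column name -> new value
    END LOOP;
    DBMS_OUTPUT.PUT_LINE('NAME is now: ' || new_vals('NAME'));
  END IF;
END;
/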
PL/SQL:
Intent: My intent was to access the employee tuple object, defined as a cursor below, by using the employee_id as the key.
Problem: I created a cursor, l_employees_cur, and want to create a table type, l_employees_t, as below, but the compiler complains: PLS-00315: Implementation restriction: unsupported table index type.
CURSOR l_employees_cur
IS
  SELECT employee_id, manager_id, first_name, last_name FROM employees;

TYPE l_employees_t IS TABLE OF l_employees_cur%rowtype
  INDEX BY employees.employee_id%TYPE;
The definition of employees.employee_id is:
EMPLOYEE_ID NUMBER(6) NOT NULL
Why can't I do this? Or am I doing something wrong?
From the Oracle documentation:
Associative Arrays
An associative array (formerly called PL/SQL table or index-by table) is a set of key-value pairs. Each key is a unique index, used to locate the associated value with the syntax variable_name(index).
The data type of index can be either a string type or PLS_INTEGER. Indexes are stored in sort order, not creation order. For string types, sort order is determined by the initialization parameters NLS_SORT and NLS_COMP.
I think your mistake is in the declaration of the PL/SQL table.
Why don't you try this one:
type l_employees_t IS TABLE OF l_employees_cur%rowtype INDEX BY pls_integer;
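With that change the array can still be keyed by employee_id: since EMPLOYEE_ID is a NUMBER(6), its values fit in PLS_INTEGER and simply become the keys when the array is filled. A minimal sketch (assuming the standard HR EMPLOYEES table):
DECLARE
  CURSOR l_employees_cur IS
    SELECT employee_id, manager_id, first_name, last_name FROM employees;
  TYPE l_employees_t IS TABLE OF l_employees_cur%ROWTYPE INDEX BY PLS_INTEGER;
  l_employees l_employees_t;
BEGIN
  FOR r IN l_employees_cur LOOP
    l_employees(r.employee_id) := r;   -- the employee_id value becomes the key
  END LOOP;
  IF l_employees.EXISTS(100) THEN      -- direct lookup by employee_id
    DBMS_OUTPUT.PUT_LINE(l_employees(100).last_name);
  END IF;
END;
/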
I also have a question for you:
What is the meaning of EMPLOYEE_ID NOT NULL NUMBER(6) in your code above?
Storing and Retrieving SQL Query Output in a PL/SQL Collection
The example in the OP looks a lot like Oracle's newer sample HR data schema (for those old-timers who know, the successor to the SCOTT/TIGER data model). This solution was developed on an Oracle 11g R2 instance.
The Demo Table Design - EMP
Demonstration Objectives
This example will show how to create a PL/SQL collection from an object TYPE definition. The complex data type is derived from the following cursor definition:
CURSOR l_employees_cur IS
SELECT emp.empno as EMPLOYEE_ID, emp.mgr as MANAGER_ID, emp.ename as LAST_NAME
FROM EMP;
After loading the cursor contents into an index-by collection variable, the last half of the stored procedure contains an optional step which loops back through the collection and displays the data either through DBMS_OUTPUT or an INSERT DML operation on another table.
Stored Procedure Example Source Code
This is the stored procedure used to query the demonstration table, EMP.
create or replace procedure zz_proc_employee is
CURSOR l_employees_cur IS
SELECT emp.empno as EMPLOYEE_ID, emp.mgr as MANAGER_ID, emp.ename as LAST_NAME
FROM EMP;
TYPE employees_tbl_type IS TABLE OF l_employees_cur%ROWTYPE INDEX BY PLS_INTEGER;
employees_rec_var l_employees_cur%ROWTYPE;
employees_tbl_var employees_tbl_type;
v_output_string varchar2(80);
c_output_template constant varchar2(80):=
'Employee: <<EMP>>; Manager: <<MGR>>; Employee Name: <<ENAME>>';
idx integer;
outloop integer;
BEGIN
idx:= 1;
OPEN l_employees_cur;
FETCH l_employees_cur INTO employees_rec_var;
WHILE l_employees_cur%FOUND LOOP
employees_tbl_var(idx):= employees_rec_var;
FETCH l_employees_cur INTO employees_rec_var;
idx:= idx + 1;
END LOOP;
CLOSE l_employees_cur;
-- OPTIONAL (below) Output Loop for Displaying The Array Contents
-- At this point, employees_tbl_var can be handed off or returned
-- for additional processing.
FOR outloop IN 1 .. employees_tbl_var.COUNT LOOP
-- Build the output string:
v_output_string:= replace(c_output_template, '<<EMP>>',
to_char(employees_tbl_var(outloop).employee_id));
v_output_string:= replace(v_output_string, '<<MGR>>',
to_char(employees_tbl_var(outloop).manager_id));
v_output_string:= replace(v_output_string, '<<ENAME>>',
employees_tbl_var(outloop).last_name);
-- dbms_output.put_line(v_output_string);
INSERT INTO zz_output(output_string, output_ts)
VALUES(v_output_string, sysdate);
COMMIT;
END LOOP;
END zz_proc_employee;
I commented out the dbms_output call due to problems with the configuration of my server that were beyond my control. The alternate INSERT into an output table is a quick way of visually verifying that the data from the EMP table found its way successfully into the declared collection variable.
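For reference, running the procedure and checking the result can look like this (zz_output is assumed to have just the two columns used by the INSERT above):
BEGIN
  zz_proc_employee;
END;
/

SELECT output_string, output_ts
FROM   zz_output
ORDER  BY output_ts;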
Results and Discussion of the Solution
Calling the procedure and then querying the output table shows one formatted line per EMP row, confirming that the data made it into the collection.
While the actual purpose behind the access to this table isn't clear in the very terse detail of the OP, I assumed that the first approach was an attempt to understand the use of collections and custom data types for efficient data extraction and handling from structures such as PL/SQL cursors.
The first portion of this example procedure is very reusable, and the initial steps represent a working way of building and loading PL/SQL collections. Notice that even if your own version of this EMP table is different, the only place that requires redefinition is the cursor itself.
Working with types, arrays, nested tables and other collection types will actually simplify work in the long run because of their dynamic nature.
What is the most performant manner to insert data into an object table?
Create Type R_Emp Is Object
(Emp_Id number, Last_Name varchar(50));
create type T_Emp is table of R_Emp;
Then, given an inbound array, insert the values:
V_T_Emp T_Emp := T_Emp();
For i In 1..p_array.COUNT
Loop
  .....  -- best way to load values?
Where is the data coming from? Assuming the data is coming from one or more tables:
SELECT r_emp( emp_id, last_name )
BULK COLLECT INTO v_t_emp
FROM table_name;
If the data is coming from somewhere else, you'll need to tell us where the data is coming from. If p_array and v_t_emp are both collections of type t_emp, you can just assign one to the other directly:
v_t_emp := p_array;
But if you're just copying the data from one collection to another identically defined collection, there normally wouldn't be any reason to introduce the second collection.
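If p_array is instead a collection of some other element type (say, a record collection handed over by the caller), the loop from the question can still build the object collection with EXTEND and the r_emp constructor. A sketch in which the source type src_tab and its fields id and name are purely hypothetical:
DECLARE
  TYPE src_rec IS RECORD (id NUMBER, name VARCHAR2(50));
  TYPE src_tab IS TABLE OF src_rec;
  p_array src_tab := src_tab();   -- pretend this arrived as a parameter
  v_t_emp t_emp   := t_emp();
BEGIN
  p_array.EXTEND;
  p_array(1).id := 1; p_array(1).name := 'SMITH';
  v_t_emp.EXTEND(p_array.COUNT);
  FOR i IN 1 .. p_array.COUNT LOOP
    v_t_emp(i) := r_emp(p_array(i).id, p_array(i).name);  -- object constructor per element
  END LOOP;
END;
/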