In Oracle Application Express I am trying to pass a customer id (the primary key column) from a report page to a form page, and have it automatically incremented whenever the user moves from the report page to the form page. So I am trying to create a function that returns the value of the final row's id plus 1. However, I get this error:
Error at line 8: PLS-00103: Encountered the symbol "(" when expecting one of the following:
, from
6. INTO number
7. FROM (SELECT a.cust_id, max(cust_id) over() as max_pk FROM customer a)
8. WHERE cust_id = max_pk;
9. number:=(cust_id+1);
10. END;
This is my PL/SQL function:
CREATE OR REPLACE FUNCTION cust_id_incremental(cust_id IN number)
RETURN number;
BEGIN
SELECT cust_id
INTO number
FROM (SELECT a.cust_id, max(cust_id) over() as max_pk FROM customer a)
WHERE cust_id = max_pk;
number:=(cust_id+1);
END;
NUMBER is a reserved word; you need to call the variable something else, like l_number.
You need to define a variable to contain the result, populate it, then return it. The corrected function might look like this:
CREATE OR REPLACE FUNCTION cust_id_incremental (cust_id IN NUMBER)
RETURN NUMBER IS
v_cust_id NUMBER;
BEGIN
SELECT cust_id
INTO v_cust_id
FROM (SELECT a.cust_id, MAX (cust_id) OVER () AS max_pk
FROM customer a)
WHERE cust_id = max_pk;
v_cust_id := v_cust_id + 1;
RETURN v_cust_id;
END;
Taking a second look, the structure of this function is far more convoluted than it needs to be. Unless I'm missing something, you could accomplish the same thing with a function that looks like this:
CREATE OR REPLACE FUNCTION cust_id_incremental
RETURN NUMBER IS
v_cust_id NUMBER;
BEGIN
SELECT MAX (cust_id) + 1
INTO v_cust_id
FROM customer a;
RETURN v_cust_id;
END;
It occurred to me as I was writing this that you may have a namespace issue: your original function accepts a parameter CUST_ID, then queries a table with a column CUST_ID. All of the references to CUST_ID in the query will refer to the column, not the parameter. If the parameter serves a purpose, it is being obscured.
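If you do need to keep the parameter and reference it inside the query, one common way to disambiguate is to qualify it with the function name. A minimal sketch, not your original logic (the function name cust_id_demo and the COUNT are purely illustrative):
CREATE OR REPLACE FUNCTION cust_id_demo (cust_id IN NUMBER)
RETURN NUMBER IS
   v_result NUMBER;
BEGIN
   -- the bare name resolves to the column; the prefixed name resolves to the parameter
   SELECT COUNT (*)
     INTO v_result
     FROM customer c
    WHERE c.cust_id = cust_id_demo.cust_id;
   RETURN v_result;
END;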
However, you really shouldn't be doing this. If two sessions call this procedure simultaneously and insert the resulting value into a new row, you'll have a primary key violation. This is the entire reason that sequences exist. As sequences are not transactional, multiple sessions that access the same sequence will get different values.
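For reference, a minimal sketch of the sequence-based alternative (the sequence name cust_id_seq and the trigger name are made up; adjust to your schema):
CREATE SEQUENCE cust_id_seq START WITH 1 INCREMENT BY 1;

-- either reference the sequence directly when inserting ...
INSERT INTO customer (cust_id) VALUES (cust_id_seq.NEXTVAL);

-- ... or let a trigger assign the key so the application never supplies it
CREATE OR REPLACE TRIGGER customer_bi
BEFORE INSERT ON customer
FOR EACH ROW
BEGIN
   :NEW.cust_id := cust_id_seq.NEXTVAL;
END;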
Related
I'm working on a procedure that will declare a variable, take a value from a sequence that increments, and insert that value along with other parameters into a table. I thought I had it all worked out, but then I got hit with PLS-00103: Encountered the symbol "DECLARE" and Encountered the symbol "end-of-file". I feel like I'm so close, so any help would be majorly appreciated! Thank you!
create or replace procedure Order_Create(date_order string, cust_id char, total float, employ_id number)
is
DECLARE NewKey;
BEGIN
NewKey := order_auto_inc.nextval;
UPDATE Progressive_Keys set Order_Limit = NewKey;
insert into ORDERS VALUES (Progressive_Keys.Order_Limit, to_date(date_order, 'yyyy-mm-dd'), cust_id, total, employ_id);
commit;
END;
Remove the DECLARE; it's not needed in a stored procedure (as documented in the manual).
A variable declaration needs a data type.
As the parameter date_order is supposed to be a date, it should be declared with that type.
You can't access the column order_limit outside of a statement that uses the table progressive_keys, so you need to use the variable in the INSERT statement as well.
It's also good coding practice to always list the target columns in an INSERT statement. (Note that I just invented some column names for the orders table; you will have to adjust them to reflect the real names in your table.)
create or replace procedure Order_Create(date_order date, cust_id varchar, total float, employ_id number)
is
NewKey number;
BEGIN
NewKey := order_auto_inc.nextval;
UPDATE Progressive_Keys set Order_Limit = NewKey;
insert into ORDERS (some_key, order_date, customer_id, total, employee_id)
VALUES (newkey, date_order, cust_id, total, employ_id);
commit;
END;
The UPDATE looks a bit strange, as it will update all rows in the table progressive_keys, not just one row.
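If progressive_keys is meant to track one counter per target table, the update would normally be restricted; a sketch only, where key_name is an invented column:
UPDATE Progressive_Keys
   SET Order_Limit = NewKey
 WHERE key_name = 'ORDERS';   -- key_name does not exist in the posted code; adjust to your real schema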
I need to create a procedure which will take a collection as input and compare the data with staging table data row by row for every field (approx. 50 columns).
Business logic :
Whenever a staging table column value mismatches the corresponding collection variable value, I need to update 'FAIL' into the staging table's STATUS column and the reason into the REASON column for that row.
If matched, I need to update 'SUCCESS' in the STATUS column.
Payload will be approx 500 rows in each call.
I have created the sample script below:
PKG Specification :
CREATE OR REPLACE
PACKAGE process_data
IS
TYPE pass_data_rec
IS
record
(
p_eid employee.eid%type,
p_ename employee.ename%type,
p_salary employee.salary%type,
p_dept employee.dept%type
);
type p_data_tab IS TABLE OF pass_data_rec INDEX BY binary_integer;
PROCEDURE comp_data(inpt_data IN p_data_tab);
END;
PKG Body:
CREATE OR REPLACE
PACKAGE body process_data
IS
PROCEDURE comp_data (inpt_data IN p_data_tab)
IS
status VARCHAR2(10);
reason VARCHAR2(1000);
cnt1 NUMBER;
v_eid employee_copy.eid%type;
v_ename employee_copy.ename%type;
BEGIN
FOR i IN 1..inpt_data.count
LOOP
SELECT ec1.eid,ec1.ename,COUNT(*) over () INTO v_eid,v_ename,cnt1
FROM employee_copy ec1
WHERE ec1.eid = inpt_data(i).p_eid;
IF cnt1 > 0 THEN
IF (v_eid=inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
UPDATE employee_copy SET status = 'SUCCESS' WHERE eid = inpt_data(i).p_eid;
ELSE
UPDATE employee_copy SET status = 'FAIL' WHERE eid = inpt_data(i).p_eid;
END IF;
ELSE
NULL;
END IF;
END LOOP;
COMMIT;
status :='success';
EXCEPTION
WHEN OTHERS THEN
status:= 'fail';
--reason:=sqlerrm;
END;
END;
But with this approach I have the issues mentioned below.
I need to declare a separate local variable for each column value.
I need to compare all the variable data using the AND operator. I am not sure whether this is the correct way, because with 50 columns the IF condition becomes very heavy.
IF (v_eid=inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
I need to update the REASON column with the first mismatched column name when any column data is mismatched for that row; with this approach I am not able to achieve that.
Please suggest any other good way to achieve this requirement.
Edit :
There is only one table at my end, i.e. the target table. The input will come from another source as a collection object.
REVISED Answer
You could load the records into a temp table, but unless you want additional processing it's not necessary. AFAIK there is no way to identify the offending column (the first one only) without slogging through column by column. However, your other concern, having to declare a variable per column, is not necessary: you can declare a single variable defined as %ROWTYPE, which gives you access to each column by name.
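For illustration, a short sketch of the %ROWTYPE idea (the eid value 100 is just a placeholder):
DECLARE
   l_row employee_copy%ROWTYPE;
BEGIN
   SELECT *
     INTO l_row
     FROM employee_copy
    WHERE eid = 100;
   -- every column is now available by name, no per-column variables needed
   DBMS_OUTPUT.PUT_LINE (l_row.ename || ' earns ' || l_row.salary);
END;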
Looping through an array of data to find the occasional error is just bad (imho) when SQL is available to eliminate the good ones in one fell swoop. And it's available here: even though your input is an array, we can treat it as a table with the TABLE operator, which lets a collection be queried as though it were a database table. So the MINUS operator can still be employed. The following routine sets the appropriate status and identifies the first mismatched column for each entry in the input array. It keeps your original definition in the package spec, but replaces the comp_data procedure.
create or replace package body process_data
is
procedure comp_data (inpt_data in p_data_tab)
is
-- define local array to hold status and reason for each input row.
type status_reason_r is record
( eid employee_copy.eid%type
, status employee_copy.status%type
, reason employee_copy.reason%type
);
type status_reason_t is
table of status_reason_r
index by pls_integer;
status_reason status_reason_t;   -- associative array; no constructor needed
-- define error array to contain the eid for each that have a mismatched column
type error_eids_t is table of employee_copy.eid%type ;
error_eids error_eids_t;
current_matched_indx pls_integer;
/*
Helper function to identify 1st mismatched column in error row.
Here is where we slug our way through each column to find the first column
value mismatch. Note: there is no attempt here to validate the column order; for
our purpose we simply proceed in the order of the input data type definition.
*/
function identify_mismatch_column(matched_indx_in pls_integer)
return varchar2
is
employee_copy_row employee_copy%rowtype;
mismatched_column employee_copy.reason%type;
begin
select *
into employee_copy_row
from employee_copy
where employee_copy.eid = inpt_data(matched_indx_in).p_eid;
-- now begins the task of finding the mismatched column.
if employee_copy_row.ename != inpt_data(matched_indx_in).p_ename
then
mismatched_column := 'employee_copy.ename';
elsif employee_copy_row.salary != inpt_data(matched_indx_in).p_salary
then
mismatched_column := 'employee_copy.salary';
elsif employee_copy_row.dept != inpt_data(matched_indx_in).p_dept
then
mismatched_column := 'employee_copy.dept';
-- elsif continue until ALL columns tested
end if;
return mismatched_column;
exception
-- NO_DATA_FOUND is the one error that cannot actually be reported in the employee_copy table.
-- It occurs when an eid exists in the input data but does not exist in employee_copy.
when NO_DATA_FOUND
then
dbms_output.put_line( 'Employee (eid)='
|| inpt_data(matched_indx_in).p_eid
|| ' does not exist in employee_copy table.'
);
return 'employee_copy.eid ID is NOT in table';
end identify_mismatch_column;
/*
Helper function to find specified eid in the initial inpt_data array
Since the resulting array of mismatching eids derives from a select without a sort
there is no guarantee the index values actually match. Nor can we sort to build
the error array, as there is no way to know the order of eid in the initial array.
The following helper identifies the index value in the input array for the specified
eid in error.
*/
function match_indx(eid_in employee_copy.eid%type)
return pls_integer
is
l_at pls_integer := 1;
l_searching boolean := true;
begin
while l_at <= inpt_data.count
loop
exit when eid_in = inpt_data(l_at).p_eid;
l_at := l_at + 1;
end loop;
if l_at > inpt_data.count
then
raise_application_error( -20199, 'Internal error: Find index for ' || eid_in ||' not found');
end if;
return l_at;
end match_indx;
-- Main
begin
-- initialize status table for each input entry.
-- additionally this results in a status_reason table in a 1:1 relationship with the input array.
for i in 1..inpt_data.count
loop
status_reason(i).eid := inpt_data(i).p_eid;
status_reason(i).status :='SUCCESS';
end loop;
/*
We can assume the majority of data in the input array is valid, meaning the columns match.
We'll eliminate all the valid rows by selecting each and then MINUSing those that match on
every column. To accomplish this we cast the input with the TABLE function, allowing its use in SQL.
The following produces an array of eids that have at least 1 column mismatch.
*/
select p_eid
bulk collect into error_eids
from (select p_eid, p_ename, p_salary, p_dept from TABLE(inpt_data)
minus
select eid, ename, salary, dept from employee_copy
) exs;
/*
The error_eids array now contains the eid for each mismatched data item.
Mark the status as failed, then begin the long hard process of identifying
the first column causing the mismatch.
The following loop used the nested functions to slug the way through.
This keeps the main line logic clear.
*/
for i in 1 .. error_eids.count -- if all inpt_data rows match then count is 0 and we bypass the entire loop
loop
current_matched_indx := match_indx(error_eids(i));
status_reason(current_matched_indx).status := 'FAIL';
status_reason(current_matched_indx).reason := identify_mismatch_column(current_matched_indx);
end loop;
-- update employee_copy with appropriate status for each row in the input data.
-- Except for any eid that is in the error eid table but doesn't exist in the employee_copy table.
forall i in inpt_data.first .. inpt_data.last
update employee_copy
set status = status_reason(i).status
, reason = status_reason(i).reason
where eid = inpt_data(i).p_eid;
end comp_data;
end process_data;
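For completeness, a hedged sketch of how the procedure might be called (the sample values are invented and assume numeric eid, salary and dept columns):
DECLARE
   l_input process_data.p_data_tab;
BEGIN
   l_input(1).p_eid    := 100;
   l_input(1).p_ename  := 'SMITH';
   l_input(1).p_salary := 5000;
   l_input(1).p_dept   := 10;

   l_input(2).p_eid    := 101;
   l_input(2).p_ename  := 'JONES';
   l_input(2).p_salary := 6000;
   l_input(2).p_dept   := 20;

   process_data.comp_data (l_input);
END;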
There are a couple of other techniques used here that you may want to look into if you are not familiar with them:
Nested Functions. There are 2 functions defined and used in the procedure.
Bulk Processing. That is Bulk Collect and Forall.
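If you have not used them before, here is a minimal generic sketch of that bulk pattern (just a plain status update, using the table and columns from your example):
DECLARE
   TYPE t_eid_tab IS TABLE OF employee_copy.eid%TYPE;
   l_eids t_eid_tab;
BEGIN
   -- BULK COLLECT: fetch a whole result set into a collection in one round trip
   SELECT eid
     BULK COLLECT INTO l_eids
     FROM employee_copy;

   -- FORALL: bind the whole collection to one DML statement instead of looping
   FORALL i IN 1 .. l_eids.COUNT
      UPDATE employee_copy
         SET status = 'SUCCESS'
       WHERE eid = l_eids(i);
END;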
Good Luck.
ORIGINAL Answer
It is NOT necessary to compare each column, nor to build a string by concatenating. As you indicated, comparing 50 columns becomes pretty heavy, so let the DBMS do most of the lifting. The MINUS operator does exactly what you need.
... the MINUS operator, which returns only unique rows returned by the
first query but not by the second.
Using that, this task needs only 2 updates: one to mark "fail" and one to mark "success". So try:
create table e( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
);
create table stage ( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
, status varchar2(20)
, reason varchar2(20)
);
-- create package spec and body
create or replace package process_data
is
procedure comp_data;
end process_data;
create or replace package body process_data
is
procedure comp_data
is
begin
update stage
set status='failed'
, reason='No matching e row'
where e_id in ( select e_id
from (select e_id, col1, col2 from stage
minus
select e_id, col1, col2 from e
) exs
);
update stage
set status='success'
where status is null;
end comp_data;
end process_data;
-- test
-- populate tables
insert into e(e_id, col1, col2)
select 1,'ABC','def' from dual union all
select 2,'No','Not any' from dual union all
select 3,'ok', 'best ever' from dual union all
select 4,'xx','zzzzzz' from dual;
insert into stage(e_id, col1, col2)
select 1,'ABC','def' from dual union all
select 2,'No','Not any more' from dual union all
select 4,'yy', 'zzzzzz' from dual union all
select 5,'no e','nnnnn' from dual;
-- run procedure
begin
process_data.comp_data;
end;
-- check results
select * from stage;
Don't ask. Yes, you must list every column you wish to compare in each of the queries involved in the MINUS operation.
I know the documentation link is old (10gR2), but actually finding Oracle documentation is a royal pain. The MINUS operator still functions the same in 19c, though.
I am trying to use a cursor here, and I would like to know how I can access the cursor field in the SELECT column list.
I have an implementation as below,
create or replace TYPE "TABLE_TYPE_SAMPLE" AS OBJECT(
ENTITY_NAME VARCHAR2(100)
);
create or replace TYPE "TABLE_SAMPLE" AS TABLE OF TABLE_TYPE_SAMPLE;
CREATE OR REPLACE FUNCTION segmentFields(
txnId VARCHAR2)
RETURN TABLE_SAMPLE
IS
attValue VARCHAR2(20);
curStr VARCHAR2(20);
flexTable TABLE_SAMPLE := TABLE_TYPE_SAMPLE();
CURSOR cur_seg
IS
(SELECT colA
FROM table1 -- (table name has column colA)
WHERE id = txnId
);
BEGIN
FOR cur_recd IN cur_seg
LOOP
curStr := cur_recd.colA;
SELECT curStr into attValue FROM PER_PEOPLE_GROUPS;
flexTable.EXTEND;
flexTable(flexTable.count) := (TABLE_TYPE_SAMPLE(attValue)) ;
END LOOP;
RETURN flexTable;
END;
The function compiled without errors, but when I try to run the query below
select * from table(segmentFields(480));
I get the below error,
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "SEGMENTFIELDS", line 19
01422. 00000 - "exact fetch returns more than requested number of rows"
*Cause: The number specified in exact fetch is less than the rows returned.
*Action: Rewrite the query or change number of rows requested
I want to understand, what is wrong with this implementation.
Thank you.
The problem is with the select into. Not sure why that's there in the first place. The value from your cursor is available in cur_recd.cola and you can use it directly.
create or replace function segmentfields(txnid varchar2) return table_sample is
flextable table_sample := table_sample();
cursor cur_seg is(
select cola
from table1 -- (table name has column colA)
where id = txnid);
begin
for cur_recd in cur_seg
loop
flextable.extend;
flextable(flextable.count) := (table_type_sample(cur_recd.cola));
end loop;
return flextable;
end;
This query has no WHERE clause:
SELECT curStr into attValue FROM PER_PEOPLE_GROUPS;
That means it will return hits for all the rows in PER_PEOPLE_GROUPS. The SELECT ... INTO construct populates a single variable and so requires a query which returns exactly one row. The ORA-01422 message indicates that you're not executing an exact fetch, obviously because PER_PEOPLE_GROUPS has more than one row.
Several possible solutions, depending on what you're trying to achieve:
Add a restriction of some kind so that you only return one row from PER_PEOPLE_GROUPS.
Use BULK COLLECT to populate an array instead (a short sketch follows after this list).
Replace the SELECT with a simple assignment flexTable(flexTable.count) := (TABLE_TYPE_SAMPLE(cur_recd.colA))
On the face of it, discarding the SELECT seems the best option as it doesn't provide you with any information. However, it also seems likely that you are trying to implement some other business logic which isn't expressed in the posted code, so probably you need to make several changes.
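For what it's worth, a hedged sketch of the BULK COLLECT option mentioned above; selecting the variable curStr still just repeats its value for every row, so this mainly shows the mechanics:
DECLARE
   curStr    VARCHAR2(20) := 'some value';
   TYPE t_values IS TABLE OF VARCHAR2(20);
   attValues t_values;
BEGIN
   -- BULK COLLECT accepts any number of rows, so ORA-01422 cannot occur
   SELECT curStr
     BULK COLLECT INTO attValues
     FROM PER_PEOPLE_GROUPS;
   DBMS_OUTPUT.PUT_LINE (attValues.COUNT || ' rows fetched');
END;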
The error is in this row:
SELECT curStr into attValue FROM PER_PEOPLE_GROUPS;
How many rows are in PER_PEOPLE_GROUPS? The error indicates that there is more than one. You may need to add a condition in the WHERE clause.
I want to create a procedure in PL/SQL that has 5 steps. Steps 1 and 2 execute first and return an ID. In step 3, we have a SELECT statement with a condition on that returned ID. I then want to take all of the results of that SELECT statement and use them in a JOIN in another SELECT statement, and use THOSE results in a third SELECT statement, again using a JOIN. From what I've seen, I can't use a CURSOR in JOIN statements. Some of my co-workers have suggested that I save the results in a CURSOR and then use a loop to iterate through each row and use that data for the next SELECT. However, since I'm going to do 2 selects, this would create a huge nest of inner loops, and that's exactly what I'm trying to avoid.
Another suggestion was to use temporary tables to store the data. However, this procedure could be executed at the same time by many users, and the tables' data would conflict. Right now I'm looking at local temporary tables that supposedly filter the data according to the session, but I'm not really sure I want to create dummy tables for my procedures, since I want to avoid leaving clutter in the schema (this procedure is for a custom part of the application). Is there a standard way of doing this? Any ideas?
Sample:
DECLARE
USERID INT := 1000000;
TEXT1 VARCHAR(100);
TEXT_INDEX INT;
CURSOR NODES IS SELECT * FROM NODE_TABLE WHERE DESCRIPTION LIKE TEXT || '%';
CURSOR USERS IS SELECT * FROM USERGROUPS JOIN NODES ON NODES.ID = USERGROUPS.ID;
BEGIN
SELECT TEXT INTO TEXT1 FROM TABLE_1 WHERE ID = USERID;
TEXT_INDEX = INSTR(TEXT, '-');
TEXT = SUBSTR(TEXT, 0, TEXT_INDEX);
OPEN NODES;
OPEN USERS;
END;
NOTE: This does NOT work. Oracle doesn't support joins between cursors.
NOTE2: This CAN be done in a single query but for the sake of argument (and in my real use case) I want to break those steps down in a procedure. The sample code is a depiction of what I'm trying to achieve IF joins between cursors worked. But they don't and I'm looking for an alternative.
I ended up using a function (although a procedure could be used as well) along with schema-level table types. Things I've learned and that one should pay attention to:
PL/SQL functions that you call from SQL can only return types that have been declared in the schema in advance. You can't create such a function that returns something like MY_TABLE%ROWTYPE; even though it seems the type information is available, it is not accepted. You instead have to create a custom schema-level type matching MY_TABLE%ROWTYPE if you want to return it.
Oracle treats tables of declared types differently from tables of %ROWTYPE. This confused the hell out of me at first but from what I've gathered this is how it works.
DECLARE TYPE MY_CUSTOM_TABLE IS TABLE OF MY_TABLE%ROWTYPE;
This declares a collection of MY_TABLE rows. In order to populate it we must use BULK COLLECT INTO from a SQL statement that queries MY_TABLE. The resulting collection CANNOT be used in JOIN statements, is not queryable in SQL, and CANNOT be returned by a function.
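A brief sketch of that pattern, assuming a table called MY_TABLE exists:
DECLARE
   TYPE my_custom_table IS TABLE OF my_table%ROWTYPE;
   l_rows my_custom_table;
BEGIN
   SELECT *
     BULK COLLECT INTO l_rows
     FROM my_table;
   -- fine for PL/SQL processing, but not usable in a SQL JOIN or as a function result
   FOR i IN 1 .. l_rows.COUNT LOOP
      NULL;   -- process l_rows(i) here
   END LOOP;
END;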
-- created once at schema level, outside any PL/SQL block:
CREATE TYPE MY_CUSTOM_TYPE AS OBJECT (COL_A NUMBER, COL_B NUMBER);
CREATE TYPE MY_CUSTOM_TABLE AS TABLE OF MY_CUSTOM_TYPE;
-- then, inside the PL/SQL block:
DECLARE
my_custom_tab MY_CUSTOM_TABLE;
This creates my_custom_tab, which is a table of a schema-level type (not a locally declared collection), and once populated it can be queried using TABLE(my_custom_tab) in the FROM clause. As a type declared in advance in the schema, this CAN be returned from a function. However, it cannot be populated the same way as the local collection above. If we want to populate it with data from an existing table that has 2 number columns, we cannot simply do SELECT * INTO my_custom_tab FROM DOUBLE_NUMBER_TABLE, since my_custom_tab hasn't been initialized and doesn't contain enough rows to receive the data. And if we don't know how many rows a query returns, we can't initialize it. The trick to populating the table is to use the CAST command to cast our select result set as a MY_CUSTOM_TABLE and THEN select it into the variable.
SELECT CAST(MULTISET(SELECT COL_A, COL_B FROM DOUBLE_NUMBER_TABLE) AS MY_CUSTOM_TABLE) INTO my_custom_tab FROM DUAL
Now we can easily use my_custom_tab in queries etc through the use of the TABLE() function.
SELECT * FROM TABLE(my_custom_tab)
is valid.
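Putting the pieces together, a minimal end-to-end sketch under the same assumptions (DOUBLE_NUMBER_TABLE and the function name get_pairs are placeholders):
-- assumes MY_CUSTOM_TYPE and MY_CUSTOM_TABLE have been created at schema level as shown above
CREATE OR REPLACE FUNCTION get_pairs RETURN MY_CUSTOM_TABLE IS
   l_tab MY_CUSTOM_TABLE;
BEGIN
   SELECT CAST (MULTISET (SELECT COL_A, COL_B FROM DOUBLE_NUMBER_TABLE)
                AS MY_CUSTOM_TABLE)
     INTO l_tab
     FROM dual;
   RETURN l_tab;
END;

-- callers can then query the result like a table:
SELECT * FROM TABLE (get_pairs);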
You can do such decomposition in many ways, but all of them have a significant performance penalty in comparison with a single SQL statement.
Maintainability improvements are also questionable and depend on the specific situation.
To review all the possibilities, please look through the documentation.
Below are some possible variants, based on this simple logic:
1. calculate an Oracle user name prefix based on a given id;
2. get all users whose name starts with this prefix;
3. find all tables owned by the users from step 2;
4. count the total number of tables found.
1. Pipelined functions
Prepare types to be used by functions:
create or replace type TUserRow as object (
username varchar2(30),
user_id number,
created date
)
/
create or replace type TTableRow as object (
owner varchar2(30),
table_name varchar2(30),
status varchar2(8),
logging varchar2(3)
-- some other useful fields here
)
/
create or replace type TUserList as table of TUserRow
/
create or replace type TTableList as table of TTableRow
/
Simple function to find prefix by user id:
create or replace function GetUserPrefix(piUserId in number) return varchar2
is
vUserPrefix varchar2(30);
begin
select substr(username,1,3) into vUserPrefix
from all_users
where user_id = piUserId;
return vUserPrefix;
end;
/
Function searching for users:
create or replace function GetUsersPipe(
piNameStart in varchar2
)
return TUserList pipelined
as
vUserList TUserList;
begin
for cUsers in (
select *
from
all_users
where
username like piNameStart||'%'
)
loop
pipe row( TUserRow(cUsers.username, cUsers.user_id, cUsers.created) ) ;
end loop;
return;
end;
Function searching for tables:
create or replace function GetUserTablesPipe(
piUserNameStart in varchar2
)
return TTableList pipelined
as
vTableList TTableList;
begin
for cTables in (
select *
from
all_tables tab_list,
table(GetUsersPipe(piUserNameStart)) user_list
where
tab_list.owner = user_list.username
)
loop
pipe row ( TTableRow(cTables.owner, cTables.table_name, cTables.status, cTables.logging) );
end loop;
return;
end;
Usage in code:
declare
vUserId number := 5;
vTableCount number;
begin
select count(1) into vTableCount
from table(GetUserTablesPipe(GetUserPrefix(vUserId)));
dbms_output.put_line('Users with name started with "'||GetUserPrefix(vUserId)||'" owns '||vTableCount||' tables');
end;
2. Simple table functions
This solution use same types as a variant with pipelined functions above.
Function searching for users:
create or replace function GetUsers(piNameStart in varchar2) return TUserList
as
vUserList TUserList;
begin
select TUserRow(username, user_id, created)
bulk collect into vUserList
from
all_users
where
username like piNameStart||'%'
;
return vUserList;
end;
/
Function searching for tables:
create or replace function GetUserTables(piUserNameStart in varchar2) return TTableList
as
vTableList TTableList;
begin
select TTableRow(owner, table_name, status, logging)
bulk collect into vTableList
from
all_tables tab_list,
table(GetUsers(piUserNameStart)) user_list
where
tab_list.owner = user_list.username
;
return vTableList;
end;
/
Usage in code:
declare
vUserId number := 5;
vTableCount number;
begin
select count(1) into vTableCount
from table(GetUserTables(GetUserPrefix(vUserId)));
dbms_output.put_line('Users with name started with "'||GetUserPrefix(vUserId)||'" owns '||vTableCount||' tables');
end;
3. cursor - xml - cursor
This is a specific case which may be implemented without user-defined types, but it has a big performance penalty, involves unneeded type conversion, and has low maintainability.
Function searching for users:
create or replace function GetUsersRef(
piNameStart in varchar2
)
return sys_refcursor
as
cUserList sys_refcursor;
begin
open cUserList for
select * from all_users
where username like piNameStart||'%'
;
return cUserList;
end;
Function searching for tables:
create or replace function GetUserTablesRef(
piUserNameStart in varchar2
)
return sys_refcursor
as
cTableList sys_refcursor;
begin
open cTableList for
select
tab_list.*
from
(
XMLTable('/ROWSET/ROW'
passing xmltype(GetUsersRef(piUserNameStart))
columns
username varchar2(30) path '/ROW/USERNAME'
)
) user_list,
all_tables tab_list
where
tab_list.owner = user_list.username
;
return cTableList;
end;
Usage in code:
declare
vUserId number := 5;
vTableCount number;
begin
select count(1) into vTableCount
from
XMLTable('/ROWSET/ROW'
passing xmltype(GetUserTablesRef(GetUserPrefix(vUserId)))
columns
table_name varchar2(30) path '/ROW/TABLE_NAME'
)
;
dbms_output.put_line('Users with name started with "'||GetUserPrefix(vUserId)||'" owns '||vTableCount||' tables');
end;
Of course, all variants may be mixed, but SQL looks better at least for simple cases:
declare
vUserId number := 5;
vUserPrefix varchar2(100);
vTableCount number;
begin
-- Construct prefix from Id
select max(substr(user_list.username,1,3))
into vUserPrefix
from
all_users user_list
where
user_list.user_id = vUserId
;
-- Count number of tables owned by users with name started with vUserPrefix string
select
count(1) into vTableCount
from
all_users user_list,
all_tables table_list
where
user_list.username like vUserPrefix||'%'
and
table_list.owner = user_list.username
;
dbms_output.put_line('Users with name started with "'||vUserPrefix||'" owns '||vTableCount||' tables');
end;
P.S. All code is for demonstration purposes only: no optimizations and so on.
I have an Oracle 12c database with a table containing an identity column:
CREATE TABLE foo (
id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
bar NUMBER
)
Now I want to insert into the table using PL/SQL. Since in practice the table has many columns, I use %ROWTYPE:
DECLARE
x foo%ROWTYPE;
BEGIN
x.bar := 3;
INSERT INTO foo VALUES x;
END;
However, it gives me this error:
ORA-32795: cannot insert into a generated always identity column
ORA-06512: at line 5
Since it is very good for code readability and maintainability, I do not want to stop using %ROWTYPE. Since I under no circumstances want to allow anything but the automatically generated IDs, I do not want to lift the GENERATED ALWAYS restriction.
This article suggests that the only way to be able to use %ROWTYPE is to switch to GENERATED BY DEFAULT ON NULL. Is there no other way to fix this?
The only thing I can think of, since you're on 12c, is to make the identity column INVISIBLE, like the code below.
The problem is that it makes getting a %ROWTYPE with the id a little more difficult, but it's doable.
Of course, it may also confuse other people using your table to not see a primary key!
I don't think I'd do this, but it is an answer to your question, for what that's worth.
DROP TABLE t;
CREATE TABLE t ( id number invisible generated always as identity,
val varchar2(30));
insert into t (val) values ('A');
DECLARE
record_without_id t%rowtype;
CURSOR c_with_id IS SELECT t.id, t.* FROM t;
record_with_id c_with_id%rowtype;
BEGIN
record_without_id.val := 'C';
INSERT INTO t VALUES record_without_id;
-- If you want ID, you must select it explicitly
SELECT id, t.* INTO record_with_id FROM t WHERE rownum = 1;
DBMS_OUTPUT.PUT_LINE(record_with_id.id || ', ' || record_with_id.val);
END;
/
SELECT id, val FROM t;
You can create a view and insert there:
CREATE OR REPLACE VIEW V_FOO AS
SELECT BAR -- all columns apart from virtual columns
FROM foo;
DECLARE
x V_FOO%ROWTYPE;
BEGIN
x.bar := 3;
INSERT INTO V_FOO VALUES x;
END;
I think that a mix of the former answers - using a view with an invisible identity column - is an optimal way to accomplish this task:
create table foo (id number generated always as identity primary key, memo varchar2 (32))
;
create or replace view fooview (id invisible, memo) as select * from foo
;
<<my>> declare
r fooview%rowtype;
id number;
begin
r.memo := 'first row';
insert into fooview values r
returning id into my.id
;
dbms_output.put_line ('inserted '||sql%rowcount||' row(s) id='||id);
end;
/
inserted 1 row(s) id=1