PLSQL Cursor For Loop - oracle

I am attempting to perform a trend analysis in PL/SQL using a FOR loop nested inside a cursor FOR loop. The goal is to return actors (for the years 2000-2013) who have acted in at least 8 movies within a 5-year window.
For example, a desired output would be: Wahlberg, Mark played in 10 movies between 2009 and 2013.
Here is the error I receive:
Here is the code I'm working with so far:
DECLARE
t movie.yr%TYPE;
actor_id actor.id%TYPE;
total INTEGER;
name actor.name%TYPE;
CURSOR c_actor IS
select *
from (select actor.name AS name, count(movie.title) AS total
from actor, movie, casting
where movie.id = casting.movie_id
and actor.id = casting.actor_id
and movie.yr >= 2000 and movie.yr <=2013
group by actor.name
order by count(movie.title) DESC)
where rownum <= 10;
BEGIN
for v_actor in c_actor
LOOP
for t in 2000 .. 2009
LOOP
select name, total
into name, total
from actor, movie
where movie.yr between t and t+4
and actor_id = v_actor.actor_id
and total >= 8
group by name;
dbms_output.put_line(name||' played in '||total||' movies between '||t||' and '||t+4);
END LOOP;
END LOOP;
END;

It seems you overcomplicated it. This should work:
begin
for v_actor in (select a.name, count(*) total
from actor a join casting c on a.id = c.actor_id
join movie m on m.id = c.movie_id
where m.yr between 2000 and 2013
group by a.name
having count(*) >= 8
)
loop
dbms_output.put_line(v_actor.name ||' acted ' || v_actor.total ||' times');
end loop;
end;
I think you created a question a couple of hours ago (and deleted it), making the same mistakes. For example: you declared a variable named total and, at the same time, used the same name as a column alias in the cursor's SELECT statement. You want to display the value fetched by the cursor, not the variable itself; fetching into that variable is only done when you explicitly OPEN/FETCH the cursor, not within a cursor FOR loop. With a cursor FOR loop, you use the loop's record variable to display those values.
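As for the original 5-year-window goal, a single query can do the whole analysis without PL/SQL loops; here is a minimal sketch, assuming the same actor/movie/casting tables from the question, where the windows 2000-2004 through 2009-2013 are generated with CONNECT BY:
select a.name, w.start_yr, w.start_yr + 4 as end_yr, count(*) as total
from (select 2000 + level - 1 as start_yr
      from dual
      connect by level <= 10) w            -- windows 2000-2004 .. 2009-2013
join movie m   on m.yr between w.start_yr and w.start_yr + 4
join casting c on c.movie_id = m.id
join actor a   on a.id = c.actor_id
group by a.name, w.start_yr
having count(*) >= 8
order by a.name, w.start_yr;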

You don't select actor_id in your cursor query.
select *
from
(
  select a.id AS actor_id, a.name AS name, count(m.title) AS total
  from actor a
  join casting c
    on a.id = c.actor_id
  join movie m
    on m.id = c.movie_id
  where m.yr >= 2000
    and m.yr <= 2013
  group by a.id, a.name
  order by count(m.title) desc
)
where rownum <= 10;

Related

Need help in output for Oracle PLSQL Procedure

I am currently working on an Oracle PL/SQL procedure to list the project numbers, titles, and names of employees who work on a project.
I am able to write a procedure that gets this info. However, it can only output one project at a time, like this:
1001 Computation
Alvin
Peter
How can I change my code to output all of the entries at the same time, printing them like this:
[Fragment Example][Showing only 1st 3 entries]
1001 Computation: Alvin, Peter
1002 Study methods: Bob, Robert
1003 Racing car: Robert
[Current Code]
create or replace procedure PROJECTGROUPS(projectid IN WorksOn.P#%TYPE)
is
PID Project.P#%TYPE;
PNAME Project.PTitle%TYPE;
ENAME Employee.Name%TYPE;
CURSOR query is
select Employee.Name from Employee
left outer join WorksOn On Employee.E# = WorksOn.E#
where WorksOn.P# = projectid
order by Employee.Name ASC
fetch first 20 rows only;
--
--
begin
select P#, PTitle into PID, PNAME from project where project.p# = projectid;
DBMS_OUTPUT.PUT_LINE(PID || ' ' || PNAME);
--
open query;
loop
fetch query into ENAME;
if query%NOTFOUND then exit;
end if;
DBMS_OUTPUT.PUT_LINE(ENAME);
end loop;
close query;
end PROJECTGROUPS;
You can directly use it in a single query and loop through it as follows:
CREATE OR REPLACE PROCEDURE PROJECTGROUPS (
PROJECTID IN WORKSON.P#%TYPE
) IS
BEGIN
FOR I IN (
SELECT PROJECT_NAME,
PTITLE,
LISTAGG(EMP_NAME,
',') WITHIN GROUP(
ORDER BY EMP_NAME
) AS EMP_NAMES
FROM (
SELECT P.P# PROJECT_NAME,
P.PTITLE,
E.NAME EMP_NAME,
ROW_NUMBER() OVER(
PARTITION BY P.P#
ORDER BY E.NAME
) AS RN
FROM PROJECT P
JOIN WORKSON W
ON W.P# = P.P#
JOIN EMPLOYEE E
ON E.E# = W.E#
WHERE P.P# = PROJECTID
)
WHERE RN <= 20
GROUP BY PROJECT_NAME,
PTITLE
) LOOP
DBMS_OUTPUT.PUT_LINE(I.PROJECT_NAME
|| ' '
|| I.PTITLE
|| ' : '
|| I.EMP_NAMES);
END LOOP;
END PROJECTGROUPS;
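A hypothetical call, assuming the project from the question's sample data, would then print the whole group on one line:
begin
    PROJECTGROUPS(1001);
end;
/
-- expected output, per the question's sample data:
-- 1001 Computation : Alvin,Peter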
Also, your query is not actually an outer join, because you put the condition on the outer-joined table in the WHERE clause:
left outer join WorksOn On Employee.E# = WorksOn.E# -- you want an outer join
where WorksOn.P# = projectid -- but the outer join is converted to an inner join
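If an outer join is really intended (listing every employee, whether or not they work on that project), the filter has to move into the ON clause; a minimal sketch using the question's tables:
select Employee.Name
from Employee
left outer join WorksOn
  on Employee.E# = WorksOn.E#
 and WorksOn.P# = projectid   -- filtering here keeps employees with no matching row
order by Employee.Name;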

Table variable is filled only with one value

I have a stored procedure which should return several results - but it returns only one row. I think it's the last row in the result set.
I am not sure, but I think the problem is in this line of code:
select chi.id bulk collect into v_numbers from dual;
and that this line somehow overwrites all previous results (there are several of them for each loop). How can I insert into v_numbers without overwriting the previous results? I know that it's also wrong to insert only one row, but I haven't found a solution to insert several rows from chi.
PROCEDURE GET_ATTRIBUTES(
P_AUTH_USE_ID IN NUMBER,
P_CATEGORY_ID IN NUMBER,
P_VERSION_ID IN NUMBER,
P_RESULT OUT TYPES.CURSOR_TYPE
) IS
v_numbers sys.odcinumberlist := null;
BEGIN
FOR item IN
(SELECT ID FROM INV_SRV WHERE SRV_CATEGORY_ID IN
(
SELECT id
FROM inv_srv_category
START WITH parent_category_id = P_CATEGORY_ID
CONNECT BY PRIOR id = parent_category_id
) OR SRV_CATEGORY_ID = P_CATEGORY_ID)
LOOP
for chi in (select s.id
from inv_srv s
start with s.parent_srv_id = item.id
connect by prior s.id = s.parent_srv_id
)
loop
select chi.id bulk collect into v_numbers from dual; --> here I should insert all rows from that loop, but I don't know how
end loop;
END LOOP;
OPEN P_RESULT FOR SELECT t.column_value from table(v_numbers) t; --> only one row is returned
END;
Use BULK COLLECT and FORALL for bulk inserts and better performance. The FORALL statement will allow the DML to be run for each row in the collection without requiring a context switch each time, thus improving the overall performance.
CREATE OR REPLACE PROCEDURE get_attributes (
p_auth_use_id IN NUMBER,
p_category_id IN NUMBER,
p_version_id IN NUMBER,
p_result OUT types.cursor_type
) IS
v_numbers sys.odcinumberlist := NULL;
BEGIN
SELECT s.id
BULK COLLECT --> Bulk collect all values
INTO v_numbers
FROM inv_srv s
start with s.parent_srv_id in (
SELECT ID FROM INV_SRV
WHERE SRV_CATEGORY_ID IN
(
SELECT id
FROM inv_srv_category
START WITH parent_category_id = P_CATEGORY_ID
CONNECT BY PRIOR id = parent_category_id
)
OR SRV_CATEGORY_ID = P_CATEGORY_ID)
connect by prior s.id = s.parent_srv_id;
FORALL i IN 1..v_numbers.COUNT
INSERT INTO your_table VALUES ( v_numbers(i) ); --> Bulk insert
END;
Every time the loop executes, v_numbers will be repopulated again and again. So either 1) use v_numbers.extend; v_numbers(v_numbers.last) := your_value; inside the loop, or 2) write everything as a single bulk collect:
select s.id
bulk collect into v_numbers
from inv_srv s
start with s.parent_srv_id in (SELECT ID FROM INV_SRV
WHERE SRV_CATEGORY_ID IN
(
SELECT id
FROM inv_srv_category
START WITH parent_category_id = P_CATEGORY_ID
CONNECT BY PRIOR id = parent_category_id
)
OR SRV_CATEGORY_ID = P_CATEGORY_ID)
connect by prior s.id = s.parent_srv_id
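For completeness, here is a minimal sketch of option 1, keeping the original nested loops from the question but appending to the collection instead of overwriting it (note the collection must be initialized before EXTEND):
v_numbers := sys.odcinumberlist();   -- must be initialized before EXTEND
FOR item IN (SELECT ID FROM INV_SRV
             WHERE SRV_CATEGORY_ID IN (SELECT id
                                       FROM inv_srv_category
                                       START WITH parent_category_id = P_CATEGORY_ID
                                       CONNECT BY PRIOR id = parent_category_id)
                OR SRV_CATEGORY_ID = P_CATEGORY_ID)
LOOP
    FOR chi IN (SELECT s.id
                FROM inv_srv s
                START WITH s.parent_srv_id = item.id
                CONNECT BY PRIOR s.id = s.parent_srv_id)
    LOOP
        v_numbers.EXTEND;                     -- grow the collection by one slot
        v_numbers(v_numbers.LAST) := chi.id;  -- append instead of overwriting
    END LOOP;
END LOOP;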
This may be considered an improper use of PL/SQL loops (often associated with catastrophic performance) in a situation where a pure SQL solution exists.
Why don't you simply define the cursor as follows:
OPEN P_RESULT FOR
select s.id
from inv_srv s
start with s.parent_srv_id in
(SELECT ID FROM INV_SRV WHERE SRV_CATEGORY_ID IN
(SELECT id
FROM inv_srv_category
START WITH parent_category_id = P_CATEGORY_ID
CONNECT BY PRIOR id = parent_category_id
) OR SRV_CATEGORY_ID = P_CATEGORY_ID)
connect by prior s.id = s.parent_srv_id
;
The query is constructed from your outer and inner loops so that it returns the same result.
The transformation may not be trivial in the general case and must be carefully tested, but the performance gain may be high.

For update of this query expression is not allowed (Cursor)

I am running the query below:
Select location, sum(units) as Units
from
( Select a.from_loc as location, sum(units) as units
from emp a, dept b
where a.id=b.id
Union all
Select a.to_loc as location, sum(units) as units
from emp a, dept b
where a.id=b.id)
group by location;
The above query is giving me data in the below format.
Location | sum(Units)
--------------------
100 | 350
200 | 450
Now I need to update another table, Class, with the units given by the above query.
Class has Location as its primary key column and also has a units column.
I tried to create a cursor, but it's throwing an error that FOR UPDATE cannot be used.
Here is a snippet of the cursor code:
Declare
V_location number(20);
V_units (20),
Cursor c1 is
Select location, sum(units) as Units
from
( Select a.from_loc as location, sum(units) as units
from emp a, dept b
where a.id=b.id
Union all
Select a.to_loc as location, sum(units) as units
from emp a, dept b
where a.id=b.id)
group by location -----above select query
for update;
Begin
Open c1;
Loop
Fetch c1 into v_location, v_units;
Exit when c1%notfound;
Update class set units=v_units
where location=v_location;
End loop;
Close c1;
End;
It's throwing: for update of this query expression is not allowed.
Could someone please let me know what I am doing wrong here, or suggest some other approach to update the class table.
Would this do any good?
I presume that the FOR UPDATE clause causes the problem.
I removed the whole DECLARE section and used your SELECT statement in a cursor FOR loop, as it is easier to maintain (no opening, closing, exiting, ...).
BEGIN
FOR cur_r IN ( SELECT location, SUM (units) AS units
FROM (SELECT a.from_loc AS location, SUM (units) AS units
FROM emp a, dept b
WHERE a.id = b.id
UNION ALL
SELECT a.to_loc AS location, SUM (units) AS units
FROM emp a, dept b
WHERE a.id = b.id)
GROUP BY location)
LOOP
UPDATE class
SET units = cur_r.units
WHERE location = cur_r.location;
END LOOP;
END;
[EDIT, after reading a comment]
IF-THEN-ELSE logic is done using CASE (or DECODE); for example:
update class set
units = case when location between 1 and 100 then cur_r.units / 10
else cur_r.units / 20
end
where location = cur_r.location
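Since the question also asked for some other approach to update the class table, a single MERGE avoids the row-by-row loop entirely. This is only a sketch, assuming the question's emp/dept/class tables and that the inner aggregates need the GROUP BY clauses the question omitted:
merge into class c
using ( select location, sum(units) as units
        from ( select a.from_loc as location, sum(units) as units
               from emp a, dept b
               where a.id = b.id
               group by a.from_loc
               union all
               select a.to_loc as location, sum(units) as units
               from emp a, dept b
               where a.id = b.id
               group by a.to_loc )
        group by location ) s
on (c.location = s.location)
when matched then
  update set c.units = s.units;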

Putting values into a collection for different date ranges

I am writing a PL/SQL procedure which gives the count of a query based on date range values. I want to get the date range dynamically, and I have written a cursor for that.
I am using a collection and getting the counts for each month. The problem I am facing is that the collection is populated with the count of the last month alone; I want to get the counts for all months. Can anyone help?
This is the procedure I have written:
create or replace
Procedure Sample As
Cursor C1 Is
With T As (
select to_date('01-JAN-17') start_date,
Last_Day(Add_Months(Sysdate,-1)) end_date from dual
)
Select To_Char(Add_Months(Trunc(Start_Date,'mm'),Level - 1),'DD-MON-YY') St_Date,
to_char(add_months(trunc(start_date,'mm'),level),'DD-MON-YY') ed_date
From T
Connect By Trunc(End_Date,'mm') >= Add_Months(Trunc(Start_Date,'mm'),Level - 1);
Type T_count_Group_Id Is Table Of number;
V_count_Group_Id T_count_Group_Id;
Begin
For I In C1
Loop
Select Count(Distinct c1) bulk collect Into V_Count_Group_Id From T1
Where C2 Between I.St_Date And I.Ed_Date;
End Loop;
For J In V_Count_Group_Id.First..V_Count_Group_Id.Last
Loop
Dbms_Output.Put_Line(V_Count_Group_Id(J));
end loop;
END SAMPLE;
Your bulk collect query is replacing the contents of the collection each time around the loop; it doesn't append to the collection (if that's what you expected). So after your loop you are only seeing the result of the last bulk collect, which is the latest month from your cursor.
You're also apparently comparing dates as strings, which isn't a good idea (unless c2 is stored as a string - which is even worse). And as BETWEEN is inclusive, you risk including data for the first day of each month in two counts, if the stored time portion is midnight. It's safer to use >= and < checks for date ranges.
You don't need to use a cursor to get the dates and then run individual queries inside that cursor; you can just join your current cursor query to the target table, using an outer join to allow for months with no matching data. Your cursor seems to be looking for all months in the current year, up to the start of the current month, so that could perhaps be simplified to:
with t as (
select add_months(trunc(sysdate, 'YYYY'), level - 1) as st_date,
add_months(trunc(sysdate, 'YYYY'), level) as ed_date
from dual
connect by level < extract(month from sysdate)
)
select t.st_date, t.ed_date, count(distinct t1.c1)
from t
left join t1 on t1.c2 >= t.st_date and t1.c2 < t.ed_date
group by t.st_date, t.ed_date
order by t.st_date;
You can use that to populate your collection:
declare
type t_count_group_id is table of number;
v_count_group_id t_count_group_id;
begin
with t as (
select add_months(trunc(sysdate, 'YYYY'), level - 1) as st_date,
add_months(trunc(sysdate, 'YYYY'), level) as ed_date
from dual
connect by level < extract(month from sysdate)
)
select count(distinct t1.c1)
bulk collect into v_count_group_id
from t
left join t1 on t1.c2 >= t.st_date and t1.c2 < t.ed_date
group by t.st_date, t.ed_date
order by t.st_date;
for j in v_count_group_id.first..v_count_group_id.last
loop
dbms_output.put_line(v_count_group_id(j));
end loop;
end;
/
although as it only stores/shows the counts, without saying which month they belong to, that might not ultimately be what you really need. As the counts are ordered, you at least know that the first element in the collection represents January, I suppose.
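If the month does matter, one option (a sketch, still assuming the same t1(c1, c2) table as above) is to collect the month's start date alongside the count in a collection of records:
declare
  type t_month_count is record (st_date date, cnt number);
  type t_month_counts is table of t_month_count;
  v_counts t_month_counts;
begin
  with t as (
    select add_months(trunc(sysdate, 'YYYY'), level - 1) as st_date,
           add_months(trunc(sysdate, 'YYYY'), level) as ed_date
    from dual
    connect by level < extract(month from sysdate)
  )
  select t.st_date, count(distinct t1.c1)
  bulk collect into v_counts
  from t
  left join t1 on t1.c2 >= t.st_date and t1.c2 < t.ed_date
  group by t.st_date, t.ed_date
  order by t.st_date;

  for j in 1 .. v_counts.count
  loop
    dbms_output.put_line(to_char(v_counts(j).st_date, 'MON-YYYY') || ': ' || v_counts(j).cnt);
  end loop;
end;
/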

how to make selecting random rows in oracle faster with table with millions of rows

Is there a way to make selecting random rows faster in Oracle with a table that has millions of rows? I tried to use sample(x) and dbms_random.value and it's taking a long time to run.
Thanks!
Using an appropriate value for sample(x) is the fastest way you can do it. It's block-random and row-random within blocks, so if you only want one random row:
select dbms_rowid.rowid_relative_fno(rowid) as fileno,
dbms_rowid.rowid_block_number(rowid) as blockno,
dbms_rowid.rowid_row_number(rowid) as offset
from (select rowid from [my_big_table] sample (.01))
where rownum = 1
I'm using a subpartitioned table, and I'm getting pretty good randomness even grabbing multiple rows:
select dbms_rowid.rowid_relative_fno(rowid) as fileno,
dbms_rowid.rowid_block_number(rowid) as blockno,
dbms_rowid.rowid_row_number(rowid) as offset
from (select rowid from [my_big_table] sample (.01))
where rownum <= 5
FILENO BLOCKNO OFFSET
---------- ---------- ----------
152 2454936 11
152 2463140 32
152 2335208 2
152 2429207 23
152 2746125 28
I suspect you should probably tune your SAMPLE clause to use an appropriate sample size for what you're fetching.
Start with Adam's answer first, but if SAMPLE just isn't fast enough, even with the ROWNUM optimization, you can use block samples:
....FROM [table] SAMPLE BLOCK (0.01)
This applies the sampling at the block level instead of for each row. This does mean that it can skip large swathes of data from the table so the sample percent will be very rough. It's not unusual for a SAMPLE BLOCK with a low percentage to return zero rows.
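For example, combining it with the ROWNUM limit from the earlier answer (my_big_table is a placeholder name):
select *
from my_big_table sample block (0.01)
where rownum <= 5;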
Here's the same question on AskTom:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6075151195522
If you know how big your table is, use sample block as described above. If you don't, you can modify the routine below to get however many rows you want.
Copied from: http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6075151195522#56174726207861
create or replace function get_random_rowid
( table_name varchar2
) return urowid
as
sql_v varchar2(100);
urowid_t dbms_sql.urowid_table;
cursor_v integer;
status_v integer;
rows_v integer;
begin
for exp_v in -6..2 loop
exit when (urowid_t.count > 0);
if (exp_v < 2) then
sql_v := 'select rowid from ' || table_name
|| ' sample block (' || power(10, exp_v) || ')';
else
sql_v := 'select rowid from ' || table_name;
end if;
cursor_v := dbms_sql.open_cursor;
dbms_sql.parse(cursor_v, sql_v, dbms_sql.native);
dbms_sql.define_array(cursor_v, 1, urowid_t, 100, 0);
status_v := dbms_sql.execute(cursor_v);
loop
rows_v := dbms_sql.fetch_rows(cursor_v);
dbms_sql.column_value(cursor_v, 1, urowid_t);
exit when rows_v != 100;
end loop;
dbms_sql.close_cursor(cursor_v);
end loop;
if (urowid_t.count > 0) then
return urowid_t(trunc(dbms_random.value(0, urowid_t.count)));
end if;
return null;
exception when others then
if (dbms_sql.is_open(cursor_v)) then
dbms_sql.close_cursor(cursor_v);
end if;
raise;
end;
/
show errors
The solution below is not an exact answer to this question, but in many scenarios you want to select a row, use it for some purpose, and then update its status to "used" or "done" so that you do not select it again.
Solution:
The query below is useful, but if your table is large you will definitely face a performance problem with it (I just tried and saw this myself).
SELECT * FROM
( SELECT * FROM table
ORDER BY dbms_random.value )
WHERE rownum = 1
So if you restrict with a rownum like below, you can work around the performance problem. By increasing the rownum limit you can reduce the chance of collisions, but in this case you will always get rows from the same first 1000 rows. If you get a row from those 1000 and update its status to "USED", you will almost always get a different row every time you query for "ACTIVE" rows.
SELECT * FROM
( SELECT * FROM table
where rownum < 1000
and status = 'ACTIVE'
ORDER BY dbms_random.value )
WHERE rownum = 1
Update the row's status after selecting it. If you cannot update it, that means another transaction has already used it; then you should try to get a new row and update its status. By the way, the chance of two different transactions getting the same row is about 0.001, since rownum is 1000.
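A minimal sketch of that select-then-mark pattern, assuming a hypothetical my_table with id and status columns:
declare
  v_row my_table%rowtype;
begin
  -- pick a random-ish ACTIVE row from the first 1000
  select * into v_row
  from ( select *
         from my_table
         where rownum < 1000
           and status = 'ACTIVE'
         order by dbms_random.value )
  where rownum = 1;

  -- try to claim it; 0 rows updated means another session got there first
  update my_table
     set status = 'USED'
   where id = v_row.id
     and status = 'ACTIVE';

  if sql%rowcount = 0 then
    dbms_output.put_line('Row already taken - pick another and retry.');
  end if;
end;
/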
Someone said sample(x) is the fastest way.
But for me, this method works slightly faster than the sample(x) method.
It should take a fraction of a second (0.2 in my case) no matter what the size of the table is. If it takes longer, the hints (--+ leading(e) use_nl(e t) rowid(t)) can help.
SELECT *
FROM My_User.My_Table
WHERE ROWID = (SELECT MAX(t.ROWID) KEEP(DENSE_RANK FIRST ORDER BY dbms_random.value)
FROM (SELECT o.Data_Object_Id,
e.Relative_Fno,
e.Block_Id + TRUNC(Dbms_Random.Value(0, e.Blocks)) AS Block_Id
FROM Dba_Extents e
JOIN Dba_Objects o ON o.Owner = e.Owner AND o.Object_Type = e.Segment_Type AND o.Object_Name = e.Segment_Name
WHERE e.Segment_Name = 'MY_TABLE'
AND(e.Segment_Type, e.Owner, e.Extent_Id) =
(SELECT MAX(e.Segment_Type) AS Segment_Type,
MAX(e.Owner) AS Owner,
MAX(e.Extent_Id) KEEP(DENSE_RANK FIRST ORDER BY Dbms_Random.Value) AS Extent_Id
FROM Dba_Extents e
WHERE e.Segment_Name = 'MY_TABLE'
AND e.Owner = 'MY_USER'
AND e.Segment_Type = 'TABLE')) e
JOIN My_User.My_Table t
ON t.Rowid BETWEEN Dbms_Rowid.Rowid_Create(1, Data_Object_Id, Relative_Fno, Block_Id, 0)
AND Dbms_Rowid.Rowid_Create(1, Data_Object_Id, Relative_Fno, Block_Id, 32767))
Version with retries when no rows returned:
WITH gen AS ((SELECT --+ inline leading(e) use_nl(e t) rowid(t)
MAX(t.ROWID) KEEP(DENSE_RANK FIRST ORDER BY dbms_random.value) Row_Id
FROM (SELECT o.Data_Object_Id,
e.Relative_Fno,
e.Block_Id + TRUNC(Dbms_Random.Value(0, e.Blocks)) AS Block_Id
FROM Dba_Extents e
JOIN Dba_Objects o ON o.Owner = e.Owner AND o.Object_Type = e.Segment_Type AND o.Object_Name = e.Segment_Name
WHERE e.Segment_Name = 'MY_TABLE'
AND(e.Segment_Type, e.Owner, e.Extent_Id) =
(SELECT MAX(e.Segment_Type) AS Segment_Type,
MAX(e.Owner) AS Owner,
MAX(e.Extent_Id) KEEP(DENSE_RANK FIRST ORDER BY Dbms_Random.Value) AS Extent_Id
FROM Dba_Extents e
WHERE e.Segment_Name = 'MY_TABLE'
AND e.Owner = 'MY_USER'
AND e.Segment_Type = 'TABLE')) e
JOIN MY_USER.MY_TABLE t ON t.ROWID BETWEEN Dbms_Rowid.Rowid_Create(1, Data_Object_Id, Relative_Fno, Block_Id, 0)
AND Dbms_Rowid.Rowid_Create(1, Data_Object_Id, Relative_Fno, Block_Id, 32767))),
Retries(Cnt, Row_Id) AS (SELECT 1, gen.Row_Id
FROM Dual
LEFT JOIN gen ON 1=1
UNION ALL
SELECT Cnt + 1, gen.Row_Id
FROM Retries
LEFT JOIN gen ON 1=1
WHERE Retries.Row_Id IS NULL AND Retries.Cnt < 10)
SELECT *
FROM MY_USER.MY_TABLE
WHERE ROWID = (SELECT Row_Id
FROM Retries
WHERE Row_Id IS NOT NULL)
Can you use pseudorandom rows?
select * from (
select * from ... where... order by ora_hash(rowid)
) where rownum<100
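One caveat on the ora_hash approach: with no seed it is deterministic, so the same rows come back on every run. Passing a different seed value as the third argument gives a different (but still repeatable) ordering; my_big_table is a placeholder name:
select *
from ( select *
       from my_big_table
       order by ora_hash(rowid, 4294967295, 42) )  -- change the seed for a different ordering
where rownum < 100;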
