SQL Server stored procedure slows every execution - performance

I've got a stored procedure which does many selects and updates with some cursors.
When I execute the procedure the first time, it takes about 30 seconds. The second execution takes about 1 minute, the third about 2 minutes.
Every execution slows the procedure further; by now it takes about 10 minutes.
What is going wrong?
Variables:
declare @StatistikStatus nvarchar(100)
declare @SQL as nvarchar(MAX)
declare @Datum as nvarchar(50) -- date in nvarchar format
declare @Datumdatetime datetime -- date in datetime format
declare @tickethistorieID as uniqueidentifier
declare @id int -- ID of the appointment evaluation; needed for entries that have several appointments per ticket
declare @nextTermin datetime -- needed for entries that have several appointments per ticket
declare @status nvarchar(100)
declare @statusdiff as nvarchar(100)
declare @vorStatus as nvarchar(100)
declare @lastid as int
declare @tickethistoriemerker nvarchar(40)
declare @statistikstatusmerker nvarchar(100)
DECLARE @TicketID uniqueidentifier
Sample cursor:
DECLARE C_TicketHistorie CURSOR FOR
SELECT
dbo.TicketHistorie.TicketID,dbo.TicketHistorie.Datum,dbo.tickethistorie.tickethistorieid
FROM
dbo.TicketHistorie INNER JOIN
dbo.Status ON dbo.TicketHistorie.NeueStatusID = dbo.Status.StatusID
INNER JOIN dbo.StatuszuStatistikStatus as s on s.status_ID = dbo.Status.statusid
INNER JOIN dbo.StatistikStatus as ss on s.bewertung_id = ss.id
WHERE
ss.id = 5 AND -- 5 = HNR Terminbestätigung (appointment confirmation)
(dbo.Status.Name = N'Termin vereinbart')
AND ((YEAR(dbo.TicketHistorie.Datum) >= 2011 and day(dbo.TicketHistorie.Datum) >= 27 and month(dbo.TicketHistorie.Datum) >= 12)or YEAR(dbo.TicketHistorie.Datum) >= 2012)
ORDER BY TicketID,Datum asc
OPEN C_TicketHistorie;
FETCH NEXT FROM C_TicketHistorie into @TicketID, @Datumdatetime, @TickethistorieID
WHILE @@FETCH_STATUS = 0
BEGIN
-- some inserts etc.
FETCH NEXT FROM C_TicketHistorie into @TicketID, @Datumdatetime, @TickethistorieID
END
CLOSE C_TicketHistorie
DEALLOCATE C_TicketHistorie
I've got 4 cursors.
And some dynamic SQL like this:
SET @SQL = 'UPDATE Statistik.dbo.terminauswertungab27122011 SET ['
SET @SQL = @SQL + @StatistikStatus + ']='''
SET @SQL = @SQL + cast(@TicketHistorieID as NVARCHAR(36)) + ''''
SET @SQL = @SQL + ' WHERE ID = ' + cast(@ID as nvarchar) + ' and [' + @StatistikStatus + '] IS NULL'
EXEC (@SQL)
I call the procedure using SSMS.
At the beginning of the stored procedure I delete the table that the inserts go into. Then I do the inserts. The table rows are the same on every execution.

First off, change your cursor declaration to a more efficient one; the default is a global, updatable cursor, which is typically far more expensive than a local, read-only, static one:
DECLARE C_TicketHistorie CURSOR
LOCAL STATIC FORWARD_ONLY READ_ONLY
FOR
Next, are you sure that you need a cursor for these operations? It seems that your update, for example, could be accomplished as a single set-based operation instead of a cursor and dynamic SQL, especially if you know the set of column names that could be indicated by @StatistikStatus (where is this determined, by the way?). Here is how you could generate a set-based dynamic SQL update in one swoop instead of a cursor:
DECLARE @sql NVARCHAR(MAX) = N'';
WITH x AS
(
SELECT
-- why only use aliases for the tables you don't reference often?
th.TicketID, th.Datum, th.tickethistorieid
FROM
dbo.TicketHistorie AS th
INNER JOIN dbo.Status AS st
ON th.NeueStatusID = st.StatusID
INNER JOIN dbo.StatuszuStatistikStatus AS s
ON s.status_ID = st.StatusID
INNER JOIN dbo.StatistikStatus AS ss
ON s.bewertung_id = ss.id
WHERE
ss.id = 5 -- 5 = HNR Terminbestätigung (appointment confirmation)
AND st.Name = N'Termin vereinbart'
-- be smarter about date range queries!
AND th.Datum >= '20111227'
)
SELECT @sql += N'UPDATE Statistik.dbo.terminauswertungab27122011 SET ['
+ @StatistikStatus + N']='''
+ CAST(x.tickethistorieid AS NVARCHAR(36)) + N''''
+ N' WHERE ID = ' + CAST(@ID AS nvarchar) -- nvarchar(WHAT)? always specify a length
+ N' AND [' + @StatistikStatus + N'] IS NULL;'
FROM x;
EXEC (@sql);
Probably a lot more optimization possible here, but as the comments suggest, tough to do without more info.
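To make the "be smarter about date range queries" comment concrete: wrapping the column in YEAR()/MONTH()/DAY() makes the predicate non-sargable, so any index on Datum cannot be used for a seek. A single open-ended range does the same job (a sketch, assuming 27 Dec 2011 is the intended cutoff, as the table name terminauswertungab27122011 suggests):
-- non-sargable: functions applied to the column defeat index seeks
SELECT th.tickethistorieid
FROM dbo.TicketHistorie AS th
WHERE (YEAR(th.Datum) >= 2011 AND DAY(th.Datum) >= 27 AND MONTH(th.Datum) >= 12)
OR YEAR(th.Datum) >= 2012;
-- sargable equivalent: one open-ended range an index can seek on
SELECT th.tickethistorieid
FROM dbo.TicketHistorie AS th
WHERE th.Datum >= '20111227';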

Related

Stored procedure is taking too much time to update the table columns

I have created a stored procedure which is taking too much time to update the columns of the table: about 3 hours to update 2.5k records out of 43k.
How can I reduce the time it takes to update the records? Below is my logic.
procedure UPDATE_MST_INFO_BKC
(
P_SAPID IN NVARCHAR2
)
as
v_cityname varchar2(500):='';
v_neid varchar2(500):='';
v_latitude varchar2(500):='';
v_longitude varchar2(500):='';
v_structuretype varchar2(500):='';
v_jc_name varchar2(500):='';
v_jc_code varchar2(500):='';
v_company_code varchar2(500):='';
v_cnt number :=0;
begin
select count(*) into v_cnt from structure_enodeb_mapping where RJ_SAPID=P_SAPID and rownum=1;
if v_cnt > 0 then
begin
select RJ_CITY_NAME, RJ_NETWORK_ENTITY_ID,LATITUDE,LONGITUDE,RJ_STRUCTURE_TYPE,RJ_JC_NAME,RJ_JC_CODE,'6000'
into v_cityname,v_neid,v_latitude, v_longitude, v_structuretype,v_jc_name,v_jc_code,v_company_code from structure_enodeb_mapping where RJ_SAPID=P_SAPID and rownum=1;
update tbl_ipcolo_mast_info set
CITY_NAME = v_cityname,
NEID = v_neid,
FACILITY_LATITUDE = v_latitude,
FACILITY_LONGITUDE = v_longitude,
RJ_STRUCTURE_TYPE = v_structuretype,
RJ_JC_NAME = v_jc_name,
RJ_JC_CODE = v_jc_code,
COMPANY_CODE = v_company_code
where SAP_ID=P_SAPID;
end;
end if;
end UPDATE_MST_INFO_BKC;
What adjustments can I make to this?
As far as I understand your code, it is updating TBL_IPCOLO_MAST_INFO where SAP_ID = P_SAPID. That means it updates one record per call, and you must be calling the procedure once for each record.
It is better practice to call the procedure once and update all the records in one go (in your case, all 2.5k records should be updated in a single call of this procedure).
For your requirement, I have updated the procedure code to execute only a MERGE statement, which is equivalent to the multiple SQLs in your question for a single P_SAPID.
PROCEDURE UPDATE_MST_INFO_BKC (
P_SAPID IN NVARCHAR2
) AS
BEGIN
MERGE INTO TBL_IPCOLO_MAST_INFO I
USING (
SELECT
RJ_CITY_NAME,
RJ_NETWORK_ENTITY_ID,
LATITUDE,
LONGITUDE,
RJ_STRUCTURE_TYPE,
RJ_JC_NAME,
RJ_JC_CODE,
'6000' AS COMPANY_CODE,
RJ_SAPID
FROM
STRUCTURE_ENODEB_MAPPING
WHERE
RJ_SAPID = P_SAPID
AND ROWNUM = 1
)
O ON ( I.SAP_ID = O.RJ_SAPID )
WHEN MATCHED THEN
UPDATE SET I.CITY_NAME = O.RJ_CITY_NAME,
I.NEID = O.RJ_NETWORK_ENTITY_ID,
I.FACILITY_LATITUDE = O.LATITUDE,
I.FACILITY_LONGITUDE = O.LONGITUDE,
I.RJ_STRUCTURE_TYPE = O.RJ_STRUCTURE_TYPE,
I.RJ_JC_NAME = O.RJ_JC_NAME,
I.RJ_JC_CODE = O.RJ_JC_CODE,
I.COMPANY_CODE = O.COMPANY_CODE;
END UPDATE_MST_INFO_BKC;
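If you can also change the calling code, the same MERGE minus the P_SAPID filter updates every matching row in one statement instead of one call per record. A sketch, assuming RJ_SAPID uniquely identifies a row in STRUCTURE_ENODEB_MAPPING (if it does not, deduplicate the USING query first, since MERGE raises ORA-30926 on duplicate source rows):
MERGE INTO TBL_IPCOLO_MAST_INFO I
USING (
SELECT RJ_SAPID, RJ_CITY_NAME, RJ_NETWORK_ENTITY_ID, LATITUDE, LONGITUDE,
RJ_STRUCTURE_TYPE, RJ_JC_NAME, RJ_JC_CODE, '6000' AS COMPANY_CODE
FROM STRUCTURE_ENODEB_MAPPING
) O ON ( I.SAP_ID = O.RJ_SAPID )
WHEN MATCHED THEN
UPDATE SET I.CITY_NAME = O.RJ_CITY_NAME,
I.NEID = O.RJ_NETWORK_ENTITY_ID,
I.FACILITY_LATITUDE = O.LATITUDE,
I.FACILITY_LONGITUDE = O.LONGITUDE,
I.RJ_STRUCTURE_TYPE = O.RJ_STRUCTURE_TYPE,
I.RJ_JC_NAME = O.RJ_JC_NAME,
I.RJ_JC_CODE = O.RJ_JC_CODE,
I.COMPANY_CODE = O.COMPANY_CODE;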
Cheers!!
3 hours? That's way too long. Are the sap_id columns indexed? Even if they aren't, a data set of 43K rows is just too small for this to take hours.
How do you call that procedure? Is it part of other code, perhaps some unfortunate loop which does something row-by-row (which is, in turn, slow-by-slow)?
A few objections:
are all those variables really varchar2(500)? Consider declaring them so that they take the table column's datatype, e.g. v_cityname structure_enodeb_mapping.rj_city_name%type;. Also, there's no need to explicitly initialize them to null (:= ''); that is the default
the select statement which checks whether there's anything in the table for that parameter's value should be rewritten to use EXISTS, as it should perform better than the rownum = 1 condition you used (see the sketch after this list)
also, consider using exception handlers (no_data_found if there's no row for a certain ID; too_many_rows if there are two or more rows)
the select statement that collects data into variables has the same condition; do you really expect more than a single row for each ID (passed as a parameter)?
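A minimal sketch of the first two points (%type anchoring and the EXISTS test); the sample SAP id is a hypothetical value:
declare
v_cityname structure_enodeb_mapping.rj_city_name%type; -- anchored to the column's datatype
v_exists number;
begin
-- EXISTS can stop at the first matching row
select count(*)
into v_exists
from dual
where exists (select null
from structure_enodeb_mapping
where rj_sapid = 'SAP123'); -- hypothetical sample value
if v_exists > 0 then
dbms_output.put_line('found');
end if;
end;
/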
Anyway, the whole procedure's code can be shortened to a single update statement:
update tbl_ipcolo_mast_info t set
(t.city_name, t.neid, ...) = (select s.rj_city_name,
s.rj_network_entity_id, ...
from structure_enodeb_mapping s
where s.rj_sapid = t.sap_id
)
where t.sap_id = p_sapid;
If there is something to be updated, it will be. If there's no matching t.sap_id, nothing will happen.
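One caveat with that form: if tbl_ipcolo_mast_info has a row for p_sapid but structure_enodeb_mapping does not, the correlated subquery returns NULL and the columns are blanked out. Guarding the update with EXISTS avoids that; a sketch with the full column list taken from the question:
update tbl_ipcolo_mast_info t
set (t.city_name, t.neid, t.facility_latitude, t.facility_longitude,
t.rj_structure_type, t.rj_jc_name, t.rj_jc_code, t.company_code) =
(select s.rj_city_name, s.rj_network_entity_id, s.latitude, s.longitude,
s.rj_structure_type, s.rj_jc_name, s.rj_jc_code, '6000'
from structure_enodeb_mapping s
where s.rj_sapid = t.sap_id
and rownum = 1)
where t.sap_id = p_sapid
and exists (select null
from structure_enodeb_mapping s
where s.rj_sapid = t.sap_id);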

Create simple PL/SQL variable - Use Variable in WHERE clause

Thanks for looking...
I've spent hours researching this and I can't believe it's that difficult to do something in PL/SQL that is simple in T-SQL.
I have a simple query that joins 2 tables:
Select DISTINCT
to_char(TO_DATE('1899123000', 'yymmddhh24')+ seg.NOM_DATE, 'mm/dd/yyyy') AS "Record Date"
, cd.CODE
, EMP.ID
, EMP.SHORT_NAME
FROM
EWFM.GEN_SEG seg join EWFM.SEG_CODE cd ON seg.SEG_CODE_SK = cd.SEG_CODE_SK
join EMP on seg.EMP_SK = EMP.EMP_SK
where NOM_DATE = vMyDate;
I use Toad Data Point and I'm querying against an Oracle Exadata source. The resulting query will be dropped into a visualization tool like QlikView or Tableau. I'd like to create a simple variable to use in the WHERE clause, as you can see in the code.
In this example, NOM_DATE is an integer such as 42793 (2/27/2017) as you can see in the first row "Record Date". Nothing new here, not very exciting... Until... I tried to create a variable to make the query more dynamic.
I've tried a surprising variety of examples found here, all have failed. Such as:
declare
myDate number(8);
Begin
myDate := 42793;
--Fail ORA-06550 INTO Clause is expected
variable nomDate NUMBER
DEFINE nomDate = 42793
EXEC : nomDate := ' & nomDate'
...where NOM_DATE = ( & nomDate) ;
--ORA-00900: invalid SQL statement
and
variable nomDate NUMBER;
EXEC nomDate := 42793;
select count(DET_SEG_SK) from DET_SEG
where NOM_DATE = :nomDate;
--ORA-00900: invalid SQL statement
and several more... hopefully you get the idea. I've spent a few hours researching Stack Overflow for a correct answer, but as you can see, I'm asking you. From simple declarations like "VAR", to the more complex "DECLARE, BEGIN, SELECT INTO ...", to actually creating functions and using cursors to iterate the output, I still can't make a simple variable to use in a WHERE clause.
Please explain the error of my ways.
--Forlorn SQL Dev
Since you are using an implicit cursor, you have to SELECT ... INTO variables. I do not know the data types of your variables, so I have just guessed in the example below, but hopefully you get the point.
Two other things I should mention:
Why are you TO_CHARing your DATE? Just use a DATE datatype. Also, I think your format mask is wrong: 1899123000 does not match yymmddhh24.
An implicit cursor expects exactly one row; no rows and you get NO_DATA_FOUND, more than one and you get TOO_MANY_ROWS.
Declare
myDate number(8) := 42793;
/* These 4 variable data types are a guess */
v_record_date varchar2(8);
v_cd_code varchar2(10);
v_emp_id number(4);
v_emp_short_name varchar2(100);
BEGIN
Select DISTINCT to_char(TO_DATE('1899123000', 'yymmddhh24')
+ seg.NOM_DATE, 'mm/dd/yyyy') AS "Record Date"
, cd.CODE
, EMP.ID
, EMP.SHORT_NAME
INTO v_record_date, v_cd_code, v_emp_id, v_emp_short_name
FROM EWFM.GEN_SEG seg
join EWFM.SEG_CODE cd
ON seg.SEG_CODE_SK = cd.SEG_CODE_SK
join EMP
on seg.EMP_SK = EMP.EMP_SK
where NOM_DATE = myDate;
END;
/
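The bind-variable approach below also works; note that VARIABLE and EXEC are client script commands (SQL*Plus, SQL Developer, and Toad understand them in script mode), not SQL or PL/SQL, which is why submitting them through a plain SQL editor raises ORA-00900.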
VARIABLE vMyDate NUMBER;
BEGIN
:vMyDate := 42793;
END;
/
-- or
-- EXEC :vMyDate := 42793;
SELECT DISTINCT
TO_CHAR( DATE '1899-12-30' + seg.NOM_DATE, 'mm/dd/yyyy') AS "Record Date"
, cd.CODE
, EMP.ID
, EMP.SHORT_NAME
FROM EWFM.GEN_SEG seg
join EWFM.SEG_CODE cd
ON seg.SEG_CODE_SK = cd.SEG_CODE_SK
join EMP
on seg.EMP_SK = EMP.EMP_SK
WHERE NOM_DATE = :vMyDate;
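As a quick sanity check on the serial-date arithmetic (a one-off query using the value from the question):
SELECT DATE '1899-12-30' + 42793 AS record_date FROM dual;
-- 27-FEB-2017, matching the question's 42793 = 2/27/2017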
Alternatively, you can put the variable, with a getter and setter, in a package,
and then use a view that calls the package getter.
Personally I prefer to use a collection; that way I can do a select * from table(package.func(myparam))
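A minimal sketch of that package-plus-view idea (all names here are hypothetical, modeled on the query above):
create or replace package nom_date_pkg as
procedure set_nom_date(p_val number);
function get_nom_date return number;
end nom_date_pkg;
/
create or replace package body nom_date_pkg as
g_nom_date number; -- session-scoped package state
procedure set_nom_date(p_val number) is
begin
g_nom_date := p_val;
end;
function get_nom_date return number is
begin
return g_nom_date;
end;
end nom_date_pkg;
/
-- a view bakes the getter into the WHERE clause:
create or replace view v_gen_seg_by_nom_date as
select * from EWFM.GEN_SEG where NOM_DATE = nom_date_pkg.get_nom_date;
-- usage: exec nom_date_pkg.set_nom_date(42793); then query the view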

Merge statement without affecting records where there is no change in data

I have a stored procedure that takes data from several tables and creates a new table with just the columns I want. I now want to increase performance by only attempting to insert/update rows that have at least one column of new data. For existing rows that would only receive the exact data it already has, I want to skip the update altogether for that row.
For example if a row contains the data:
ID | date | population | gdp
15 | 01-JUN-10 | 1,530,000 | $67,000,000,000
and the merge statement comes for ID 15 and date 01-JUN-10 with population 1,530,000 and gdp $67,000,000,000 then I don't want to update that row.
Here are some snippets of my code:
create or replace PROCEDURE COUNTRY (
fromDate IN DATE,
toDate IN DATE,
filterDown IN INT,
chunkSize IN INT
) AS
--cursor
cursor cc is
select c.id, cd.population_total_count, cd.evaluation_date, cf.gdp_total_dollars
from countries c
join country_demographics cd on c.id = cd.country_id
join country_financials cf on cd.country_id = cf.country_id and cf.evaluation_date = cd.evaluation_date
where cd.evaluation_date > fromDate and cd.evaluation_date < toDate
order by c.id,cd.evaluation_date;
--table
type cc_table is table of cc%rowtype;
c_table cc_table;
BEGIN
open cc;
loop -- cc loop
fetch cc bulk collect into c_table limit chunkSize; --limit by chunkSize parameter
forall j in 1..c_table.count
merge
into F_AMB_COUNTRY_INFO_16830 tgt
using (
select c_table(j).id cid,
c_table(j).evaluation_date eval_date,
c_table(j).population_total_count pop,
c_table(j).gdp_total_dollars gdp
from dual
) src
on ( cid = tgt.country_id AND eval_date = tgt.evaluation_date )
when matched then
update
set tgt.population_total_count = pop,
tgt.gdp_total_dollars = gdp
when not matched then
insert (
tgt.country_id,
tgt.evaluation_date,
tgt.population_total_count,
tgt.gdp_total_dollars )
values (
cid,
eval_date,
pop,
gdp );
exit when c_table.count = 0; --quit condition for cc loop
end loop; --end cc loop
close cc;
EXCEPTION
when ACCESS_INTO_NULL then -- catch error when table does not exist
dbms_output.put_line('Error ' || SQLCODE || ': ' || SQLERRM);
END ;
I was thinking that in the on statement, I could just say something along the lines of:
on ( cid = tgt.country_id AND eval_date = tgt.evaluation_date
AND pop != tgt.population_total_count AND gdp != tgt.gdp_total_dollars )
but surely there's a cleaner / more efficient way to do it?
The other way you could do it is to use ORA_HASH to get a hash of the row, and only update when the source and target hashes differ. So your where clause could be something like:
where ora_hash(src.col1 || src.col2 || src.col3 || src.col4) <> ora_hash(tgt.col1 || tgt.col2 || tgt.col3 || tgt.col4)
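In the context of the question's MERGE, that filter hangs off the WHEN MATCHED branch. A sketch (the '|' separator is an assumption, added so that adjacent values cannot concatenate to the same string in two different ways):
when matched then
update
set tgt.population_total_count = src.pop,
tgt.gdp_total_dollars = src.gdp
where ora_hash(src.pop || '|' || src.gdp)
<> ora_hash(tgt.population_total_count || '|' || tgt.gdp_total_dollars)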

Drop all the synonyms using cursor

I want to drop all the synonyms of a database (SQL Server 2008 R2) using a cursor.
Environment: database name 'mydatabase', schema name 'dbo'.
Can you please guide me? I did try, but my WHILE ... END loop is not able to drop the synonyms.
What logic should be applied with respect to the cursor?
No need to use a cursor. Do it as set:
declare @n char(1)
set @n = char(10)
declare @stmt nvarchar(max)
select @stmt = isnull( @stmt + @n, '' ) +
'drop synonym [' + SCHEMA_NAME(schema_id) + '].[' + name + ']'
from sys.synonyms
exec sp_executesql @stmt
Similar to Jason's answer with some improvements
Use Quotename() function to wrap the names in square brackets
Initialize the @SQL variable to an empty string; this means the isnull is not required, and it ensures that when you concatenate the results of the query into a single string, the expression doesn't get the type wrong. String literals in a concatenation can take the default nvarchar size and cause your resulting string to be truncated unexpectedly.
Ensure the string literals are also nvarchar by using the N in front of them.
Filter to the dbo schema only, as the OP requested.
Add the sys schema to the sp_executesql call
Totally agree this is not something where you need a cursor.
DECLARE #SQL NVARCHAR(MAX) = N''
SELECT #SQL += N'DROP SYNONYM ' + QUOTENAME(SCHEMA_NAME([schema_id])) + N'.' + QUOTENAME(name) + N';' + Char(13) + Char(10)
FROM sys.synonyms
WHERE SCHEMA_NAME([schema_id]) = N'dbo'
EXEC sys.sp_executesql #SQL
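If you want to sanity-check the generated batch before running it, print it first (note that PRINT only shows roughly the first 4,000 characters of an nvarchar(max), which is usually enough for a spot check):
PRINT @SQL;
-- then: EXEC sys.sp_executesql @SQL;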

ways to avoid global temp tables in oracle

We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1 ...); these got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs.
Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues.
What other alternatives are there? Collections? Cursors?
Our typical use of GTT's is like so:
Insert into GTT
INSERT INTO some_gtt_1
(column_a,
column_b,
column_c)
(SELECT someA,
someB,
someC
FROM TABLE_A
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y'));
Update the GTT
UPDATE some_gtt_1 a
SET column_a = (SELECT b.data_a FROM some_table_b b
WHERE a.column_b = b.data_b AND a.column_c = 'C')
WHERE column_a IS NULL OR column_a = ' ';
and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries.
I have a three-part question:
Can someone show how to transform the above sample queries to collections and/or cursors?
Since with GTTs you can work natively with SQL... why go away from the GTTs? Are they really that bad?
What should be the guidelines on when to use and when to avoid GTTs?
Let's answer the second question first:
"why go away from the GTTs? are they really that bad?"
A couple of days ago I was knocking up a proof of concept which loaded a largish XML file (~18MB) into an XMLType. Because I didn't want to store the XMLType permanently I tried loading it into a PL/SQL variable (session memory) and a temporary table. Loading it into a temporary table took five times as long as loading it into an XMLType variable (5 seconds compared to 1 second). The difference is because temporary tables are not memory structures: they are written to disk (specifically your nominated temporary tablespace).
If you want to cache a lot of data then storing it in memory will stress the PGA, which is not good if you have lots of sessions. So it's a trade-off between RAM and time.
To the first question:
"Can someone show how to transform the above sample queries to collections and/or cursors?"
The queries you post can be merged into a single statement:
SELECT case when a.someA IS NULL OR a.someA = ' '
then b.data_a
else a.someA end AS someA,
a.someB,
a.someC
FROM TABLE_A a
left outer join some_table_b b
on ( a.someB = b.data_b AND a.someC = 'C' )
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = 12
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y');
(I have simply transposed your logic, but that case() expression could be replaced with a neater nvl2(trim(a.someA), a.someA, b.data_a).)
I know you say your queries are more complicated but your first port of call should be to consider rewriting them. I know how seductive it is to break a gnarly query into lots of baby SQLs stitched together with PL/SQL but pure SQL is way more efficient.
To use a collection it is best to define the types in SQL, because it gives us the flexibility to use them in SQL statements as well as PL/SQL.
create or replace type tab_a_row as object
(col_a number
, col_b varchar2(23)
, col_c date);
/
create or replace type tab_a_nt as table of tab_a_row;
/
Here's a sample function, which returns a result set:
create or replace function get_table_a
(p_arg in number)
return sys_refcursor
is
tab_a_recs tab_a_nt;
rv sys_refcursor;
begin
select tab_a_row(col_a, col_b, col_c)
bulk collect into tab_a_recs
from table_a
where col_a = p_arg;
for i in tab_a_recs.first()..tab_a_recs.last()
loop
if tab_a_recs(i).col_b is null
then
tab_a_recs(i).col_b := 'something';
end if;
end loop;
open rv for select * from table(tab_a_recs);
return rv;
end;
/
And here it is in action:
SQL> select * from table_a
2 /
COL_A COL_B COL_C
---------- ----------------------- ---------
1 whatever 13-JUN-10
1 12-JUN-10
SQL> var rc refcursor
SQL> exec :rc := get_table_a(1)
PL/SQL procedure successfully completed.
SQL> print rc
COL_A COL_B COL_C
---------- ----------------------- ---------
1 whatever 13-JUN-10
1 something 12-JUN-10
SQL>
In the function it is necessary to instantiate the type with the columns, in order to avoid the ORA-00947 exception. This is not necessary when populating a PL/SQL table type:
SQL> create or replace procedure pop_table_a
2 (p_arg in number)
3 is
4 type table_a_nt is table of table_a%rowtype;
5 tab_a_recs table_a_nt;
6 begin
7 select *
8 bulk collect into tab_a_recs
9 from table_a
10 where col_a = p_arg;
11 end;
12 /
Procedure created.
SQL>
Finally, the guidelines:
"What should be the guidelines on when to use and when to avoid GTTs?"
Global temp tables are very good when we need to share cached data between different program units in the same session. For instance if we have a generic report structure generated by a single function feeding off a GTT which is populated by one of several procedures. (Although even that could also be implemented with dynamic ref cursors ...)
Global temporary tables are also good if we have a lot of intermediate processing which is just too complicated to be solved with a single SQL query. Especially if that processing must be applied to subsets of the retrieved rows.
But in general the presumption should be that we don't need to use a temporary table. So:
Do it in SQL unless it is too hard, in which case ...
... do it in PL/SQL variables (usually collections) unless that takes too much memory, in which case ...
... do it with a Global Temporary Table
Generally I'd use a PL/SQL collection for storing small volumes of data (maybe a thousand rows). If the data volumes were much larger, I'd use a GTT so that they don't overload the process memory.
So I might select a few hundred rows from the database into a PL/SQL collection, then loop through them to do some calculation/delete a few or whatever, then insert that collection into another table.
If I was dealing with hundreds of thousands of rows, I would try to push as much of the 'heavy lifting' processing into large SQL statements. That may or may not require GTT.
You can use SQL-level collection objects as something that translates quite easily between SQL and PL/SQL:
create type typ_car is object (make varchar2(10), model varchar2(20), year number(4));
/
create type typ_coll_car is table of typ_car;
/
select * from table (typ_coll_car(typ_car('a','b',1999), typ_car('A','Z',2000)));
MAKE MODEL YEAR
---------- -------------------- ---------------
a b 1,999.00
A Z 2,000.00
declare
v_car1 typ_car := typ_car('a','b',1999);
v_car2 typ_car := typ_car('A','Z',2000);
t_car typ_coll_car := typ_coll_car();
begin
t_car := typ_coll_car(v_car1, v_car2);
FOR i in (SELECT * from table(t_car)) LOOP
dbms_output.put_line(i.year);
END LOOP;
end;
/
