Oracle subquery internal error

The query in Listing 1 joins two inline subqueries, both of which are computed from two named subqueries (ANIMAL and SEA_CREATURE). The output should list the animals that don't live in the sea and the animals that do.
When the query is run in a console window (SQL Navigator 5.5), the server returns this error:
15:21:30 ORA-00600: internal error code, arguments: [evapls1], [], [], [], [], [], [], []
Why? And how to get around it?
Interestingly, I can run the same query from a program written in Delphi XE7 (using a TSQLQuery component) and it works fine. But this is not a problem with SQL Navigator: if I create a view containing the expression in Listing 1, selecting from the view does not produce an error. The problem is in the Oracle server.
If I make the ANIMAL subquery really simple, as in Listing 2, it works. But anything else, even just selecting from a table, results in this internal error.
Listing 1: (Outputs error)
with ANIMAL as (
    select ANIMAL_NAME
    from xmltable( 't/e'
             passing xmltype( '<t><e>Tuna</e><e>Cat</e><e>Dolphin</e><e>Swallow</e></t>')
             columns ANIMAL_NAME varchar2(100) path 'text()')),
SEA_CREATURE as (
    select 'Tuna' as CREATURE_NAME from dual
    union all select 'Shark' from dual
    union all select 'Dolphin' from dual
    union all select 'Plankton' from dual)
select NONSEA_ANIMALS, SEA_ANIMALS
from (select stringagg( ANIMAL_NAME) as NONSEA_ANIMALS
      from ( (select * from ANIMAL)
             minus
             (select CREATURE_NAME as ANIMAL_NAME from SEA_CREATURE))),
     (select stringagg( ANIMAL_NAME) as SEA_ANIMALS
      from ANIMAL
      where ANIMAL_NAME in
            (select CREATURE_NAME as ANIMAL_NAME from SEA_CREATURE))
Listing 2: (This works)
with ANIMAL as (
    select 'Tuna' as ANIMAL_NAME from dual
    union all select 'Cat' from dual
    union all select 'Dolphin' from dual
    union all select 'Swallow' from dual),
SEA_CREATURE as (
    select 'Tuna' as CREATURE_NAME from dual
    union all select 'Shark' from dual
    union all select 'Dolphin' from dual
    union all select 'Plankton' from dual)
select NONSEA_ANIMALS, SEA_ANIMALS
from (select stringagg( ANIMAL_NAME) as NONSEA_ANIMALS
      from ( (select * from ANIMAL)
             minus
             (select CREATURE_NAME as ANIMAL_NAME from SEA_CREATURE))),
     (select stringagg( ANIMAL_NAME) as SEA_ANIMALS
      from ANIMAL
      where ANIMAL_NAME in
            (select CREATURE_NAME as ANIMAL_NAME from SEA_CREATURE));
Listing 3: Expected output for expressions in both Listings 1 & 2:
NONSEA_ANIMALS   SEA_ANIMALS
---------------- ----------------
'Cat,Swallow'    'Tuna,Dolphin'
The Oracle banner is shown in Listing 4.
Listing 4: select * from v$version
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
How is this craziness explained?
Update
Here is the explain plan ...
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------
| Id | Operation | Name |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TEMP TABLE TRANSFORMATION | |
| 2 | LOAD AS SELECT | |
| 3 | VIEW | |
| 4 | COLLECTION ITERATOR PICKLER FETCH| XMLSEQUENCEFROMXMLTYPE |
| 5 | LOAD AS SELECT | |
| 6 | UNION-ALL | |
| 7 | FAST DUAL | |
| 8 | FAST DUAL | |
| 9 | FAST DUAL | |
| 10 | FAST DUAL | |
| 11 | NESTED LOOPS | |
| 12 | VIEW | |
| 13 | SORT AGGREGATE | |
| 14 | VIEW | |
| 15 | MINUS | |
| 16 | SORT UNIQUE | |
| 17 | VIEW | |
| 18 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6666_765BCCBD |
| 19 | SORT UNIQUE | |
| 20 | VIEW | |
| 21 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6667_765BCCBD |
| 22 | VIEW | |
| 23 | SORT AGGREGATE | |
| 24 | HASH JOIN RIGHT SEMI | |
| 25 | VIEW | VW_NSO_1 |
| 26 | VIEW | |
| 27 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6667_765BCCBD |
| 28 | VIEW | |
| 29 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6666_765BCCBD |
----------------------------------------------------------------------------

ORA-03113 and ORA-00600 usually happen with WITH-clause queries when something fatal happens during execution.
Oracle's subquery factoring (the WITH clause) can be overused at times. Oracle may create a global temporary table for each query inside the WITH clause in order to reuse its results. So the XMLTABLE() here could have created yet another GTT, and perhaps that is what crashed the database.
COLLECTION ITERATOR PICKLER FETCH appears when data is fetched from a PL/SQL object; it returns pickled (packed and formatted) data. That may involve the creation of another temporary table underneath, as mentioned above. So the subquery factoring and the PL/SQL collection selection did not go well together.
I have also seen queries with nested UNION ALL inside WITH crash in the same way.
This is most likely a bug in Oracle and should be reported to them.
The only way to get around it for now is to rewrite the query. In our application, the use of WITH is strictly restricted (due to high CPU usage) to report-only purposes executed as batch jobs.
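Following that advice, one possible rewrite is to drop the subquery factoring for the XMLTABLE source and inline it wherever ANIMAL is needed, so the optimizer never has to spool the pickler fetch into a temporary table. A rough sketch follows; it keeps the stringagg call from the question (which appears to be a user-defined aggregate, since LISTAGG does not exist in 10g), and whether it actually avoids the ORA-00600 on 10.2.0.4 would have to be tested:
-- Sketch: the XMLTABLE source is inlined instead of factored into WITH,
-- so no TEMP TABLE TRANSFORMATION is needed for it.
with SEA_CREATURE as (
    select 'Tuna' as CREATURE_NAME from dual
    union all select 'Shark' from dual
    union all select 'Dolphin' from dual
    union all select 'Plankton' from dual)
select NONSEA_ANIMALS, SEA_ANIMALS
from (select stringagg( ANIMAL_NAME) as NONSEA_ANIMALS
      from (select ANIMAL_NAME
            from xmltable( 't/e'
                     passing xmltype( '<t><e>Tuna</e><e>Cat</e><e>Dolphin</e><e>Swallow</e></t>')
                     columns ANIMAL_NAME varchar2(100) path 'text()')
            minus
            select CREATURE_NAME from SEA_CREATURE)),
     (select stringagg( ANIMAL_NAME) as SEA_ANIMALS
      from xmltable( 't/e'
               passing xmltype( '<t><e>Tuna</e><e>Cat</e><e>Dolphin</e><e>Swallow</e></t>')
               columns ANIMAL_NAME varchar2(100) path 'text()')
      where ANIMAL_NAME in (select CREATURE_NAME from SEA_CREATURE));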

Related

Oracle 11g insert into select from a table with duplicate rows

I have one table that needs to be split into several other tables, but the main table is just a transit (staging) table.
I dump data from an Excel file into it (from 5k to 200k rows) and, using INSERT INTO ... SELECT, split it into the correct tables (five different tables).
However, the latest dataset my client sent has records with duplicate values.
The primary key for my table is usually ENI, but even that gets duplicated because the same company can be both a customer and a service provider, so it has two different registrations that use the same ENI.
What I have so far:
I found a script that uses MERGE and modified it to find rows with the same ENI and give them all the same main_id.
| Main_id | ENI    | company_name | Type |
|---------|--------|--------------|------|
| 1       | 1864   | JOHN         | C    |
| 2       | 351485 | JOEL         | C    |
| 3       | 16546  | MICHEL       | C    |
| 2       | 351485 | JOEL J.      | S    |
| 1       | 1864   | JOHN E. E.   | C    |
Main_id: primary key that the main DB uses
ENI: unique company number
Type: 'C' - CUSTOMER, 'S' - SERVICE PROVIDER
In some cases both rows can have the same type, just like id 1.
There are several other columns...
What I need:
Insert any one of the rows per main_id (my other script already sorts them) and set a flag on the others to mark that they were not inserted. I can't delete any data because I'll need to send this information back to the customer for validation.
Or, if it simply can't be done this way, I'll go back to good old Excel.
Edit: in response to a question below, here is an example:
| Main_id | ENI    | company_name | Type | RANK |
|---------|--------|--------------|------|------|
| 1       | 1864   | JOHN         | C    | 1    |
| 2       | 351485 | JOEL         | C    | 1    |
| 3       | 16546  | MICHEL       | C    | 1    |
| 2       | 351485 | JOEL J.      | S    | 2    |
| 1       | 1864   | JOHN E. E.   | C    | 2    |
RANK: since, for example, ENI 1864 appears twice, the first occurrence found gets 1, the second gets 2, and so on. I tried using:
RANK() OVER (PARTITION BY MAIN_ID ORDER BY ENI)
RANK() OVER (PARTITION BY company_name ORDER BY ENI)
Thanks to TEJASH I was able to come up with this solution:
MERGE INTO TABLEA S
USING (SELECT ROWID AS ID,
              ROW_NUMBER() OVER (PARTITION BY eni ORDER BY eni, type) AS RANK_DUPLICATED
       FROM TABLEA
      ) T
ON (S.ROWID = T.ID)
WHEN MATCHED THEN UPDATE SET S.RANK_DUPLICATED = T.RANK_DUPLICATED;
As far as I understand your problem, you just need to identify the duplicates based on two columns. You can achieve that using an analytic function as follows:
Select t.*,
row_number() Over(partition by main_id, eni order by company_name) as rnk
From your_table t
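To also cover the "insert one and flag the others" requirement from the question, the rnk value can drive both steps. A rough sketch, assuming a flag column (here called NOT_INSERTED_FLAG, an illustrative name) has been added to the staging table and that TARGET_TABLE stands in for one of the five real destination tables:
-- Flag every row except the first one per (main_id, ENI).
MERGE INTO TABLEA s
USING (SELECT ROWID AS rid,
              ROW_NUMBER() OVER (PARTITION BY main_id, eni ORDER BY company_name) AS rnk
       FROM TABLEA) t
ON (s.ROWID = t.rid)
WHEN MATCHED THEN
  UPDATE SET s.NOT_INSERTED_FLAG = CASE WHEN t.rnk > 1 THEN 'Y' ELSE 'N' END;

-- Then split only the unflagged rows into the destination tables.
INSERT INTO TARGET_TABLE (main_id, eni, company_name, type)
SELECT main_id, eni, company_name, type
FROM TABLEA
WHERE NOT_INSERTED_FLAG = 'N';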

(Nested?) Select statement with MAX and WHERE clause

I'm racking my brain over a set of data in order to generate a report from an Oracle DB.
Data are in two tables:
SUPPLY
DEVICE
There is only one column that links the two tables:
SUPPLY.DEVICE_ID
DEVICE.ID
In SUPPLY there is this data:
| DEVICE_ID | COLOR_TYPE | SERIAL | UNINSTALL_DATE |
|----------- |------------ |-------------- |--------------------- |
| 1232 | 1 | CAP857496 | 08/11/2016,19:10:50 |
| 5263 | 2 | CAP57421 | 07/11/2016,11:20:00 |
| 758 | 3 | CBO753421869 | 07/11/2016,04:25:00 |
| 758 | 4 | CC9876543 | 06/11/2016,11:40:00 |
| 8575 | 4 | CVF75421 | 05/11/2016,23:59:00 |
| 758 | 4 | CAP67543 | 30/09/2016,11:00:00 |
In DEVICE there are columns that I have to select all of (more or less), but each row is unique.
What I need to achieve is:
for each SUPPLY.DEVICE_ID and SUPPLY.COLOR_TYPE, I need the most recent ROW -> MAX(UNINSTALL_DATE)
JOINED with
more or less all the columns in DEVICE.
At the end I should have something like this:
| ACCOUNT_CODE | MODEL | DEVICE.SERIAL | DEVICE_ID | COLOR_TYPE | SUPPLY.SERIAL | UNINSTALL_DATE |
|-------------- |------- |--------------- |----------- |------------ |--------------- |--------------------- |
| BUSTO | MS410 | LM753 | 1232 | 1 | CAP857496 | 08/11/2016,19:10:50 |
| MACCHI | MX310 | XC876 | 5263 | 2 | CAP57421 | 07/11/2016,11:20:00 |
| ASL_COMO | MX711 | AB123 | 758 | 3 | CBO753421869 | 07/11/2016,04:25:00 |
| ASL_COMO | MX711 | AB123 | 758 | 4 | CC9876543 | 06/11/2016,11:40:00 |
| ASL_VARESE | X950 | DE8745 | 8575 | 4 | CVF75421 | 05/11/2016,23:59:00 |
So far, using a nested select like:
SELECT DEVICE_ID, COLOR_TYPE, SERIAL, UNINSTALL_DATE FROM
  (SELECT DEVICE_ID, COLOR_TYPE, SERIAL, UNINSTALL_DATE
   FROM SUPPLY WHERE DEVICE_ID = '123456' ORDER BY UNINSTALL_DATE DESC)
WHERE ROWNUM <= 1
I managed to get the highest value of the UNINSTALL_DATE column, after first trying MAX(UNINSTALL_DATE) and HIGHEST(UNINSTALL_DATE).
I tried also:
SELECT SUPPLY.DEVICE_ID, SUPPLY.COLOR_TYPE, ....
FROM SUPPLY,DEVICE WHERE SUPPLY.DEVICE_ID = DEVICE.ID
and it works, but it gives me ALL the items; basically it's a merge of the two tables.
When I try to narrow down the selected data, I get errors or an empty result.
I'm starting to think that it's not possible to obtain this data, and to consider exporting the data to Excel and working from there, but I hope someone can help me before I give up...
Thank you in advance.
for each SUPPLY.DEVICE_ID and SUPPLY.COLOR_TYPE, I need the most recent ROW -> MAX(UNINSTALL_DATE)
Use the ROW_NUMBER function in this way:
SELECT s.*,
row_number() OVER (
PARTITION BY DEVICE_ID, COLOR_TYPE
ORDER BY UNINSTALL_DATE DESC
) As RN
FROM SUPPLY s
This query marks the most recent rows with RN = 1.
JOINED with more or less all the columns in DEVICE.
Just join the above query to the DEVICE table:
SELECT d.*,
x.COLOR_TYPE,
x.SERIAL,
x.UNINSTALL_DATE
FROM (
SELECT s.*,
row_number() OVER (
PARTITION BY DEVICE_ID, COLOR_TYPE
ORDER BY UNINSTALL_DATE DESC
) As RN
FROM SUPPLY s
) x
JOIN DEVICE d
ON d.ID = x.DEVICE_ID AND x.RN = 1
OK - so you could group by device_id, color_type and select max(uninstall_date) as well, and join to the other table. But you would miss the serial value for the most recent row (for each combination of device_id, color_type).
There are a few ways to fix that. Your attempt with rownum was close, but the problem is that you need to order within each "group" (by device_id, color_type) and get the first row from each group. I am sure someone will post a solution along those lines, using either row_number() or rank() or perhaps the analytic version of max(uninstall_date).
When you just need the "top" row from each group, you can use keep (dense_rank first/last) - which may be slightly more efficient - like so:
select device_id, color_type,
max(serial) keep (dense_rank last order by uninstall_date) as serial,
max(uninstall_date) as uninstall_date
from supply
group by device_id, color_type
;
and then join to the other table. NOTE: dense_rank last will pick up the row OR ROWS with the most recent (max) date for each group. If there are ties, that is more than one row; the serial will then be the max (in lexicographical order) among those rows with the most recent date. You can also select min, or add some order so you pick a specific one (you didn't discuss this possibility).
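For completeness, the join mentioned above could look roughly like this; a sketch only, where ACCOUNT_CODE and MODEL are taken from the expected output and the join condition DEVICE.ID = SUPPLY.DEVICE_ID comes from the question:
select d.ACCOUNT_CODE,
       d.MODEL,
       d.SERIAL        as DEVICE_SERIAL,
       s.DEVICE_ID,
       s.COLOR_TYPE,
       s.SERIAL        as SUPPLY_SERIAL,
       s.UNINSTALL_DATE
from (select device_id, color_type,
             max(serial) keep (dense_rank last order by uninstall_date) as serial,
             max(uninstall_date) as uninstall_date
      from supply
      group by device_id, color_type) s
join DEVICE d on d.ID = s.DEVICE_ID;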
SELECT
d.ACCOUNT_CODE, d.DNS_HOST_NAME,d.IP_ADDRESS,d.MODEL_NAME,d.OVERRIDE_SERIAL_NUMBER,d.SERIAL_NUMBER,
s.COLOR, s.SERIAL_NUMBER, s.UNINSTALL_TIME
FROM (
SELECT s.DEVICE_ID, s.LAST_LEVEL_READ, s.SERIAL_NUMBER,TRUNC(s.UNINSTALL_TIME), row_number()
OVER (
PARTITION BY DEVICE_ID, COLOR
ORDER BY UNINSTALL_TIME DESC
) As RN
FROM SUPPLY s
WHERE s.UNINSTALL_TIME IS NOT NULL AND s.SERIAL_NUMBER IS NOT NULL
)
JOIN DEVICE d
ON d.ID = s.DEVICE_ID AND s.RN=1;
@krokodilko: thank you very much for your help. The first query works. I modified it to remove the junk, putting in the real column names I need (yesterday evening I had no access to the DB) and selecting only the data I need.
Unfortunately, when I join the two tables as you suggested I get an error:
ORA-00904: "S"."RN": invalid identifier
00904. 00000 - "%s: invalid identifier"
If I remove the s. prefix from RN, the ORA-00904 moves back to s.DEVICE_ID.
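The ORA-00904 happens because the inline view in that last query has no alias, so the outer references to s no longer resolve to anything (the s defined inside the subquery is not visible outside it). A hedged correction, which also has to select COLOR in the inner query (the outer query refers to it) and give the TRUNC expression an alias:
SELECT d.ACCOUNT_CODE, d.DNS_HOST_NAME, d.IP_ADDRESS, d.MODEL_NAME,
       d.OVERRIDE_SERIAL_NUMBER, d.SERIAL_NUMBER AS DEVICE_SERIAL,
       x.COLOR, x.SERIAL_NUMBER AS SUPPLY_SERIAL, x.UNINSTALL_TIME
FROM (SELECT s.DEVICE_ID, s.COLOR, s.LAST_LEVEL_READ, s.SERIAL_NUMBER,
             TRUNC(s.UNINSTALL_TIME) AS UNINSTALL_TIME,
             ROW_NUMBER() OVER (
                 PARTITION BY s.DEVICE_ID, s.COLOR
                 ORDER BY s.UNINSTALL_TIME DESC
             ) AS RN
      FROM SUPPLY s
      WHERE s.UNINSTALL_TIME IS NOT NULL AND s.SERIAL_NUMBER IS NOT NULL) x
JOIN DEVICE d
  ON d.ID = x.DEVICE_ID AND x.RN = 1;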

add column check for format number to number oracle

I need to add a column to a table that checks that the input is a score with a maximum value of 999 to 999, like a soccer match score. How do I write this statement?
example:
| Score |
---------
| 1-2 |
| 10-1 |
|999-999|
| 99-99 |
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE SCORES (Score ) AS
SELECT '1-2' FROM DUAL
UNION ALL SELECT '10-1' FROM DUAL
UNION ALL SELECT '999-999' FROM DUAL
UNION ALL SELECT '99-99' FROM DUAL
UNION ALL SELECT '1000-1000' FROM DUAL;
Query 1:
SELECT SCORE,
CASE WHEN REGEXP_LIKE( SCORE, '^\d{1,3}-\d{1,3}$' )
THEN 'Valid'
ELSE 'Invalid'
END AS Validity
FROM SCORES
Results:
| SCORE | VALIDITY |
|-----------|----------|
| 1-2 | Valid |
| 10-1 | Valid |
| 999-999 | Valid |
| 99-99 | Valid |
| 1000-1000 | Invalid |
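Since the question asks for a column check rather than just a query, the same regular expression can be used in a check constraint. A minimal sketch on the table above (the constraint name is illustrative):
-- Enforce the "up to three digits, dash, up to three digits" format on Score.
ALTER TABLE SCORES ADD CONSTRAINT SCORES_SCORE_FORMAT_CHK
  CHECK ( REGEXP_LIKE( Score, '^\d{1,3}-\d{1,3}$' ) );
Note that, on the fiddle table above, adding the constraint will fail with ORA-02293 until the invalid '1000-1000' row is removed, because existing rows are validated by default.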

oracle faster paging query

I have two paging queries that I am considering using.
The first one is:
SELECT * FROM ( SELECT rownum rnum, a.* from (
select * from members
) a WHERE rownum <= #paging.endRow# ) where rnum > #paging.startRow#
And the second is:
SELECT * FROM ( SELECT rownum rnum, a.* from (
select * from members
) a ) WHERE rnum BETWEEN #paging.startRow# AND #paging.endRow#
Which query do you think is the faster one?
I don't have an Oracle instance available right now, but the best SQL query for paging is surely the following:
select *
from (
select rownum as rn, a.*
from (
select *
from my_table
order by ....a_unique_criteria...
) a
)
where rownum <= :size
and rn > (:page-1)*:size
http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html
To achieve consistent paging you should order rows by a unique criterion; doing so avoids loading, for page X, a row that was already loaded for a different page Y (!= X).
EDIT:
1) Ordering rows by a unique criterion means ordering the data in such a way that each row keeps the same position at every execution of the query.
2) An index on all the expressions used in the ORDER BY clause will help return results faster, especially for the first pages (see the sketch after this list). With that index, the execution plan chosen by the optimizer doesn't need to sort the rows, because it can return them by scrolling the index in its natural order.
3) By the way, the fastest way to page the results of a query is to execute the query only once and handle all the paging on the application side.
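As a sketch of point 2, assuming the paging query orders by a unique key such as id (the table and column names are illustrative):
-- An index matching the ORDER BY lets the optimizer walk the index in order
-- instead of sorting the whole result set before paging.
CREATE INDEX members_page_ix ON members (id);

SELECT *
FROM (SELECT ROWNUM AS rn, a.*
      FROM (SELECT * FROM members ORDER BY id) a
      WHERE ROWNUM <= :endrow)
WHERE rn > :startrow;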
Take a look at the execution plans, example with 1000 rows:
SELECT *
FROM (SELECT ROWNUM rnum
,a.*
FROM (SELECT *
FROM members) a
WHERE ROWNUM <= endrow#)
WHERE rnum > startrow#;
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 39000 | 3 (0)| 00:00:01 |
|* 1 | VIEW | | 1000 | 39000 | 3 (0)| 00:00:01 |
| 2 | COUNT | | | | | |
|* 3 | FILTER | | | | | |
| 4 | TABLE ACCESS FULL| MEMBERS | 1000 | 26000 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RNUM">"STARTROW#")
3 - filter("MEMBERS"."ENDROW#">=ROWNUM)
And 2.
SELECT *
FROM (SELECT ROWNUM rnum
,a.*
FROM (SELECT *
FROM members) a)
WHERE rnum BETWEEN startrow# AND endrow#;
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 39000 | 3 (0)| 00:00:01 |
|* 1 | VIEW | | 1000 | 39000 | 3 (0)| 00:00:01 |
| 2 | COUNT | | | | | |
| 3 | TABLE ACCESS FULL| MEMBERS | 1000 | 26000 | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RNUM"<="ENDROW#" AND "RNUM">="STARTROW#")
Out of that I'd say version 2 could be slightly faster as it includes one step less. But I don't know about your indexes and data distribution so it's up to you to get these execution plans yourself and judge the situation for your data. Or simply test it.
I already answered this here, but let me copy-paste.
I just want to summarize the answers and comments. There are a number of ways of doing pagination.
Prior to Oracle 12c there was no OFFSET/FETCH functionality, so take a look at the whitepaper @jasonk suggested. It's the most complete article I have found about the different methods, with detailed explanations of their advantages and disadvantages. It would take a significant amount of time to copy them all here, so I won't do it.
There is also a good article from the jOOQ creators explaining some common caveats of pagination in Oracle and other databases: jOOQ's blog post.
Good news: since Oracle 12c we have the new OFFSET/FETCH functionality. See Oracle Magazine's 12c new features article, in particular the "Top-N Queries and Pagination" section.
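A minimal sketch of the 12c row-limiting syntax (table and bind variable names are illustrative):
-- 12c+ row-limiting clause: skip :startrow rows, return the next :pagesize rows.
SELECT *
FROM members
ORDER BY id
OFFSET :startrow ROWS FETCH NEXT :pagesize ROWS ONLY;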
You can check your Oracle version by issuing the following statement:
SELECT * FROM V$VERSION

Slow Update When Using Oracle PL/SQL Table

We're using a PL/SQL table (named pTable) to collect a number of ids to be updated.
However, the statement
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT COLUMN_VALUE
FROM TABLE (pTable));
takes a long time to execute.
It seems that the optimizer comes up with a very bad execution plan: instead of using the index that is defined on id (as the primary key), it decides to do a full table scan on aTable. pTable usually contains very few values (in most cases just one).
What can we do to make this faster? The best we've come up with is to handle low pTable.Count (1 and 2) as special cases, but that is certainly not very elegant.
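For context, the special-casing referred to above presumably looks something like the following sketch (not the actual application code); it bypasses the TABLE() subquery for tiny collections so the primary-key index is used:
-- pTable is assumed to be a dense nested table of ids, as in the question.
IF pTable.COUNT = 1 THEN
    UPDATE aTable SET aColumn = 1 WHERE id = pTable(1);
ELSIF pTable.COUNT = 2 THEN
    UPDATE aTable SET aColumn = 1 WHERE id IN (pTable(1), pTable(2));
ELSE
    UPDATE aTable SET aColumn = 1
    WHERE id IN (SELECT COLUMN_VALUE FROM TABLE(pTable));
END IF;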
Thanks for all the great suggestions. I wrote about this issue in my blog at http://smartercoding.blogspot.com/2010/01/performance-issues-using-plsql-tables.html.
You can try the cardinality hint. This is good if you know (roughly) the number of rows in the collection.
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT /*+ cardinality( pt 10 ) */
COLUMN_VALUE
FROM TABLE (pTable) pt );
Here's another approach. Create a temporary table:
create global temporary table pTempTable ( id int primary key )
on commit delete rows;
To perform the update, populate pTempTable with the contents of pTable and execute:
update
(
select aColumn
from aTable aa join pTempTable pp on aa.id = pp.id
)
set aColumn = 1;
This should perform reasonably well without resorting to optimizer hints.
The bad execution plan is probably unavoidable (unfortunately). There is no statistics information for the PL/SQL table, so the optimizer has no way of knowing that there are few rows in it. Is it possible to use hints in an UPDATE? If so, you might force use of the index that way.
It helped to tell the optimizer to use the "correct" index instead of going on a wild full-table scan:
UPDATE /*+ INDEX(aTable PK_aTable) */aTable
SET aColumn = 1
WHERE id IN (SELECT COLUMN_VALUE
FROM TABLE (CAST (pdarllist AS list_of_keys)));
I couldn't apply this solution to more complicated scenarios, but found other workarounds for those.
You could try adding a ROWNUM < ... clause.
In this test a ROWNUM < 30 changes the plan to use an index.
Of course that depends on your set of values having a reasonable maximum size.
create table atable (acolumn number, id number);
insert into atable select rownum, rownum from dual connect by level < 150000;
alter table atable add constraint atab_pk primary key (id);
exec dbms_stats.gather_table_stats(ownname => user, tabname => 'ATABLE');
create type type_coll is table of number(4);
/
declare
v_coll type_coll;
begin
v_coll := type_coll(1,2,3,4);
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT COLUMN_VALUE
FROM TABLE (v_coll));
end;
/
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
UPDATE ATABLE SET ACOLUMN = 1 WHERE ID IN (SELECT COLUMN_VALUE FROM TABLE (:B1 ))
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | | | 142 (100)| |
| 1 | UPDATE | ATABLE | | | | |
|* 2 | HASH JOIN RIGHT SEMI | | 1 | 11 | 142 (8)| 00:00:02 |
| 3 | COLLECTION ITERATOR PICKLER FETCH| | | | | |
| 4 | TABLE ACCESS FULL | ATABLE | 150K| 1325K| 108 (6)| 00:00:02 |
----------------------------------------------------------------------------------------------
declare
v_coll type_coll;
begin
v_coll := type_coll(1,2,3,4);
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT COLUMN_VALUE
FROM TABLE (v_coll)
where rownum < 30);
end;
/
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------
UPDATE ATABLE SET ACOLUMN = 1 WHERE ID IN (SELECT COLUMN_VALUE FROM TABLE (:B1 ) WHERE
ROWNUM < 30)
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | | | 31 (100)| |
| 1 | UPDATE | ATABLE | | | | |
| 2 | NESTED LOOPS | | 1 | 22 | 31 (4)| 00:00:01 |
| 3 | VIEW | VW_NSO_1 | 29 | 377 | 29 (0)| 00:00:01 |
| 4 | SORT UNIQUE | | 1 | 58 | | |
|* 5 | COUNT STOPKEY | | | | | |
| 6 | COLLECTION ITERATOR PICKLER FETCH| | | | | |
|* 7 | INDEX UNIQUE SCAN | ATAB_PK | 1 | 9 | 0 (0)| |
---------------------------------------------------------------------------------------------------
I wonder if the MATERIALIZE hint in the subselect from the PL/SQL table would force a temp table instantiation and help the optimizer?
UPDATE aTable
SET aColumn = 1
WHERE id IN (SELECT /*+ MATERIALIZE */ COLUMN_VALUE
FROM TABLE (pTable));
