Collections: Nested Tables - data not displaying - Oracle

I am working with Collections Nested Tables in SQL Developer. When I select from a table that contains a nested table column, the grid shows only the collection container, not the data inside it.
Output:
COLLECTION_USER.T_NESTED_TABLE_ADDRESS([COLLECTION_USER.T_ADDRESS],[COLLECTION_USER.T_ADDRESS])
When I export the results, however, the export shows everything correctly.
Output:
COLLECTION_USER.T_NESTED_TABLE_ADDRESS(COLLECTION_USER.T_ADDRESS('2 State Street','Beantown','MA','12345'),COLLECTION_USER.T_ADDRESS('4 Hill Street','Lost Town','CA','54321'))
Has anyone else seen this? I suspect I just need to upgrade SQL Developer, but I wanted to ask first.

From an online example, since you didn't provide your source code/SQL:
CREATE OR REPLACE TYPE MY_TAB_T AS
  TABLE OF VARCHAR2(30);
/
CREATE TABLE NESTED_TABLE (
  ID   NUMBER,
  COL1 MY_TAB_T
)
NESTED TABLE COL1 STORE AS COL1_TAB;

INSERT INTO NESTED_TABLE VALUES (1, MY_TAB_T('A'));
INSERT INTO NESTED_TABLE VALUES (2, MY_TAB_T('B', 'C'));
INSERT INTO NESTED_TABLE VALUES (3, MY_TAB_T('D', 'E', 'F'));
COMMIT;
select * from nested_table;
If you double-click on the nested table column, you'll see a pencil icon; clicking that opens the collection so you can see its contents.
This is using version 19.4, but I'm not aware of any bugs in this area in the recent past, although my memory is usually suspect at best.
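If you would rather see the elements as ordinary rows in the query grid instead of opening them one at a time, you can also unnest the collection in SQL; a minimal sketch against the example table above:
-- Unnest the collection: one row per element, alongside the parent ID
SELECT nt.id,
       ct.column_value AS col1_value
FROM   nested_table nt,
       TABLE(nt.col1) ct
ORDER BY nt.id, ct.column_value;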

Related

Oracle access varray elements in SQL

I'm playing around with array support in Oracle and hit a roadblock regarding array access within a SQL query. I'm using the following schema:
create type smallintarray as varray(10) of number(3,0);
create table tbl (
id number(19,0) not null,
the_array smallintarray,
primary key (id)
);
What I would like to do is get the id and the first element of the array, i.e. the one at index 1. In PostgreSQL I could write select id, the_array[1] from tbl t, but I don't see how I could do that with Oracle. I read that array access by index is only possible in PL/SQL, which would be fine if I could return a "decorated cursor" to achieve the same result through JDBC, but I don't know if that's possible.
DECLARE
  c1   SYS_REFCURSOR;
  varr smallintarray;  -- matches the type declared above
BEGIN
  OPEN c1 FOR SELECT t.id, t.THE_ARRAY FROM tbl t;
  -- SELECT t.THE_ARRAY INTO varr FROM table_with_enum_arrays2 t;
  -- return a "decorated cursor" with varr(1) at select item position 1
  dbms_sql.return_result(c1);
END;
/
You can do this in plain SQL; it's not pretty, but it does work. One might wish Oracle had syntax to hide this from the programmer (and perhaps it does, at least in the most recent versions; I am still stuck at 12.2).
select t.id, q.array_element
from   tbl t
       cross apply (
         select column_value as array_element,
                rownum       as ord
         from   table(the_array)
       ) q
where  ord = 1
;
EDIT: If the order in which the table operator generates the elements is a concern, you could do something like this (in Oracle 12.1 and higher; in earlier versions the function can't be part of the query itself, but it can be defined on its own):
with
  function select_element(arr smallintarray, i integer)
  return number
  as
  begin
    return arr(i);
  end;
select id, select_element(the_array, 1) as the_array_1
from tbl
/
First of all, please don't do that in production: use separate tables instead of storing arrays within a table.
The answer to your question is to use the column as a table source:
SELECT t.id, ta.*
FROM   tbl t,
       TABLE(t.THE_ARRAY) ta
ORDER BY column_value
-- OFFSET 1 ROW   -- in case you ever need to skip rows
FETCH FIRST 1 ROW ONLY;
UPD: as for ordering the array, I can only say that playing with the "asc/desc" parameters gave me the results I expected - the output was ordered ascending or descending accordingly.
UPD2: I found a useful link describing performance issues that might occur.

Oracle CLOB column and LAG

I'm facing a problem when I try to use the LAG function on a CLOB column.
So let's assume we have a table:
create table test (
  id           number primary key,
  not_clob     varchar2(255),
  this_is_clob clob
);
insert into test values (1, 'test1', to_clob('clob1'));
insert into test values (2, 'test2', to_clob('clob2'));
DECLARE
  x CLOB := 'C';
BEGIN
  FOR i IN 1..32767 LOOP
    x := x || 'C';
  END LOOP;
  INSERT INTO test(id, not_clob, this_is_clob) VALUES (3, 'test3', x);
END;
/
commit;
Now let's do a select using non-clob columns
select id, lag(not_clob) over (order by id) from test;
It works fine as expected, but when I try the same with clob column
select id, lag(this_is_clob) over (order by id) from test;
I get
ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 16
Can you tell me what the solution to this problem is? I couldn't find anything on it.
The documentation says the argument for any analytic function can be any datatype but it seems unrestricted CLOB is not supported.
However, there is a workaround:
select id, lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id)
from test;
This is not the whole CLOB but 4k should be good enough in many cases.
I'm still wondering what the proper way to overcome the problem is.
Is upgrading to 12c an option? The problem has nothing to do with CLOBs as such; it's the fact that Oracle has a hard 4000-character limit for strings in SQL. In 12c we have the option to use extended data types (provided we can persuade our DBAs to turn it on!).
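A quick way to check whether that option is already on; a minimal sketch (reading v$parameter needs the appropriate privilege, and switching MAX_STRING_SIZE to EXTENDED is a one-way change that needs a DBA and downtime):
-- Check whether extended data types are enabled (Oracle 12c and later).
-- With MAX_STRING_SIZE = EXTENDED, VARCHAR2 in SQL can hold up to 32767 bytes
-- instead of 4000, so a longer CLOB prefix survives the conversion.
SELECT value
FROM   v$parameter
WHERE  name = 'max_string_size';   -- STANDARD (4000-byte limit) or EXTENDED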
Some SQL features may not work properly with CLOBs (like DISTINCT, ORDER BY, GROUP BY, etc.). It looks like LAG is also one of them, but I couldn't find that anywhere in the docs.
If the values in your CLOB column are always less than 4000 characters, you may use TO_CHAR:
select id, lag( TO_CHAR(this_is_clob)) over (order by id) from test;
OR
convert it into an equivalent self join (which may not be as efficient as LAG):
SELECT a.id,
       b.this_is_clob AS lagging
FROM   test a
LEFT JOIN test b
       ON b.id = (SELECT MAX(id) FROM test WHERE id < a.id);  -- closest preceding id, mimicking LAG
I know this is an old question, but I think I found an answer which eliminates the need to restrict the CLOB length and wanted to share it. Using a CTE and a recursive subquery, we can replicate the LAG functionality with CLOB columns.
First, let's take a look at my "original" query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
)
SELECT tt.order_by_col,
tt.clob_col,
LAG(tt.clob_col) OVER (ORDER BY tt.order_by_col)
FROM test_table tt;
As expected, I get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Now, let's look at the modified query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
),
initial_pull AS
(
SELECT tt.order_by_col,
LAG(tt.order_by_col) OVER (ORDER BY tt.order_by_col) AS PREV_ROW,
tt.clob_col
FROM test_table tt
),
recursive_subquery (order_by_col, prev_row, clob_col, prev_clob_col) AS
(
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, NULL
FROM initial_pull ip
WHERE ip.prev_row IS NULL
UNION ALL
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, rs.clob_col
FROM initial_pull ip
INNER JOIN recursive_subquery rs ON ip.prev_row = rs.order_by_col
)
SELECT rs.order_by_col, rs.clob_col, rs.prev_clob_col
FROM recursive_subquery rs;
So here is how it works.
I create the TEST_TABLE; this is really only for the example, as you should already have this table somewhere in your schema.
I create a CTE of the data I want to pull, plus a LAG on the primary key (or a unique column), partitioned and ordered the same way as in my original query.
Finally, I create a recursive subquery that uses the initial row as the root and descends row by row by joining on the lagged column, returning both the CLOB column from the current row and the CLOB column from its parent row.

Oracle PL/SQL Use Merge command on data from XML Table

I have a PL/SQL procedure that currently gets data from an XML service and only does inserts.
xml_data := xmltype(GET_XML_F('http://test.example.com/mywebservice'));
--GET_XML_F gets the XML text from the site
INSERT INTO TEST_READINGS (TEST, READING_DATE, CREATE_DATE, LOCATION_ID)
SELECT round(avg(readings.reading_val), 2),
       to_date(substr(readings.reading_dt, 1, 10), 'YYYY-MM-DD'),
       SYSDATE,
       p_location_id
FROM XMLTable(
       XMLNamespaces('http://www.example.com' as "ns1"),
       '/ns1:test1/ns1:series1/ns1:values1/ns1:value'
       PASSING xml_data
       COLUMNS reading_val VARCHAR2(50) PATH '.',
               reading_dt  VARCHAR2(50) PATH '#dateTime') readings
GROUP BY substr(readings.reading_dt, 1, 10), p_location_id;
I would like to be able to insert or update the data using a merge statement in the event that it needs to be re-run on the same day to find added records. I'm doing this in other procedures using the code below.
MERGE INTO TEST_READINGS USING DUAL
ON (LOCATION_ID = p_location_id AND READING_DATE = p_date)
WHEN NOT MATCHED THEN INSERT
(TEST_reading_id, site_id, test, reading_date, create_date)
VALUES (TEST_readings_seq.nextval, p_location_id,
p_value, p_date, SYSDATE)
WHEN MATCHED THEN UPDATE
SET TEST = p_value;
The fact that I'm pulling it from an XMLTable is throwing me off. Is there a way to get the data from the XMLTable while still using the (much cleaner) MERGE syntax? I could just delete the data beforehand and re-import, or use lots of conditional statements, but I would like to avoid doing so if possible.
Can't you simply put your SELECT into the MERGE statement?
I believe this should look more or less like this:
MERGE INTO TEST_READINGS USING (
SELECT
ROUND(AVG(readings.reading_val), 2) AS test
,TO_DATE(SUBSTR(readings.reading_dt, 1, 10),'YYYY-MM-DD') AS reading_date
,SYSDATE AS create_date
,p_location_id AS location_id
FROM
XMLTable(
XMLNamespaces('http://www.example.com' as "ns1")
,'/ns1:test1/ns1:series1/ns1:values1/ns1:value'
PASSING xml_data
COLUMNS
reading_val VARCHAR2(50) PATH '.',
reading_dt VARCHAR2(50) PATH '#dateTime'
) readings
GROUP BY
SUBSTR(readings.reading_dt,1,10)
,p_location_id
) readings ON (
  TEST_READINGS.LOCATION_ID = readings.location_id
  AND TEST_READINGS.READING_DATE = readings.reading_date
)
WHEN NOT MATCHED THEN
...
WHEN MATCHED THEN
...
;
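For completeness, the two WHEN branches might be filled in along the lines of the MERGE you already use; this is only a sketch - the column list and the TEST_readings_seq sequence are copied from that earlier statement, and the location_id vs. site_id naming may need adjusting to your real table:
WHEN NOT MATCHED THEN INSERT
  (test_reading_id, location_id, test, reading_date, create_date)  -- assumed column names
  VALUES (TEST_readings_seq.NEXTVAL, readings.location_id,
          readings.test, readings.reading_date, readings.create_date)
WHEN MATCHED THEN UPDATE
  SET test = readings.test;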

Oracle: What is the data type of ORA_ROWSCN?

What is the data type of ORA_ROWSCN? It seems to be NUMBER but I cannot find it specified in the documentation.
declare
myscn ???;
begin
select ora_rowscn into myscn from t where ...;
end;
It is a NUMBER. You can see for yourself by creating a simple view and describing it (or selecting from *_tab_columns). Here's a simple demonstration:
create table foo (
col1 number
);
create view vw_foo
as
select col1, ora_rowscn scn
from foo;
select *
from user_tab_cols
where table_name = 'VW_FOO';
If you want more detail than you'd probably ever care about on the format of the SCN (system change number), there is a decent article on it.
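Applied to the snippet in the question, a plain NUMBER variable is all that's needed; a minimal sketch (the table t and the id = 1 filter are hypothetical stand-ins for the question's elided predicate):
declare
  myscn number;   -- a plain NUMBER holds ORA_ROWSCN
begin
  select ora_rowscn
    into myscn
    from t
   where id = 1;  -- hypothetical filter
  dbms_output.put_line(myscn);
end;
/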

can not update user_sdo_geom_metadata when creating spatial database in oracle

I was using Oracle 11g and trying to create a spatial database; I just copied the sample code from the Oracle documentation.
But when it came to the part that updates the metadata, it gave a duplicate entry error. I tried DELETE FROM user_sdo_geom_metadata and it did not give any error, then tried to insert again and still got the duplicate entry error. I also tried SELECT * FROM user_sdo_geom_metadata but got nothing.
Does anyone have an idea why? Thanks.
The sample code:
CREATE TABLE cola_markets (
mkt_id NUMBER PRIMARY KEY,
name VARCHAR2(32),
shape SDO_GEOMETRY);
INSERT INTO cola_markets VALUES(
1,
'cola_a',
SDO_GEOMETRY(
2003, -- two-dimensional polygon
NULL,
NULL,
SDO_ELEM_INFO_ARRAY(1,1003,3), -- one rectangle (1003 = exterior)
SDO_ORDINATE_ARRAY(1,1, 5,7) -- only 2 points needed to
-- define rectangle (lower left and upper right) with
-- Cartesian-coordinate data
));
INSERT INTO user_sdo_geom_metadata
(TABLE_NAME,
COLUMN_NAME,
DIMINFO,
SRID)
VALUES (
'cola_markets',
'shape',
SDO_DIM_ARRAY( -- 20X20 grid
SDO_DIM_ELEMENT('X', 0, 20, 0.005),
SDO_DIM_ELEMENT('Y', 0, 20, 0.005)
),
NULL -- SRID
);
Error cause: there are duplicate entries for the given table and column name pair in the user_sdo_geom_metadata view.
Just to be sure, what does the following show?
SELECT * FROM USER_SDO_GEOM_METADATA;
Object names in dictionary views are stored in uppercase. So if you want to remove existing entries from the spatial metadata, you need to do this (notice the upper-case table name):
DELETE FROM USER_SDO_GEOM_METADATA WHERE TABLE_NAME = 'COLA_MARKETS';
COMMIT;
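After clearing the stale entry, the metadata from the sample code can be re-inserted; a sketch with the table and column names in upper case to match how the dictionary records them:
-- Re-register the geometry metadata using upper-case names
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES (
  'COLA_MARKETS',
  'SHAPE',
  SDO_DIM_ARRAY(                        -- 20x20 grid, as in the sample code
    SDO_DIM_ELEMENT('X', 0, 20, 0.005),
    SDO_DIM_ELEMENT('Y', 0, 20, 0.005)
  ),
  NULL                                  -- SRID
);
COMMIT;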
