I have the same database on 2 different Oracle servers, one is 11.2.0.1.0 and the other is 11.2.0.4.0.
I have the same two geometry tables in both databases and run the following query on both servers. When run on the 11.2.0.1.0 version of Oracle, the query runs for a few minutes and I get results; the same query, when run on 11.2.0.4.0, runs for about 3 seconds and returns no results.
The BLPUs table holds 36 million points and the PD_B2 table holds a polygon. I am trying to find all the points that fall in the polygon.
Other spatial queries do return rows, but they take hours and hours, whereas the table join suggested in the Oracle Spatial documentation takes 15 minutes to return all the points.
SELECT /*+ ordered */ a.uprn
FROM TABLE(SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC','mask=ANYINTERACT')) c, blpus a, PD_B2 b
WHERE c.rowid1 = a.rowid
AND c.rowid2 = b.rowid;
The spatial queries below return SDO_ROWIDSET() when run on the 11.2.0.4 server:
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC','mask=ANYINTERACT')
from dual;
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC')
from dual;
On the 11.2.0.1 server they return results.
I have discovered that a much smaller table of points will work on 11.2.0.4, so it seems that there is a size limit on 11.2.0.4 when using SDO_JOIN, whereas 11.2.0.1 seems to cope with the large table.
Does anyone know why this is or if there is an actual limit on table size when using SDO_JOIN?
This is strange. I see no reason why SDO_JOIN will not work the same way in 11.2.0.4. At least I have not seen that sort of behavior before. It looks like a bug to me and I suggest you file a service request with Oracle Support so we can take a look. You may need to provide a dump of the tables - or at least of a small enough subset that demonstrates the problem.
That said, there are a few things to check: did you apply the 11.2.0.4 patch on the same database? I.e., nothing changed in terms of table structures or content, grants, etc.?
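For instance, a quick sanity check on the 11.2.0.4 side could look roughly like this (a minimal sketch, assuming the usual data dictionary views and that the spatial indexes live in your own schema):

-- Did the spatial metadata survive?
SELECT table_name, column_name, srid
FROM user_sdo_geom_metadata
WHERE table_name IN ('BLPUS', 'PD_B2');

-- Are the spatial (domain) indexes still valid?
SELECT index_name, table_name, status, domidx_opstatus
FROM user_indexes
WHERE index_type = 'DOMAIN'
AND table_name IN ('BLPUS', 'PD_B2');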
When you say that the query returns no rows, does it do so immediately? Or does it perform some processing before completing without anything being returned?
How large is the PD_B2 table?
What happens when you do:
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC','mask=ANYINTERACT')
from dual;
Does this also return nothing?
What happens if you do:
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC')
from dual;
Both should return something that looks like this:
SDO_ROWIDSET(SDO_ROWIDPAIR('AAAW3LAAGAAAADmAAu', 'AAAW3TAAGAAAAg7AAC'), SDO_ROWIDPAIR('AAAW3LAAGAAAADmABE', 'AAAW3TAAGAAAAgrAAA'),...)
You will see this if you run the query in sqlplus. [If you use a GUI (like TOAD or SQLDeveloper) then you may not see that. All those GUIs have problems dealing with objects or arrays.]
But the fact that your 11.2.0.4 tests complete very quickly probably means that you get an empty result back, maybe like this:
SDO_ROWIDSET()
and that confirms that something did not work.
Now, from what you say, PD_B2 only contains one row? If so, then there is no reason whatsoever to go via SDO_JOIN. A straightforward spatial query is easier to write. SDO_JOIN only makes sense when both tables being joined contain multiple rows. Then again, if one of the tables is very small (like the PD_B2 table in your case), it falls back to a simple query anyway.
A simple query would look like this:
SELECT a.uprn
FROM blpus a, PD_B2 b
WHERE sdo_anyinteract (a.geoloc, b.geoloc) = 'TRUE';
Does this return what you expect in 11.2.0.4?
You may also want to examine the cost of the query and the query plan used. In sqlplus, do this:
set timing on
set autotrace traceonly
then run the above query. The effect is that sqlplus will not display any results - it will still fetch and format the output, it will just not print it. At the end you will get a printout of the query plan used as well as some execution statistics and the elapsed time. Please add those results to your question.
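In other words, the whole check could look like this (same query as above, just run under autotrace):

set timing on
set autotrace traceonly
SELECT a.uprn
FROM blpus a, PD_B2 b
WHERE sdo_anyinteract (a.geoloc, b.geoloc) = 'TRUE';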
Running the query from within your application should have a similar profile: similar response time and database-side costs - assuming you do fetch all the 1.3 million rows.
BTW, what do you do with those results? Do you output them as a report? Do you save them into a table for later analysis? Surely you do not want to show them all on a map?
To clarify: SDO_JOIN is only for matching many to many geometries. For matching one to many, the simple SDO_ANYINTERACT() is what you need. As a matter of fact, SDO_JOIN will automatically fall back to a simple SDO_ANYINTERACT when one of the sets is very much smaller than the other. I can't see how SDO_JOIN could be faster in your circumstances (i.e. when searching for all objects that match one object) since it will perform the same as an SDO_ANYINTERACT and will require extra joins to the two tables.
Going back to the original issue - the difference in SDO_JOIN behavior between 11.2.0.1 and 11.2.0.4 - that, IMO, is definitely a bug that you need to report to Oracle Support.
Hello community,
We are currently evaluating possibilities for storing generic data structures. We found that, at least from a functional point of view, Oracle XMLType is a good alternative to the good old BLOB, because you can query and update single fields of the XML and also create indexes on XPath expressions.
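For illustration, what we have in mind for reading and updating a single field looks roughly like this (a sketch; the /doc/field path and the ID value are just placeholders, the table and column names are ours):

-- read one field out of the stored document
SELECT XMLCast(XMLQuery('/doc/field/text()' PASSING DOC_VALUE RETURNING CONTENT) AS VARCHAR2(100)) AS FIELD_VALUE
FROM XML_TABLE
WHERE ID = '1';

-- update a single field in place
UPDATE XML_TABLE
SET DOC_VALUE = updateXML(DOC_VALUE, '/doc/field/text()', 'new value')
WHERE ID = '1';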
We are a bit worried about the performance of XMLType, though. The select performance is especially interesting, since we have queries that select multiple data structures at once. These need to be fast.
Such a query looks something like this:
SELECT d.DOC_VALUE.getClobVal() AS XML_VALUE FROM XML_TABLE d WHERE d.ID IN ('1','2',...);
Our XML documents are 7 to 8 KB in size. We are on Oracle 11g and create the XML column with the type XMLTYPE.
Do you have experience with the performance of selects on XMLType columns? What overall experiences do you have with XMLType? Is this a robust and fast Oracle feature, or is it rather something immature and experimental?
Regards, Mathias
XML DB is a strong and reliable feature that we have had since 9i.
Flexible-schema XMLType columns are implemented over hidden CLOB columns, while fixed-schema XMLType columns are decomposed into hidden tables and views; either way, XMLType is as reliable as standard objects.
Performance will vary with your usage, but simply reading the whole XML from an XMLType is as fast as reading the same from a classical CLOB, because it really just reads and hands you the XML code stored in the CLOB as-is.
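If you want the CLOB-backed storage to be explicit, the table definition could look roughly like this (a sketch, reusing the table and column names from your question):

CREATE TABLE XML_TABLE (
  ID        NUMBER PRIMARY KEY,
  DOC_VALUE XMLTYPE
)
XMLTYPE COLUMN DOC_VALUE STORE AS CLOB;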
You can assign an XDB.XMLIndex to an XMLType, and performance will be good after that. We had a table with only 13,000 records, each containing an XML CLOB that wasn't too big, maybe 40 pages each if printed as text; running simple queries without an XMLIndex often took over 10 minutes, and more complex queries often took 2x or 3x longer!
With an XMLIndex, the performance was on par with a typical table (on the order of milliseconds for full table scans). The XMLIndex can be as complex as you need it to be; I tried this naive one and it worked fine (I say naive because I do not fully understand the inner workings of this index type!):
CREATE INDEX myschema.my_idx_name ON myschema.mytable
(SYS_MAKEXML(0,"SYS_SOMETHING$"))
INDEXTYPE IS XDB.XMLINDEX
NOPARALLEL;
You should understand the various XMLIndex options available to you, and choose according to your needs. Documentation:
https://docs.oracle.com/cd/B28359_01/appdev.111/b28369/xdb_indexing.htm#CHDJECDA
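For example, a path-subsetted variant would look roughly like this (a sketch; the column name and the paths are hypothetical and would have to match your own documents - I have not benchmarked this form myself):

CREATE INDEX myschema.my_path_idx ON myschema.mytable (xmlcol)
  INDEXTYPE IS XDB.XMLINDEX
  PARAMETERS ('PATHS (INCLUDE (/purchaseOrder/reference /purchaseOrder/items/item/partNum))');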
I have a number of files containing vast numbers of INSERT statements (which were generated by Toad for Oracle) that I need to run on a PostgreSQL database.
It sounds simple, I know, but there are also Oracle-specific spatial data types in there which are hampering my efforts. I tried a number of tools for this, from SwisSQL to SDO2Shp, to migrate the data, and none have been any help whatsoever, so my only plan left is to write a C# program that opens each file, replaces the Oracle-specific types with types that will work in PostGIS, and then saves the file again. The problem is I have no idea which types I could substitute for the Oracle ones, or the format or syntax I must use.
I am very new to PostgreSQL and PostGIS, and my Oracle knowledge is also limited, as I had previously used SQL Server.
Here is an example of the INSERT statements. They will all have the same format as this one, as the tables are the same but with different data for different zoom levels on a map.
Insert into CLUSTER_1000M
(CLUSTER_ID, CELL_GEOM, CELL_CENTROID)
Values
(4410928,
"MDSYS"."SDO_GEOMETRY"(2003,81989,NULL,
"MDSYS"."SDO_ELEM_INFO_ARRAY"(1,1003,3,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL),
"MDSYS"."SDO_ORDINATE_ARRAY"(80000,106280,81000,107280,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)),
"MDSYS"."SDO_GEOMETRY"(2001,81989,
"MDSYS"."SDO_POINT_TYPE"(80500,106780,NULL),NULL,NULL));
How can I get this into a format that will work with postgis?
I have no idea how the Oracle GIS implementation works, but looking at the data, I don't think conversion will be easy (it might be possible, but the effort might be huge).
Look at the way PostGIS defines geometry:
INSERT INTO geotable ( the_geom, the_name )
VALUES ( ST_GeomFromText('POINT(-126.4 45.32)', 312), 'A Place');
PostGIS follows standards for how to display/store the data and offers methods to assist the developer in doing so. This conversion is mostly done with functions that have *From* in their name. So, to create the proper data for a line, the output looks like the aline column below:
SELECT ST_LineFromWKB(ST_AsBinary(ST_GeomFromText('LINESTRING(1 2, 3 4)'))) AS aline,
ST_LineFromWKB(ST_AsBinary(ST_GeomFromText('POINT(1 2)'))) IS NULL AS null_return;
aline | null_return
------------------------------------------------
010200000002000000000000000000F ... | t
Judging from your example output from Oracle, the format is pretty different and might not be convertible (unless Oracle offers something that is able to stick to the standard).
On the other hand, when looking at the Oracle example
INSERT INTO cola_markets VALUES(
1,
'cola_a',
SDO_GEOMETRY(
2003, -- two-dimensional polygon
NULL,
NULL,
SDO_ELEM_INFO_ARRAY(1,1003,3), -- one rectangle (1003 = exterior)
SDO_ORDINATE_ARRAY(1,1, 5,7) -- only 2 points needed to
-- define rectangle (lower left and upper right) with
-- Cartesian-coordinate data
)
);
you might be able to replace some of the Oracle names with the ones for PostGIS, so SDO_ORDINATE_ARRAY(1,1, 5,7) might turn into something like ST_GeomFromText('LINESTRING(1 1, 5 7)').
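For your concrete example, a hedged attempt at the converted insert might look like this (assuming Oracle SRID 81989, British National Grid, corresponds to EPSG:27700, and dropping the padding NULLs):

INSERT INTO cluster_1000m (cluster_id, cell_geom, cell_centroid)
VALUES (
  4410928,
  -- SDO_ELEM_INFO_ARRAY(1,1003,3) plus two ordinate pairs describe an
  -- axis-aligned rectangle, which ST_MakeEnvelope rebuilds directly
  ST_MakeEnvelope(80000, 106280, 81000, 107280, 27700),
  -- SDO_POINT_TYPE(80500,106780,NULL) is a plain 2D point
  ST_SetSRID(ST_MakePoint(80500, 106780), 27700)
);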
I have two schemas, A and B (Oracle 9). On A there is a dblink to B. On B there is a package that I call from A. The procedures in the B package can return a varying number of results, and I think that returning a collection is the best way to handle this.
create type B.tr_rad as object (
name varchar2(64)
,code number
,vendor number
,val varchar2(255)
,num number
);
create type B.tt_rad as varray(256) of B.tr_rad;
But from schema A I cannot use the tt_rad type, because using SQL types over a dblink is not supported. DBMS_SQL does not support cursors. Creating types with the same OID is impossible.
I am thinking of using temporary tables. But, firstly, it is not very clean (after the remote function returns, the calling side must select the collection from the remote table), and there are fears of a slowdown when working with temporary tables.
Does anyone know of an alternative way to handle this interaction?
I've had similar problems in the past. I came to the conclusion that, fundamentally, Oracle's DB links are "broken" for anything but simple SQL types (especially UDTs; CLOBs may have problems, and XMLType may as well). If you can get the OID solution working, then good luck to you.
The solution I resorted to was to use a Java Stored procedure, instead of the DB Link.
Characteristics of the Java Stored Procedure:
It can return a "rich set of types" - just about all of the complex types (UDTs, tables/arrays/varrays); see the Oracle online documentation for details. Oracle does a much better job of marshalling complex (or rich) types from Java than over a DB link.
Stored Java can acquire the "default connection" (it runs in the same session as the SQL connection to the DB - no authentication issues).
The stored Java calls the PL/SQL proc on the remote DB, and the Java JDBC layer does the marshalling from the remote DB.
The stored Java then packages up the result and returns it to the SQL or PL/SQL layer.
It's a bit of work, but if you know a bit of Java, you should be able to "cut and paste" a solution together from the Oracle documentation and samples.
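To give an idea, the SQL side of such a wrapper can be just a call spec (a sketch; the Java class RemoteRadFetcher is hypothetical, and tt_rad here would be a local copy of the collection type - the Java code itself opens the JDBC connection to B, calls the package, and maps the rows into the collection):

CREATE OR REPLACE FUNCTION get_remote_rad RETURN tt_rad
AS LANGUAGE JAVA
NAME 'RemoteRadFetcher.fetch() return oracle.sql.ARRAY';
/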
I hope this helps.
See this existing discussion
referencing oracle user defined types over dblink
An alternative interaction is to have one database with schemas A and B instead of two databases with a database link.
My solution:
On the B side I create a temporary table shaped like the collection record. On the A side I have a DBMS_SQL wrapper that calls the procedure over the dblink. The procedure writes the result collection into the temporary table. After the remote procedure completes successfully, I select the results from the remote temporary table and transform them into the local collection type.
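A rough sketch of the moving parts (names are hypothetical, and the DBMS_SQL wrapper is reduced to a direct remote call for brevity):

-- On B: a global temporary table shaped like the collection record
CREATE GLOBAL TEMPORARY TABLE tmp_rad (
  name   VARCHAR2(64),
  code   NUMBER,
  vendor NUMBER,
  val    VARCHAR2(255),
  num    NUMBER
) ON COMMIT PRESERVE ROWS;

-- On A: call the remote procedure (which fills tmp_rad on B), then read the
-- plain rows back over the dblink and rebuild a local collection
DECLARE
  l_result tt_rad_local := tt_rad_local();   -- local copy of the varray type
BEGIN
  b_pkg.fill_tmp_rad@b_link(p_param => 1);
  FOR r IN (SELECT name, code, vendor, val, num FROM tmp_rad@b_link) LOOP
    l_result.EXTEND;
    l_result(l_result.LAST) := tr_rad_local(r.name, r.code, r.vendor, r.val, r.num);
  END LOOP;
END;
/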
Limitations:
1. The need to keep the objects permanently synchronized.
2. The A-side procedure (the one that calls the remote procedure) cannot be used in a SQL query.
3. The overall complexity of use.