We have an Oracle Enterprise Edition 10 installation and, as it's been explained to me by our DBAs, Oracle Enterprise installs include all extensions and you're simply licensed by what you use.
We've discovered we're using Oracle Spatial but we don't want to be. I can confirm for myself that it's being used with this SQL:
select * from dba_feature_usage_statistics;
Unfortunately that's all I can find out. We have a large number of applications which use Spatial elements, but when we asked all of our vendors, they assured us their apps are using Oracle Locator (which is the free subset of Spatial).
So my question is simple - how do I discover exactly which app is using the Oracle Spatial extension?
Alternatively (brought to light by ik_zelf's answer), how do I prove I'm only using the Locator subset of Spatial?
Check the sdo metadata:
select * from mdsys.sdo_geom_metadata_table where sdo_owner not in ('MDSYS', 'OE')
When you dig a little deeper into dba_feature_usage_statistics you will find this query used as part of the determination of what is being used and what is not. The schemas MDSYS and OE are not counted, even when they have SDO objects.
There is a list of functionality that is part of Oracle Spatial vs. Oracle Locator on the Oracle website: http://docs.oracle.com/cd/B19306_01/appdev.102/b14255/sdo_locator.htm#SPATL340 - specifically pay attention to the section that lists things only available in Oracle Spatial.
The short story is that (basically) the following things are off the table for Locator:
Topology
Network data model
GeoRaster
Geocoding
In-built data mining functions
Linear referencing
Some spatial aggregation functionality
Some parts of the sdo_geom package
Storage, indexing, partitioning, sdo_util package, coordinate transformations and more are all fully within Locator. I would simply check the dba_source view for any stored procedures that use any of the prohibited functions.
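For example, a minimal sketch of such a check against dba_source - the list of package prefixes here is illustrative, not exhaustive:
select owner, name, type, line, text
from   dba_source
where  owner not in ('MDSYS', 'OE')
and    (   upper(text) like '%SDO_GCDR%'   -- geocoding
        or upper(text) like '%SDO_TOPO%'   -- topology
        or upper(text) like '%SDO_NET%'    -- network data model
        or upper(text) like '%SDO_GEOR%'   -- GeoRaster
        or upper(text) like '%SDO_LRS%');  -- linear referencing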
For code outside of the database, I guess you have to take someone's word for it, but in my experience external applications tend to use their own methods rather than Oracle in-built features.
We are facing an issue in our project: data verification.
The project is about replication of data from Sybase to Oracle DBs.
The table structure for Table A is the same across Sybase and Oracle.
Same column and primary key combination across all the databases.
e.g. If Sybase has Table A with columns a, b and C
the same table with the same name and same columns will be available in the different databases.
We are done with the replication part, but we have faced some silent failures such as data discrepancies. I am wondering if there is any tool already available for this.
Any information on this would be helpful. Thanks.
Sybase (now SAP) has a couple products that can be used for data comparisons and reconciliation:
rs_subcmp - an older, 32-bit tool that comes with the Sybase Replication Server product that can be used to compare data between source and target; SQL reconciliation scripts can be generated from the differences and then applied to the target to bring it in sync with the source; if your tables are more than 1GB in size you can still use rs_subcmp but you'll need to create multiple comparison jobs (via where clauses) to work on different subsets of your tables [I don't recall if rs_subcmp can be used for heterogeneous replication setups, eg, ASE-Oracle.]
Data Assurance (DA) - the newer, 64-bit product ... also from Sybase ... which can also compare data and (re)sync the target(s) from the source (either via SQL reconciliation scripts or directly); DA is capable of handling comparisons between a handful of different RDBMS products (eg, ASE-Oracle); I'm currently working on a project where one of the requirements is to validate (and reconcile where needed) 200+TB of data being migrated from Oracle to HANA, and I'm using DA for the validation/reconciliation portion of the project
As #TenG has hinted at with his answer, there's a good bit of effort involved to compare data and generate code to reconcile the differences. Rolling your own code is doable but will entail a lot of work. If you've got the money you'll likely find 3rd party tools can get most/all of the work done for you.
If you used a 3rd party product to replicate your data from Sybase to Oracle, you may want to see if the same vendor has a comparison/validation/reconciliation tool you could use.
I've worked on a few migration projects and a key part has always been data reconciliation.
I can only talk about the approaches we took, based on constraints around the tools available, minimising downtime, and the space available.
In all cases I took to writing scripts that worked on two levels - a summary view and a "deep dive". We couldn't find any tools readily available that did what we wanted in a timely enough manner. In fact even the migration tools we found had limitations (Data Pump, SQL*Loader, GoldenGate, etc.), and we hand-coded scripts to handle the bits that we found to be lacking or too slow in the standard tools.
The summary view varied from project to project. It was part functional (do the accounting figures for transactions match?), for the users to verify, and part technical. For smaller tables we could just write simple reports and the diff was straightforward.
For larger tables we wrote technical reports that looked at bands of data (e.g. grouping the PK into bands of 1000), collected all the column data, and produced a checksum, generating a report for each table like:
PK ID Range Start Checksum
----------------- -----------
100000 22773377829
200000 38938938282
.
.
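A minimal sketch of the kind of banded checksum query that could feed such a report - the table name mytable, the numeric key pk_id, the data columns, and the use of Oracle's ORA_HASH function are all illustrative assumptions:
-- one row per PK band; summing the per-row hashes makes the band
-- checksum independent of row order
select trunc(pk_id / 100000) * 100000 as pk_range_start,
       sum(ora_hash(col_a || '|' || col_b || '|' || col_c)) as checksum
from   mytable
group  by trunc(pk_id / 100000)
order  by pk_range_start;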
Corresponding table pairs from each database were then "diff"ed against each other to highlight discrepancies. Any differences that were found could then be looked at in more detail.
The scripts were written in such a way as to allow them to run in parallel, each looking at discrete bands. The band ranges were tunable as well to get the best throughput. This obviously sped things up.
The scripts were shell scripts firing off SQL*Plus reports, with similar scripts for the source database.
On one project there wasn't enough disk space to do these reports, so I wrote a Java program that queried the two databases side by side, using blocking queues to fetch and compare rowsets. Being in memory meant this was super fast.
For the "deep dive" we looked at the details for key tables, or for tables that reported a checksum difference.
For the user reports, the users would specify what they wanted to see, and we wrote the reports accordingly.
On the last project, the only discrepancies found were caused by character set conversion issues (people names with accents weren't handled correctly).
On projects where the overall dataset was smaller we extracted the data to XML files and wrote a Java tool to process pairs of files and report differences.
The SAP/Sybase rs_subcmp tool is pretty powerful and also pretty hard to use. For details see:
https://help.sap.com/viewer/075940003f1549159206fcc89d020515/16.0.3.3/en-US/feb58db1bd1c1014b134ef4efef25563.html?q=rs_subcmp
You have to pass it key field information, but once you do that, it can retry/restart the compare streams after transient differences. Pretty fancy.
rs_subcmp expects to work on Sybase data sources. So to compare against Oracle, you'd probably have to set up one of those Sybase-to-Oracle gateway products ($$$$$).
Could you install the Oracle ODBC drivers and configure them to allow Sybase clients to access Oracle? I'm guessing not (but that's outside the range of my experience).
Note the "-h" option for rs_subcmp. The docs just say it runs a "fast comparison", but what it's actually doing is running queries using the hashbytes() function. Something like:
select keyfield1, keyfield2, hashbytes("Md5", datacol1, datacol2, datacol3)
from mytable
So this sort of query might be good for the "summary view" type comparison discussed above, if the Oracle STANDARD_HASH() function output matches up with the Sybase hashbytes() function (again, outside my experience).
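For what it's worth, a sketch of what the Oracle side of that comparison might look like - this assumes a 12c or later database where STANDARD_HASH() is available, and the table and column names are hypothetical. Note that STANDARD_HASH() takes a single expression, so the columns have to be concatenated, which by itself may make the output differ from hashbytes():
-- the delimiter guards against values colliding across column boundaries
select keyfield1, keyfield2,
       standard_hash(datacol1 || '|' || datacol2 || '|' || datacol3, 'MD5')
from   mytable;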
Note: as of ASE 16, there was a bug with the hash() & hashbytes() functions running the Md5 hash option against large varbinary columns, where they could use up all procedure cache, potentially crashing the server (CR 811073).
We are planning a new system for a client in ORACLE 11g. I've been mostly in the Sql Server world for several years, and am not really current on the latest ORACLE updates.
One particular feature I'm wondering if ORACLE has added in by this point is some sort of logical "container" for database objects, akin to Sql Server's SCHEMA.
Trying to use ORACLE's schemas like Sql Server winds up being a disaster for code comparisons when trying to push from dev > test > live.
Packages are sort of similar, except that you can't put tables into a package (so they really only work for logical code grouping).
The only other option I am aware of is the archaic practice of having to prefix object names with a "schema" prefix, i.e. RPT_REPORTS, RPT_PARAMETERS, RPT_LOGS, RPT_USERS, RPT_RUN_REPORT(), with the prefix RPT_ denoting that these are all the objects dealing with our reporting engine say. Writing a system like this feels like we never left the 8.3 file-naming age.
Is there by this point in time any cleaner, more direct way of logically grouping related objects together in ORACLE?
Oracle's logical container for database objects IS the schema. I don't know how much "cleaner" and "more direct" you can get! You are going to have to do a paradigm shift here. Don't try to think in SQL Server terms, and force a solution that looks like SQL Server on Oracle. Get familiar with what Oracle does and approach your problems from that perspective. There should be no problem pushing from dev to test to production in Oracle if you know what you're doing.
It seems you have a bit of a chip on your shoulder about Oracle when you use terms like "archaic practice". I would suggest you make friends with Oracle's very rich and powerful feature set by doing some reading, since you're apparently already committed to Oracle for this project. In particular, pick up a copy of "Effective Oracle By Design" by Tom Kyte. Once you've read that, have a look at "Expert Oracle Database Architecture" by the same author for a more in-depth look at how Oracle works. You owe it to your customer to know how to use the tool you've been handed. Who knows? You might even start to like it. Think of it as another tool in your toolchest. You're not married to SQL Server and you're not being unfaithful by using Oracle ;-)
EDIT:
In response to questions by OP:
I'm not sure why that is a logistical problem. They can be thought of as separate databases, but physically they are not. And no, you do not need a separate data file for each schema. A single datafile is often used for all schemas.
If you want a "nice, self-contained database" ala SQL Server, just create one schema to store all your objects. End of problem. You can create other users/schemas, just don't give them the ability to create objects.
There are tools to compare objects and data, such as the compare feature in PL/SQL Developer. Typically in Oracle you want to compare schemas, not entire databases. I'm not sure why you want to have multiple schemas, each with their own objects, anyway. What does it buy you to do that? Keep your objects (tables, triggers, code, views, etc.) in one schema.
Most forums cite minor differences in speed, backup, etc.
It's about time someone told us how the two differ when it comes to GUI data validation.
Do these two databases always depend on Java (or other software), or do they have the ability to create a user interface that accepts only valid input? Things like: positive numbers only, age between 1 and 100 only, email must be correct. I would be scared if my software accepted 500 years old for age.
Both offer native development tools that are roughly comparable and able to do what you ask about.
Both also offer the ability for all main languages to interact with the RDBMS, and so the ability to do the type of thing you discuss is as diverse as the options of a range of languages including Java, .Net, Ruby, Python, C++, VB, etc.
However what they don't really offer is a simple Access-type 'forms and tables' RAD tool. In simple terms, the increased flexibility and power of both Oracle and DB2 come at the price of simplicity.
Neither database DEPENDS on Java for implementing field level constraints. Data constraints can be implemented directly at the database level, and it is good practice to do so.
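For example, a minimal sketch of the age and email rules from the question as Oracle declarative check constraints - the table and column names are made up, and the email regexp is deliberately simple (DB2 supports CHECK constraints too, though regexp function support differs by version):
create table person (
  person_id number primary key,
  age       number
            constraint person_age_ck check (age between 1 and 100),
  email     varchar2(320)
            constraint person_email_ck
            check (regexp_like(email, '^[^@]+@[^@]+\.[^@]+$'))
);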
But you also need field level validation - users do not want to get constraint violation errors on insert.
As for tools that generate GUI applications from the database itself - I don't see that as an Oracle vs DB2 database question - it's more Oracle Apex vs IBM's equivalent - but even within Oracle you've got Forms (deceased), JDeveloper, and Apex.
Please help me connect Spatial to Oracle 10g XE.
Where can I find fun things to do with Oracle Spatial?
First, bookmark the reference docs; these are invaluable when you are writing spatial queries. If you want it to be fast (particularly with joins), make sure you use SDO_RELATE (it was on the order of 100-1000x faster than SDO_GEOM.RELATE for spanning linestrings in my case).
Second, download SQL Developer (don't download 1.x, you will hate it); it will list the geometry column in its component parts instead of barfing at it like most utilities I have tried, so you don't need to spend time looking at the raw points, etc. When creating a geometry column, select the complex option, go to the MDSYS schema, then scroll to SDO_GEOMETRY. When creating indexes for the spatial column, click domain -> MDSYS and you should have the RTREE and SPATIAL types. SDO_GEOMETRY also works as a type, so you can pass it to and from procedures and functions within Oracle.
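If you prefer to set things up in SQL rather than through the GUI, here is a minimal sketch - the table name, column names, coordinate bounds, tolerance, and SRID are illustrative assumptions:
create table poi (
  poi_id number primary key,
  shape  mdsys.sdo_geometry
);
-- Oracle requires a metadata row before the spatial index can be built
insert into user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
values ('POI', 'SHAPE',
        mdsys.sdo_dim_array(
          mdsys.sdo_dim_element('X', -180, 180, 0.005),
          mdsys.sdo_dim_element('Y', -90, 90, 0.005)),
        4326);
create index poi_shape_idx on poi (shape)
  indextype is mdsys.spatial_index;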
As for what to do, that is up to you. I have done everything from making a service availability tool to writing utilities to span over linestrings with stop conditions. In most cases it will depend on your target field as to what is considered nifty and shiny.
Perhaps this might be of help:
Build a Google Earth Interface on Oracle Database XE
I am developing enterprise software for a big company using Oracle. The major processing unit is planned to be developed in PL/SQL. I wonder whether there is any ORM like Hibernate for Java, but for PL/SQL. I have some ideas for how to make such a framework using PL/SQL and Oracle system tables, but it is interesting - why has no one done this before? Do you think it would be effective in speed and memory consumption? Why?
ORMs exist to provide an interface between a database-agnostic language like Java and a DBMS like Oracle. PL/SQL in contrast knows the Oracle DBMS intimately and is designed to work with it (and a lot more efficiently than Java + ORM can). So an ORM between PL/SQL and the Oracle DBMS would be both superfluous and unhelpful!
Take a read through these two articles - they contain some interesting points
Ask Tom - Relational VS Object Oriented Database Design
Ask Tom - Object relational impedance mismatch
As Tony pointed out, ORMs really serve as a helper between the app and DB context boundaries.
If you are looking for an additional level of abstraction at the database layer you might want to look into table encapsulation. This was a big trend back in the early 2000s. If you search you will find a ton of whitepapers on this subject.
Plsqlintgen still seems to be around at http://sourceforge.net/projects/plsqlintgen/
This answer has some relevant thoughts on the pros and cons of wrapping your tables in pl/sql TAPIs (Table APIs) for CRUD operations.
Understanding the differences between Table and Transaction API's
There was also a good panel discussion on this at last year's UK Oracle User Group - the overall conclusion was against table APIs and for transaction APIs, for much the same reason: the strength of pl/sql is the procedural control of SQL statements, while TAPIs push you away from writing set-based SQL operations and towards row-by-row processing.
The argument for TAPIs is where you may want to enforce some kind of access policy, but Oracle offers a lot of other ways to do this (fine-grained access control, constraints, and triggers on insert/update/etc., which can be used to populate defaults and enforce that the calling code is passing a valid request).
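For instance, a minimal sketch of a trigger populating a default without a TAPI - the table and column names are hypothetical:
create or replace trigger orders_bi
before insert on orders
for each row
begin
  -- fill in a default so callers don't have to supply it
  if :new.created_at is null then
    :new.created_at := sysdate;
  end if;
end;
/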
I would definitely advise against wrapping tables in PL/SQL object types.
A lot of the productivity with pl/sql comes from the fact that you can easily define things in terms of the underlying database structure - a row record type can simply be declared as %ROWTYPE, and it automatically picks up changes when the table structure changes:
myRec myTable%ROWTYPE;             -- record declared from the table definition
INSERT INTO myTable VALUES myRec;  -- whole-record insert, no column list needed
This also applies to collections based over these types, and there are powerful bulk operations that can be used to fetch & insert whole collections.
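A sketch of those bulk operations - the table names myTable and myTable_archive are assumptions for illustration:
DECLARE
  TYPE t_rows IS TABLE OF myTable%ROWTYPE;
  l_rows t_rows;
BEGIN
  -- fetch the whole result set into a collection in one pass
  SELECT * BULK COLLECT INTO l_rows FROM myTable;
  -- insert the whole collection in one bulk statement
  FORALL i IN 1 .. l_rows.COUNT
    INSERT INTO myTable_archive VALUES l_rows(i);
END;
/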
On the other hand, object types must be explicitly altered each time you want to change them - every table change would require the object type to be altered and released again, doubling your work.
It can also be difficult to release changes if you are using inheritance and collections of types (you can 'replace' a package, but cannot replace a type once it is used by another type).
This isn't putting OO PL/SQL down - there are places where it definitely simplifies code (i.e. avoiding code duplication, anywhere you would clearly benefit from polymorphism) - but it is best to understand and play to the strengths of the language, and the main strength is that the language is tightly-coupled to the underlying DB.
That said, I do often find myself creating procedures to construct a default record, insert a record, etc - often enough to have editor macros for it - but I've never found a good argument for automatically generating this code for all tables (a good way to create a lot of unused code??)
Oracle is a relational database and also has the ability to work as an object-oriented database. It does this by building an abstraction layer (fairly automatically) on top of the relational structure. This would seemingly eliminate the need for any "tool" as it is already built in.