I have several externally supplied tables which I can't modify. In my case these are things like the built-in Oracle tables.
What I have is several entities which map onto these tables, but when I do my DDL generation I don't want them to be generated. Is there an annotation or an attribute I can set to ignore certain entities in the DDL generation?
You could simply switch the DDL generation from "drop-create" to "create". The "create" calls for the existing tables would be ignored. Unfortunately there is currently no option in EclipseLink to prevent a table from being dropped when using "drop-create". Your best option is to have EclipseLink write the DDL to a file and remove the lines for tables you do not want altered. It is likely that something similar will be available in a future version of EclipseLink; you can monitor and provide feedback on the currently active "extensions" feature in EclipseLink: http://wiki.eclipse.org/EclipseLink/Development/2.4.0
If I have a schema_search_path set and I wish to create a bunch of tables using a common script, by setting the schema rather than qualifying the schema in each table create (so the common script can be used in multiple schemas), this also sets the schema_search_path to just the specified schema.
This seems like an undesirable side effect.
The value set by SET SCHEMA_SEARCH_PATH is not affected by any other commands.
But this value is only used when an identifier is not qualified with a schema and an object with this name doesn't exist in the current schema (which is affected by the SET SCHEMA command).
For example, tables referenced by non-qualified names are searched in the following order:
1. Tables of the current schema.
2. Local temporary tables. (Currently these also include query aliases from WITH clauses, but this may change if a separate scope of identifiers is ever implemented for these views.)
3. Tables of each schema from SCHEMA_SEARCH_PATH, if any. When multiple schemas are specified, their order matters: they are searched in the order listed.
4. Legacy or compatibility tables, such as DUAL or SYSDUMMY1 in the DB2 and Derby compatibility modes.
The first table matched by its name will be used.
This is a complex case; for most types of database objects only steps (1) and (3) are performed.
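A minimal sketch of how this resolution order plays out; the schema and table names are made up for illustration:

-- H2 sketch: SETTINGS does not exist in the current schema APP1 and is
-- not a temporary table, so it is resolved via SCHEMA_SEARCH_PATH and
-- found in COMMON (step 3 above).
CREATE SCHEMA app1;
CREATE SCHEMA common;
CREATE TABLE common.settings(id INT PRIMARY KEY);

SET SCHEMA app1;                      -- changes the current schema only
SET SCHEMA_SEARCH_PATH app1, common;  -- consulted for unqualified names

SELECT * FROM settings;               -- resolves to COMMON.SETTINGS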
If you think that something is not behaving as described here and you can create a standalone test case (Java / JDBC / SQL only, no third-party libraries), you can create a bug report on GitHub:
https://github.com/h2database/h2database/issues
I have a few tables in my database where the primary keys are auto-generated using Hibernate's seqhilo generator configuration. We need to archive these records and, at a later point, be able to restore them if a business scenario requires it. My question is: if I restore these tables with simple INSERT statements, will that suffice, or should I worry about the sequence generator? I would like to keep the same IDs and not have new ones generated. To be clear, these re-inserts will happen via direct SQL and not via Hibernate.
We know that all DML statements are supported on regular Oracle tables, but the same is not true for external tables. I tried the following:
SQL> INSERT INTO xtern_empl_rpt VALUES ('70','Rakshit','Nantu','4587966214','natu.rakshit#ge.com','55');
INSERT INTO xtern_empl_rpt VALUES ('70','Rakshit','Nantu','4587966214','natu.rakshit#ge.com','55')
*
ERROR at line 1:
ORA-30657: operation not supported on external organized table
SQL> update xtern_empl_rpt set FIRST_NAME='Arup' where SSN='896743856';
update xtern_empl_rpt set FIRST_NAME='Arup' where SSN='896743856'
*
ERROR at line 1:
ORA-30657: operation not supported on external organized table
SQL>
So it seems external tables do not support this. But my question is: what is the logical reason behind this design?
There is no mechanism in Oracle for locking rows in external tables, and none of the concurrency controls which we get with regular heap tables. So updating is not allowed.
External tables created with the Oracle Loader driver are read only; the Datapump driver allows us to write to external table files, but only in CTAS (CREATE TABLE AS SELECT) mode.
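A hedged sketch of what that CTAS-only write path looks like; the ext_dir directory object and the source table are assumptions for illustration:

-- The only way to "write" an ORACLE_DATAPUMP external table is to create it
-- as a CTAS; the dump file is written once and the table is read-only afterwards.
CREATE TABLE emp_unload
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY ext_dir
    LOCATION ('emp_unload.dmp')
  )
AS SELECT * FROM employees;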
The problem is that external tables are basically windows on OS files, without the layer of abstraction and control that internal tables offer. Basically, there is no way for the database to lock a record in an OS file, because the notion of a "record" is a database thang, not an OS file thang.
External tables are designed for only one thing: data loading and unloading. They are simply not meant to be used with normal DML, and they're not really meant for normal selects either - that works, but if you need to do a lot of selections on an external table, you're "doing it wrong": load the data into proper tables, calculate statistics & add indexes as necessary.
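For instance, sticking with the xtern_empl_rpt table from the question, a minimal sketch of the intended pattern is to copy the external data into a heap table and do all further work there (the heap table and index names are illustrative):

-- Copy the external data into a regular heap table once.
CREATE TABLE empl_rpt AS
  SELECT * FROM xtern_empl_rpt;

-- Normal DML, statistics and indexes then work as usual on the heap table.
UPDATE empl_rpt SET first_name = 'Arup' WHERE ssn = '896743856';
CREATE INDEX empl_rpt_ssn_ix ON empl_rpt (ssn);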
Having external tables behave like normal tables would require implementing all the transactional machinery for them, which is very complex and not worth it, since that is not what they are meant for.
If you need normal tables and want to transplant them from one Oracle database to another, you should evaluate using transportable tablespaces too.
Limitations of external tables are an obvious consequence of their being read-only; they are an adapter that lets SQL queries read either arbitrary record-organized files (ORACLE_LOADER type) or exported copies of tables from another database (ORACLE_DATAPUMP type).
As already mentioned, external tables are only good for full table scan queries; if one needs to use indexes in heavy-duty queries or to modify foreign data sets that have been imported from files, regular tables can be populated using the SQL*Loader tool.
I need to export the tables for a given schema into DDL scripts and INSERT statements, scripted such that the order of dependencies/constraints is maintained.
I came across this article suggesting how to archive the database with data - http://www.dba-oracle.com/t_archiving_data_in_file_structures.htm - but I am not sure if the article is applicable to Oracle 10g/11g.
I have seen "export table with data" features in "SQL Developer", "Toad for Oracle", "DreamCoder for Oracle" etc., but I would need to do this one table at a time, and would still need to figure out the right order of script execution manually.
Are there any tools/scripts that can utilize Oracle metadata and generate a DDL script with data?
Note that some of the tables have CLOB datatype columns - so the tool/script would need to be able to handle these columns.
P.S. I need something similar to the "Generate Scripts" feature in SQL Server 2008, where one can specify the "script data" option and get back a self-sufficient script with DDL and data, generated in the order of table constraints. Please see: http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx
Thanks for your help!
Firstly, recognise that this isn't necessarily possible. A view can use a function in a package that also selects from the view. Another issue is that you might need to load data into tables and then apply constraints, even though this might be slower than the other way round.
In short, you will need to do some work here.
Work out the dependencies in your system. ALL_DEPENDENCIES is the primary mechanism.
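A minimal sketch of that first step, assuming a schema called HR (substitute your own owner):

-- List what each object in the schema depends on, to work out extraction order.
SELECT name, type, referenced_name, referenced_type
FROM   all_dependencies
WHERE  owner = 'HR'
ORDER  BY name, referenced_name;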
Then use DBMS_METADATA.GET_DDL to extract the DDL statements. For small data volumes, I'd extract the constraints separately for applying after the data load.
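A hedged sketch of the extraction itself, again assuming an HR schema; the REF_CONSTRAINT pass pulls the foreign keys on their own so they can be applied after the data load:

-- SQL*Plus settings so the CLOB output isn't truncated.
SET LONG 100000
SET PAGESIZE 0

-- Optionally keep foreign keys out of the table DDL so they can be applied later.
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'REF_CONSTRAINTS', FALSE);

-- Table DDL...
SELECT DBMS_METADATA.GET_DDL('TABLE', table_name, 'HR')
FROM   all_tables
WHERE  owner = 'HR';

-- ...and the foreign keys separately, to run after the data is loaded.
SELECT DBMS_METADATA.GET_DDL('REF_CONSTRAINT', constraint_name, 'HR')
FROM   all_constraints
WHERE  owner = 'HR'
AND    constraint_type = 'R';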
In current versions you can create external tables to unload data from regular tables into OS files (and obviously go the other way round). But if you've got exotic datatypes (BLOB, RAW, XMLTYPEs, User Defined Types....) it will be more challenging.
I suggest that you use Oracle's standard export and import (exp/imp) here; is there a reason why you won't consider it? Note in addition that you can use the "indexfile" option on the import to output the SQL statements (unfortunately this doesn't include the inserts) to a file instead of actually executing them.
In Informix, I can do a select from the systables table, and can investigate its version column to see what numeric version a given table has. This column is incremented with every DDL statement that affects the given table. This means I have the ability to see whether a table's structure has changed since the last time I connected.
Is there a similar way to do this in Oracle?
Not really. The Oracle DBA/ALL/USER_OBJECTS view has a LAST_DDL_TIME column, but it is affected by operations other than structure changes.
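A minimal sketch of how far LAST_DDL_TIME gets you; remember it also moves for grants, comments and similar non-structural operations:

-- Compare against a timestamp saved at the previous connection.
SELECT object_name, last_ddl_time
FROM   user_objects
WHERE  object_type = 'TABLE';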
You can do that (and more) with a DDL trigger that keeps track of changes to tables. There's an interesting article with an example here.
If you really want to do so, you'd have to use Oracle's auditing functions to audit the changes. It could be as simple as:
AUDIT ALTER TABLE BY [schema I care about] WHENEVER SUCCESSFUL;
That would at least capture the successful changes, ignoring drops and creates. Unfortunately, unwinding the history of the table's structure by mining the audit trail is left as an exercise for the reader in Oracle, or to licensing the Change Management Pack.
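Assuming the audit trail is being written to the database (AUDIT_TRAIL=DB), it can at least be queried for the ALTERs; a hedged sketch, with the schema name as a placeholder:

-- Successful ALTER TABLE statements recorded by the AUDIT setting above.
SELECT owner, obj_name, action_name, timestamp
FROM   dba_audit_trail
WHERE  owner = 'SCHEMA_I_CARE_ABOUT'
AND    action_name = 'ALTER TABLE'
ORDER  BY timestamp;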
You could also roll your own auditing by writing system-event triggers which are invoked on DDL statements. You'd end up having to write your own SQL parser if you really wanted to see what was changing.
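A minimal sketch of that home-grown approach, without the SQL parsing: a schema-level DDL trigger that simply logs what was touched (the table and trigger names are illustrative):

-- Log table for the raw DDL events.
CREATE TABLE ddl_log (
  ddl_date    DATE,
  ddl_user    VARCHAR2(128),
  object_name VARCHAR2(128),
  object_type VARCHAR2(30),
  ddl_event   VARCHAR2(30)
);

-- Fires after any DDL in the owning schema and records what happened.
CREATE OR REPLACE TRIGGER trg_ddl_log
AFTER DDL ON SCHEMA
BEGIN
  INSERT INTO ddl_log (ddl_date, ddl_user, object_name, object_type, ddl_event)
  VALUES (SYSDATE, ora_login_user, ora_dict_obj_name, ora_dict_obj_type, ora_sysevent);
END;
/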