I have two environments for my application: one uses DB2 and the other uses Oracle.
Since it's an old application, I am reusing some existing SQL. To drop a table with a cascade effect, the existing SQL looks like this: DROP TABLE xyz CASCADE CONSTRAINTS;
The SQL above is for Oracle, and now I want to write equivalent SQL for DB2. What can I use in place of CASCADE CONSTRAINTS?
It depends on the Db2 platform and version.
If your Db2 server runs on Linux/Unix/Windows (Db2 LUW), there is no syntax equivalent to CASCADE CONSTRAINTS; however, if the table being dropped is referenced in a referential-integrity (RI) constraint, either as parent or dependent, that RI constraint is dropped automatically.
If your Db2 server runs on IBM i, then Db2 also has DROP TABLE ... CASCADE (which appears to be functionally equivalent to Oracle's CASCADE CONSTRAINTS clause), in addition to DROP TABLE ... RESTRICT.
If your Db2 server runs on z/OS, there is again no syntax equivalent, but as with Db2 LUW, any RI constraints in which the table is a parent or dependent are dropped automatically.
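As a sketch, the same drop on each platform looks roughly like this (xyz is a placeholder table name):

```sql
-- Oracle
DROP TABLE xyz CASCADE CONSTRAINTS;

-- Db2 LUW and Db2 for z/OS: no CASCADE clause is needed;
-- RI constraints in which xyz participates are dropped automatically
DROP TABLE xyz;

-- Db2 for i: CASCADE is accepted (RESTRICT is the alternative)
DROP TABLE xyz CASCADE;
```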
I'm working on migrating all the tables in my schema to be partitioned tables. Since I'm on 11g instead of 12.2 I have to use DBMS_REDEFINITION. I've automated this with some Ruby code that executes the redefinition procedures for each table in my schema.
My issue is that after migrating, all of my non-nullable columns show NULLABLE - YES. This isn't actually true, however: when I check the constraints, there are check constraints defined for each of the columns that used to be NOT NULL, still enforcing the NOT NULL status. It seems DBMS_REDEFINITION successfully copied the constraints but didn't update the NULLABLE status to match.
Is there a way to tell Oracle to get this back in sync?
I found the answer elsewhere:
https://dba.stackexchange.com/questions/55081/why-is-sqlplus-desc-table-not-showing-not-null-constraints-after-using-dbms-re
Oracle copies the constraints but never re-enables validation. You have to do that manually with:
ALTER TABLE <table_name> ENABLE VALIDATE CONSTRAINT <constraint_name>;
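To find which constraints still need validating, a dictionary query along these lines can help (the table name is a placeholder):

```sql
-- NOT NULL check constraints copied by DBMS_REDEFINITION
-- but not yet validated
SELECT constraint_name, search_condition
FROM   user_constraints
WHERE  table_name = 'MY_TABLE'
AND    constraint_type = 'C'
AND    validated = 'NOT VALIDATED';
```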
My Oracle DB version is 12.1.0.2.0.
I'm having a hard time removing the column's identity property. I also tried dropping the column and dropping the table with the PURGE command, but every time I get the same Oracle error:
ORA-00600: internal error code, arguments: [12811], [96650], [], [], [], [], [], [], [], [], [], []
I just can't touch the identity column. I tried the commands below, but no luck:
ALTER TABLE DYS_CATEGORY MODIFY CATEGORY_ID DROP IDENTITY;
ALTER TABLE DYS_CATEGORY DROP COLUMN CATEGORY_ID;
DROP TABLE DYS_CATEGORY PURGE;
I can drop any other column from the table, but the problem is with identity column.
Identity columns are new to Oracle, just introduced in 12c.
This is a problem with Oracle 12.1.0.2.0. At least one other person has reported it (on Windows, which may be relevant).
The error you have is an ORA-00600, which is Oracle's default message for unhandled exceptions i.e. Oracle bugs. The correct answer is to raise a Service Request with Oracle Support; they will be able to provide you with a patch or a workaround if you have a corrupted table you need to fix. If you don't have a Support contract you may be out of luck.
For future reference dropping identity columns is a two-stage process:
alter table t42 modify id drop identity;
alter table t42 drop column id;
As it happens, this is not a problem on the very latest version of the product. In Oracle 18c we can just drop the column without modifying it first. LiveSQL demo.
As William said above, it looks like there is/was a system-generated sequence for the identity column that was deleted, but the record in idnseq$ remains intact.
I would not recommend this to anyone, but I created a new sequence called junk in the same schema. I then found the object_id of the table and of the sequence I had created, and manually updated idnseq$, changing seqobj# to the object_id of my new sequence for the object# of the table in question.
I was then able to drop the table and purge the recyclebin successfully.
Really don't recommend hacking oracle system tables, but this was a test system that didn't really matter and it worked.
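For completeness, a sketch of the steps described above; the sequence name and the placeholder object IDs are illustrative, and since this modifies an internal dictionary table, it belongs only on a disposable test instance:

```sql
-- Run as SYS on a throwaway test system only
CREATE SEQUENCE junk;

-- Find the object IDs of the table and of the new sequence
SELECT object_name, object_id
FROM   dba_objects
WHERE  object_name IN ('DYS_CATEGORY', 'JUNK');

-- Point the orphaned identity record at the new sequence
-- (<junk_obj_id> and <table_obj_id> come from the query above)
UPDATE sys.idnseq$
SET    seqobj# = <junk_obj_id>
WHERE  obj#    = <table_obj_id>;
COMMIT;

DROP TABLE DYS_CATEGORY PURGE;
PURGE RECYCLEBIN;
```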
After lots of searching and hard work, if the table is still showing the error ORA-00600: internal error code, arguments:
Do the steps below.
Take a backup of the original table
syntax: CREATE TABLE original_table_back AS SELECT * FROM original_table;
Rename the original table to a new name
syntax: RENAME original_table TO original_table_1;
Rename the backup to the original table name
syntax: RENAME original_table_back TO original_table;
In one of my classes using Oracle APEX, a couple of tables were named with periods. I added them to APEX successfully, but now I'm unable to delete them.
The tables are named like
classid.groupid_table_name
I've tried going through the UI to find a way to manually drop the tables, and have also run these scripts:
drop table classid.groupid_table_name cascade constraints;
which gets the error "ORA-00942: table or view does not exist"
and
drop table [classid.groupid_table_name] cascade constraints;
which gets the error "ORA-00903: invalid table name"
The tables aren't really doing anything bad; they're just cluttering up the workspace, since the naming scheme has since been changed to classid_groupid_table_name.
Have you tried the following? Quoting the identifier makes Oracle treat the period as part of the table name rather than as a schema separator (quoted names are case-sensitive, so it must match the stored name exactly):
drop table "classid.groupid_table_name" cascade constraints;
I have a need to move data between two identical Oracle databases. I have figured out how to use dbLinks to achieve most of it. Here is my confusion.
Let's say I have Table A, which refers to Table B, present in DB1, and a similar structure in DB2. Is there any way for me to create a db link to move data for Table A between DB1 and DB2 that automatically copies the relevant data in Table B to satisfy the referential constraints (without me having to spell it out)?
Thanks
Kay
A simple approach would be to duplicate the foreign key and check constraints in DB2.TableB in the destination table DB1.TableA.
A little more work is to create a materialized view in DB1 along the lines of
Create Materialized View TableA as Select * from TableB@DB2.link;
Refresh as required... You cannot do a fast refresh against a remote database, but very few applications require true real-time synchronization.
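A periodic complete refresh can then be scheduled or run on demand; a sketch using DBMS_MVIEW, with the materialized view name taken from the example above:

```sql
-- 'C' requests a complete refresh; fast refresh is not
-- available for this materialized view over a database link
BEGIN
  DBMS_MVIEW.REFRESH('TABLEA', method => 'C');
END;
/
```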
I've got a program that periodically updates its database schema. Sometimes, one of the DDL statements might fail and if it does, I want to roll back all the changes. I wrap the update in a transaction like so:
BEGIN TRAN;
CREATE TABLE A (PKey int NOT NULL IDENTITY, NewFieldKey int NULL, CONSTRAINT PK_A PRIMARY KEY (PKey));
CREATE INDEX A_2 ON A (NewFieldKey);
CREATE TABLE B (PKey int NOT NULL IDENTITY, CONSTRAINT PK_B PRIMARY KEY (PKey));
ALTER TABLE A ADD CONSTRAINT FK_B_A FOREIGN KEY (NewFieldKey) REFERENCES B (PKey);
COMMIT TRAN;
As we're executing, if one of the statements fails, I do a ROLLBACK instead of a COMMIT. This works great on SQL Server, but doesn't have the desired effect on Oracle. Oracle does an implicit COMMIT after each DDL statement:
http://www.orafaq.com/wiki/SQL_FAQ#What_are_the_difference_between_DDL.2C_DML_and_DCL_commands.3F
http://infolab.stanford.edu/~ullman/fcdb/oracle/or-nonstandard.html#transactions
Is there any way to turn off this implicit commit?
You cannot turn this off. It's fairly easy to work around by designing your scripts to drop the tables in the event they already exist, etc.
You could look at using FLASHBACK DATABASE; I believe you can do this at the schema/object level, but check the docs to confirm that. You would need to be on 10g for that to work.
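A sketch of the restore-point approach (the restore point name is illustrative; FLASHBACK DATABASE requires flashback logging to be enabled and a database in MOUNT state):

```sql
-- Before running the DDL script
CREATE RESTORE POINT before_schema_update;

-- If a DDL statement fails, from a MOUNTed instance:
FLASHBACK DATABASE TO RESTORE POINT before_schema_update;
ALTER DATABASE OPEN RESETLOGS;
```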