I'm working on migrating all the tables in my schema to be partitioned tables. Since I'm on 11g instead of 12.2, I have to use DBMS_REDEFINITION. I've automated this with some Ruby code that executes the redefinition procedures for each table in my schema.
My issue is that after migrating, all of my non-nullable columns show NULLABLE - YES. This isn't actually true, however: if I check the constraints, there are check constraints defined for each of the columns that used to be NOT NULL, still enforcing the NOT NULL status. It seems like DBMS_REDEFINITION successfully copied the constraints, but it didn't reflect the NULLABLE status.
Is there a way to tell Oracle to get this back in sync?
I found the answer elsewhere:
https://dba.stackexchange.com/questions/55081/why-is-sqlplus-desc-table-not-showing-not-null-constraints-after-using-dbms-re
Oracle copies the constraints but never turns validation back on. You have to do that manually with:
ALTER TABLE <table_name> ENABLE VALIDATE CONSTRAINT <constraint_name>;
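Since you've automated the redefinition anyway, here is a small sketch that finds every unvalidated check constraint in the schema and generates the fix (NOT NULL constraints show up as type 'C' check constraints, so review the output before running it):
SELECT 'ALTER TABLE ' || table_name
       || ' ENABLE VALIDATE CONSTRAINT ' || constraint_name || ';' AS fix_sql
FROM   user_constraints
WHERE  constraint_type = 'C'
AND    validated = 'NOT VALIDATED';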
I apologize if I repeat the question, but I did not find a similar one.
I have added a unique constraint on an already existing table. We use MariaDB.
I have used the annotation:
@Table(uniqueConstraints={@UniqueConstraint(name="autonomy_name_energyType", columnNames={"autonomy","name","energyType"})})
The unit tests pass, but in the DB I am still allowed to create duplicates.
Do I need an ALTER TABLE too? Checking the table, I can see that no constraints have been added to it.
Thanks
As explained in these SO posts:
Unique constraint not created in JPA
@Column(unique=true) does not seem to work
An explicit ALTER TABLE query is needed for your constraints to take effect at the DB level.
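For example, assuming a (hypothetical) table name of energy_record and the columns from your annotation, something like this should work in MariaDB:
ALTER TABLE energy_record
    ADD CONSTRAINT autonomy_name_energyType UNIQUE (autonomy, name, energyType);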
As extra info, it would have worked if the table was being re-created via JPA. See:
Add a unique constraint over multiple reference columns
My Oracle DB version is 12.1.0.2.0.
I'm having a hard time removing the column's identity. I also tried dropping the column and dropping the table with the PURGE command, but every time I get the same Oracle error:
ORA-00600: internal error code, arguments: [12811], [96650], [], [], [], [], [], [], [], [], [], []
I just can't touch the identity column. I tried the commands below, but no luck:
ALTER TABLE DYS_CATEGORY MODIFY CATEGORY_ID DROP IDENTITY;
ALTER TABLE DYS_CATEGORY DROP COLUMN CATEGORY_ID;
DROP TABLE DYS_CATEGORY PURGE;
I can drop any other column from the table, but the problem is with identity column.
Identity columns are new to Oracle, just introduced in 12c.
This is a problem with Oracle 12.1.0.2.0. At least one other person has reported it (on Windows, which may be relevant).
The error you have is an ORA-00600, which is Oracle's default message for unhandled exceptions, i.e. Oracle bugs. The correct answer is to raise a Service Request with Oracle Support; they will be able to provide you with a patch or a workaround if you have a corrupted table you need to fix. If you don't have a Support contract you may be out of luck.
For future reference, dropping identity columns is a two-stage process:
alter table t42 modify id drop identity;
alter table t42 drop column id;
As it happens, this is not a problem on the very latest version of the product. In Oracle 18c we can just drop the column without modifying it first. LiveSQL demo.
As William said above, it looks like there is/was a system-generated sequence for the identity column that was deleted, but the record in idnseq$ remains intact.
I would not recommend this to anyone, but I created a new sequence called junk in the same schema. I then found the object_id for the table and for the sequence I created, and manually updated idnseq$, changing the seqobj# to the object_id of my new sequence for the object# of the table in question.
I was then able to drop the table and purge the recyclebin successfully.
I really don't recommend hacking Oracle system tables, but this was a test system that didn't really matter, and it worked.
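For reference, a rough sketch of the steps described above (DYS_CATEGORY is the table from the question; the idnseq$ column names are as I recall them, so verify them against your own data dictionary, and only ever try this on a throwaway system):
-- Run as SYS / a suitably privileged user.
CREATE SEQUENCE junk;
-- Point the orphaned identity record at the new sequence.
UPDATE sys.idnseq$
SET    seqobj# = (SELECT object_id FROM dba_objects
                  WHERE  object_name = 'JUNK' AND object_type = 'SEQUENCE')
WHERE  obj#    = (SELECT object_id FROM dba_objects
                  WHERE  object_name = 'DYS_CATEGORY' AND object_type = 'TABLE');
COMMIT;
DROP TABLE DYS_CATEGORY;
PURGE RECYCLEBIN;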
After lots of searching and hard work, if the table is still showing the error ORA-00600: internal error code, arguments:
Do the steps below.
Take a backup of the original table
syntax: Create table original_table_back as select * from original_table;
Rename the original table to some new table name
syntax: Rename original_table to original_table_1;
Rename the backup to the original table name
syntax: Rename original_table_back to original_table;
I have two environments for my application; in one environment I am using DB2, and Oracle in the other.
I am reusing some existing SQL, as it's an old application. To drop a table with a cascading effect, the existing SQL is: DROP TABLE xyz CASCADE CONSTRAINTS;
The above SQL is for Oracle; now I want to write similar SQL for DB2. What can I use in place of CASCADE CONSTRAINTS?
Depends on the Db2-platform and Db2-version...
If your Db2-server runs on Linux/Unix/Windows, then there is no syntax equivalent to 'cascade constraints'; if the table being dropped is referenced in an RI constraint, either as parent or dependent, then the RI constraint gets dropped automatically.
If your Db2-server runs on i-series, then Db2 also has DROP TABLE ... CASCADE (which seems to be functionally equivalent to Oracle's cascade-constraints clause), in addition to DROP TABLE ... RESTRICT.
If your Db2-server runs on z/OS, then there is no syntax equivalent, but as with Db2-LUW, any RI constraints in which the table is parent or dependent get dropped.
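So, reusing the table name xyz from your Oracle statement, the equivalents are roughly:
-- Db2 for i (functionally close to Oracle's CASCADE CONSTRAINTS):
DROP TABLE xyz CASCADE;
-- Db2-LUW and Db2 for z/OS (no cascade clause; RI constraints involving xyz are dropped automatically):
DROP TABLE xyz;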
When adding a column that has a default value and a NOT NULL constraint to a table, is it better to run it as a single statement or to break it into steps while the database is under load?
ALTER TABLE user ADD country VARCHAR2(4) DEFAULT 'GB' NOT NULL
VERSUS
ALTER TABLE user ADD country VARCHAR2(4)
UPDATE user SET country = 'GB'
COMMIT
ALTER TABLE user MODIFY country DEFAULT 'GB' NOT NULL
Performance depends on the Oracle version you use. Locks are generated anyway.
If your version is <= Oracle 11.1, then #1 does the same as #2. It is slow either way.
Beginning with Oracle 11.2, Oracle introduced a great optimization for the first statement (one command doing it all). You don't need to change the command - Oracle just behaves differently: it stores the default value only in the data dictionary instead of updating each physical row.
But I also have to say that I encountered some bugs in the past related to this feature (in Oracle 11.2.0.1):
failure of a traditional import if the export was done with direct=Y
a MERGE statement can throw an ORA-600 [13013] (internal Oracle error)
a performance problem in queries using such tables
I think these issues are fixed in the current version 11.2.0.3, so I can recommend using this feature.
Some time ago we evaluated possible solutions to the same problem. On our project we had to remove all indexes on the table, perform the altering, and then restore the indexes.
If your system needs to be using the table then DBMS_Redefinition is really your only choice.
I've got a program that periodically updates its database schema. Sometimes, one of the DDL statements might fail and if it does, I want to roll back all the changes. I wrap the update in a transaction like so:
BEGIN TRAN;
CREATE TABLE A (PKey int NOT NULL IDENTITY, NewFieldKey int NULL, CONSTRAINT PK_A PRIMARY KEY (PKey));
CREATE INDEX A_2 ON A (NewFieldKey);
CREATE TABLE B (PKey int NOT NULL IDENTITY, CONSTRAINT PK_B PRIMARY KEY (PKey));
ALTER TABLE A ADD CONSTRAINT FK_B_A FOREIGN KEY (NewFieldKey) REFERENCES B (PKey);
COMMIT TRAN;
As we're executing, if one of the statements fail, I do a ROLLBACK instead of a COMMIT. This works great on SQL Server, but doesn't have the desired effect on Oracle. Oracle seems to do an implicit COMMIT after each DDL statement:
http://www.orafaq.com/wiki/SQL_FAQ#What_are_the_difference_between_DDL.2C_DML_and_DCL_commands.3F
http://infolab.stanford.edu/~ullman/fcdb/oracle/or-nonstandard.html#transactions
Is there any way to turn off this implicit commit?
You cannot turn this off. It is fairly easy to work around by designing your scripts to drop tables in the event they already exist, etc.
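For example, a minimal sketch of that pattern for the table A in your script (ORA-00942 is "table or view does not exist"):
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE A CASCADE CONSTRAINTS';
EXCEPTION
  WHEN OTHERS THEN
    -- Ignore "table does not exist"; re-raise anything else.
    IF SQLCODE != -942 THEN
      RAISE;
    END IF;
END;
/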
You can look at using FLASHBACK DATABASE. I believe you can do this at the schema/object level, but check the docs to confirm that. You would need to be on 10g for that to work.
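A rough sketch of the database-level variant, assuming the FLASHBACK DATABASE prerequisites are in place (run as SYSDBA; the restore point name is made up):
CREATE RESTORE POINT before_schema_change GUARANTEE FLASHBACK DATABASE;
-- ... run the DDL script; if any statement fails, flash the whole database back:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_schema_change;
ALTER DATABASE OPEN RESETLOGS;
-- Once the upgrade has succeeded, drop the restore point:
DROP RESTORE POINT before_schema_change;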