I have one table named test which has 3 columns:
Name
id
address
After some time I find that one of the columns is no longer in use and I want to drop it, let's say id.
Oracle has a feature to mark a column as unused. What is the difference between dropping a column and setting a column unused?
When you drop a column, Oracle physically removes it from every row (it does not go to the recycle bin the way a dropped table does), while marking a column unused logically drops it immediately but physically preserves the data until the unused columns are dropped later.
Marking a column as unused and then running an alter table table_name drop unused columns statement later is useful because it takes the column out of use quickly and immediately. Later on, during a routine database maintenance weekend or after business hours, the DBA can then remove the column with alter table table_name drop unused columns to reclaim the space.
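A minimal sketch of the two steps on the question's table:
-- take the column out of use immediately (a quick, metadata-only change):
ALTER TABLE test SET UNUSED (id);
-- later, during a maintenance window, physically reclaim the space:
ALTER TABLE test DROP UNUSED COLUMNS;
-- a CHECKPOINT clause is available on the drop to limit undo generation on large tables.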
On the other hand, marking the column unused won't free up any space, so when you need to free up space and remove columns that are no longer needed, you are better off dropping them.
It's a matter of convenience, actually...
Setting a column to "unused" is just like dropping it, but lets you defer the actual physical deletion to a later date.
This is an aid for the DBA's maintenance window, not something to use in general. For a developer it means the same as dropping the column: that's it, there is no recovering it at a later point. If you are on 12c, consider the INVISIBLE option instead.
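A sketch of the 12c INVISIBLE alternative on the question's table:
-- hide the column from SELECT * and DESCRIBE without dropping it:
ALTER TABLE test MODIFY (id INVISIBLE);
-- unlike a drop, this is reversible:
ALTER TABLE test MODIFY (id VISIBLE);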
For archiving purposes I need to delete data from a table, but since DELETE does not free up the space, a lot of space stays occupied. To fix this I have figured out a solution: the SHRINK SPACE command provided by Oracle.
But this requires row movement to be enabled.
So my questions are below:
1. Is it a good idea to enable row movement just to use the SHRINK SPACE command?
2. Can we enable row movement only for running the SHRINK SPACE command and then disable it again?
3. Or should we leave row movement enabled and run the SHRINK SPACE command as and when required (say, once a week)?
Are you deleting all of the data in the table or just some part of it? If you are deleting all of it, you could just truncate it, freeing up all of the space allocated for the table and its indexes very quickly:
truncate table t;
If you are not deleting all of it, the row movement approach should be OK (any of the 3 options), but you would have to test concurrent access to this table. Is there a chance someone else would try to update or insert into this table at the same time as your maintenance? My guess is that this could be a problem.
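For reference, option 2 from the question would look something like this (table name t as in the truncate example):
ALTER TABLE t ENABLE ROW MOVEMENT;
ALTER TABLE t SHRINK SPACE;   -- compacts rows and lowers the high water mark
ALTER TABLE t DISABLE ROW MOVEMENT;
-- SHRINK SPACE COMPACT does only the compaction, deferring the high water mark move to a second, shorter step.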
So another approach could be to partition the table based on your purge criterion, as sketched below. For example, if you erase data older than 3 months, you could have the table partitioned by month and drop only the partitions you won't need anymore. This is in fact what partitions were made for: to ease maintenance of data.
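A sketch, with a hypothetical monthly partition name:
-- dropping an old partition is a fast operation, no row-by-row DELETE needed:
ALTER TABLE t DROP PARTITION p_2013_01 UPDATE GLOBAL INDEXES;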
I need to add functionality to the existing MIB and agent code to delete all the rows of a table (i.e. clear the contents of the table).
I know how to delete a single row (using the RowStatus value "destroy"), but how do I delete all the rows in one go?
As you already know, the deletion of a row is a side effect of changing the RowStatus column. You could define a special object next to the table which, when set, clears the table itself.
NOTE: One of the rules of SNMP is avoiding redundancy. The SNMP manager can already delete the whole table content by deleting the rows one by one.
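For example, with the Net-SNMP command-line tools, destroying one row means setting its RowStatus column instance to destroy(6); the OID and row index below are placeholders for your MIB:
snmpset -v 2c -c private agent-host <rowstatus-column-oid>.<row-index> i 6
Looping over the row indexes on the manager side gives the row-by-row "clear table" behaviour the note describes.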
Is there any chance to drop a column in a compressed table?
I checked Google and it seems like it's not possible at all.
To be sure, I'm asking here.
Regards
Set that column to unused:
ALTER TABLE TEST SET UNUSED (column_name);
ALTER TABLE TEST DROP UNUSED COLUMNS;
Note: SET UNUSED does not actually remove the column data or reclaim the disk space it occupies. However, a column that is marked as unused is no longer displayed in queries or data dictionary views, and its name is freed so that a new column can reuse it. All constraints, indexes, and statistics defined on the column are also removed. The subsequent DROP UNUSED COLUMNS is what physically reclaims the space.
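You can check which tables are carrying unused columns via the data dictionary:
SELECT * FROM user_unused_col_tabs;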
If that does not work for you for some reason, you can try to move the table into a non-compressed format, drop the column, and then compress again.
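A sketch of that fallback, reusing the question's table:
ALTER TABLE test MOVE NOCOMPRESS;   -- rewrite the table uncompressed
ALTER TABLE test DROP COLUMN id;    -- the straight drop is now allowed
ALTER TABLE test MOVE COMPRESS;     -- recompress the table
-- note: MOVE leaves the table's indexes UNUSABLE, so rebuild them afterwards.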
I have two procedures that I want to run on the same table: one uses the birth date, and the other updates the first and last name taken from a third table.
The one that uses the birthday to update the age field runs over the whole table, while the one that updates the names only touches the rows that appear in the third table, matched on a key.
So I launched both and got a deadlock! Is there a way to prioritize either of them? I read about NOWAIT and SKIP LOCKED for the update, but then how would I get back to the rows that were skipped?
Hope you can help me on this!!
One possibility is to lock all the rows you will update at once. Doing all the updates in a single UPDATE statement will accomplish this. Or:
select whatever from T
where ...
for update;
Another solution is to create what I call a "Gatekeeper" table. Both procedures need to lock the Gatekeeper table in exclusive mode before updating the table in question. The second procedure will block until the first commits, but won't deadlock. In 11g you can create a table with no space allocated.
A variation is to insert a row into the Gatekeeper and lock only that row with SELECT ... FOR UPDATE. That way you can use the Gatekeeper in other situations too.
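A minimal sketch of the Gatekeeper idea (table name and row value are illustrative):
-- 11g: SEGMENT CREATION DEFERRED allocates no space until a row is inserted
CREATE TABLE gatekeeper (id NUMBER PRIMARY KEY) SEGMENT CREATION DEFERRED;
INSERT INTO gatekeeper VALUES (1);
COMMIT;
-- at the start of each procedure, serialize on the whole table...
LOCK TABLE gatekeeper IN EXCLUSIVE MODE;
-- ...or, per the row variation, on just the one row:
SELECT id FROM gatekeeper WHERE id = 1 FOR UPDATE;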
I would guess that you got deadlocked because the update of all the rows and the update of a small set of rows accessed the rows in different orders.
The former used a full scan and reached Row A first, then went on to other rows, eventually trying to lock Row B. However, the other query was driven from an index or a join, already had Row B locked, and was off to lock Row A when it found it was already locked.
So, the fix: firstly, having an age column that needs to be constantly modified is a really bad idea. Perhaps it was done to allow indexing on age, but with a correctly written query an index on date of birth will find the same records just as quickly. You've broken normalisation rules and ended up coding yourself a deadlocking application. Hopefully you are only updating the rows that actually need to change, not all of them regardless -- I mean, that would just be insane.
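For instance, to find everyone at least 18 years old without an age column (table and column names assumed), a query against the raw date of birth can still use an index on it:
SELECT *
FROM   people
WHERE  date_of_birth <= ADD_MONTHS(TRUNC(SYSDATE), -18 * 12);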
The best solution is to get rid of that design flaw.
The not-so-good solution is to deconflict your queries by running them at different times, or by using DBMS_LOCK so that only one of them can run at any time.
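A sketch of the DBMS_LOCK variant (the lock name is arbitrary; EXECUTE on DBMS_LOCK is required):
DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- map an arbitrary lock name to a lock handle
  DBMS_LOCK.ALLOCATE_UNIQUE('UPDATE_SERIALIZER', l_handle);
  -- request it exclusively; released automatically on commit
  l_status := DBMS_LOCK.REQUEST(l_handle, DBMS_LOCK.X_MODE,
                                release_on_commit => TRUE);
  IF l_status = 0 THEN
    NULL;  -- run the update here, then COMMIT
  END IF;
END;
/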
In Oracle 10g, does it matter in which order CREATE INDEX and ALTER TABLE are run?
Say I have a query Q with a WHERE clause on column C in table T. Now I perform one of the following scenarios:
1. Create index I(C) and then add columns X, Y, Z.
2. Add columns X, Y, Z and then create index I(C).
Q is 'select * from T where C = whatever'
Between 1 and 2, will there be a significant difference in the performance of Q on table T when T contains a very large number of rows?
I personally make it a practice to do #2, but others seem to have a different opinion.
thanks
It makes no difference if you add columns to a table before or after creating an index. The optimizer should pick the same plan for the query and the execution time should be unchanged.
Depending on the physical storage parameters of the table, adding the additional columns and populating them with data may force quite a bit of row migration. That row migration will generate changes to the indexes on the table. If the index already exists when you populate the three new columns, the population may take a bit longer because of the additional index maintenance.
If you add the columns without populating them, the change is quick: it is just a metadata change. Creating an index, on the other hand, requires reading the table (or potentially another index), which can be very time-consuming and of much greater impact than the simple metadata change of recording the new index details.
If the new columns are going to be populated as part of the ALTER TABLE, it is a different matter. The database may undergo an unplanned shutdown during the course of adding that data to every row of the table, and the server memory may not have room to record every changed row. Those row changes may therefore be written to the datafiles before the commit, as dirty blocks. The next read of those blocks, after the ALTER TABLE has completed successfully, will do a delayed block cleanout (i.e. record the fact that the change has been committed).
If you add the columns (with data) first, then the CREATE INDEX will (probably) read the table and do the added work of the delayed block cleanout.
If you create the index first and then add the columns, the CREATE INDEX may be faster, but the delayed block cleanout won't have happened, and that housekeeping will be picked up by the application later (potentially by the select * from T where C = whatever).
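Putting the two orderings side by side (names from the question, data types assumed):
-- scenario 1: index first, then columns
CREATE INDEX i ON t (c);
ALTER TABLE t ADD (x NUMBER, y NUMBER, z NUMBER);
-- scenario 2: columns first, then index
ALTER TABLE t ADD (x NUMBER, y NUMBER, z NUMBER);
CREATE INDEX i ON t (c);
-- in 10g, supplying a DEFAULT populates every existing row as part of the ADD:
-- ALTER TABLE t ADD (x NUMBER DEFAULT 0);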