Changing the number precision in Oracle [duplicate]

I haven't seen this exact issue... I apologize if I missed it.
I have a database I inherited from the company that created it, when they lost the contract to my company. Now the customer has an "urgent" request to change the precision of a value: they want to change a number from 10,3 to 10,8, and the field is defined in the database as NUMBER(10,3). From my research it seems I have to do quite a bit of manipulation, since the table needs to be empty to change the precision. I am assuming that even though the default precision is 38 (or whatever it is), since the column was defined as 10,3 on creation, if I just increase the field size in the ColdFusion code the value will still be truncated to 10,3 when it saves? The change would involve capturing the current data, deleting the data, changing the precision, then reloading the data, making sure the existing data is converted as well. Yes, I know that's not in detail, but hopefully it gets the point across. Thank you.

Do you really want to change from a NUMBER(10,3) to a NUMBER(10,8)? That would significantly restrict the range of numbers that could be stored in the field - which is precisely why you can't do it when there is data in the column.
Or do you mean that you want to increase the number of decimal places from 3 to 8, while still allowing the same overall range of values? If so, then I think you want to change NUMBER(10,3) to NUMBER(15,8) - and you should be able to do that using a simple ALTER even if the column contains data.
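If it is the latter, the change should be a simple in-place ALTER, along these lines (reusing the EVAPP_FEES / AMOUNT names from the next answer as placeholders):
-- Widening NUMBER(10,3) to NUMBER(15,8) keeps the same number of integer digits
-- while adding decimal places, so it should work even with data in the column.
alter table EVAPP_FEES modify (AMOUNT number(15,8));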

The easiest way to handle this is to rename the column, copy the data over, then drop the original column:
alter table EVAPP_FEES rename column AMOUNT to AMOUNT_OLD;
alter table EVAPP_FEES add AMOUNT NUMBER(14,2);
update EVAPP_FEES set AMOUNT = AMOUNT_OLD;
alter table EVAPP_FEES drop column AMOUNT_OLD;
Or, to keep the AMOUNT column in its original position, use a temporary column instead:
alter table EVAPP_FEES add AMOUNT_TEMP NUMBER(14,2);
update EVAPP_FEES set AMOUNT_TEMP = AMOUNT;
update EVAPP_FEES set AMOUNT = null;
alter table EVAPP_FEES modify AMOUNT NUMBER(14,2);
update EVAPP_FEES set AMOUNT = AMOUNT_TEMP;
alter table EVAPP_FEES drop column AMOUNT_TEMP;

Related

Change the datatype of a column in a partitioned table with billion rows

We have a table with 120 partitions on date range which in turn is subpartitioned again on range.
Each partition has around 200 million records; the conventional way of changing the datatype would make production unresponsive for hours. Is there a better way to change the datatype of such a huge table?
We have already tried the following options:
Exchange partition. This does not work.
Creating a new table with the same structure as the existing one but with the altered column, and inserting the data using /*+ APPEND */. It again takes hours.
Currently the column size is varchar2(30). We need to change it to:
ALTER TABLE ORDERS MODIFY (INFO VARCHAR2(50));
Changing VARCHAR2(30) to VARCHAR2(50) should work instantly and should not cause any trouble.
It modifies only metadata; the actual table data is not touched.

Oracle 12c - refreshing the data in my tables based on the data from warehouse tables

I need to update some tables in my application from warehouse tables that are updated weekly or biweekly, and my tables should be refreshed based on those. My tables are referenced by foreign keys from other tables, so I cannot just truncate them and reinsert the whole data set every time. Instead, I have to take the delta and apply it based on a few primary key columns that don't change. I need some input on how to implement this approach.
My approach:
Check the last updated time of those tables, views.
If it is more recent, compare each row in my table and the warehouse table based on the primary key.
Update each column if it is different.
Do nothing if no columns have changed.
Insert if there is a new record.
My Question:
How do I implement this? Is writing PL/SQL code a good and efficient way, given that the expected number of records is around 800K?
Please provide any sample code or links.
I would go for PL/SQL with BULK COLLECT and FORALL. You can use MINUS in your cursor to reduce the data size and calculate the difference.
You can check this site for more information about bulk collect, forall and engines: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html
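A minimal sketch of that approach, assuming a warehouse table WAREHOUSE_PRODUCTS and an application table APP_PRODUCTS keyed on PRODUCT_ID with NAME and PRICE columns (all of these names are placeholders):
DECLARE
  -- Rows that are new or different in the warehouse compared to the application table.
  CURSOR c_delta IS
    SELECT product_id, name, price FROM warehouse_products
    MINUS
    SELECT product_id, name, price FROM app_products;
  TYPE t_ids    IS TABLE OF app_products.product_id%TYPE;
  TYPE t_names  IS TABLE OF app_products.name%TYPE;
  TYPE t_prices IS TABLE OF app_products.price%TYPE;
  l_ids    t_ids;
  l_names  t_names;
  l_prices t_prices;
BEGIN
  OPEN c_delta;
  LOOP
    FETCH c_delta BULK COLLECT INTO l_ids, l_names, l_prices LIMIT 10000;
    EXIT WHEN l_ids.COUNT = 0;
    -- Upsert each batch: update matched keys, insert new ones.
    FORALL i IN 1 .. l_ids.COUNT
      MERGE INTO app_products t
      USING (SELECT l_ids(i) AS product_id, l_names(i) AS name, l_prices(i) AS price FROM dual) s
      ON (t.product_id = s.product_id)
      WHEN MATCHED THEN UPDATE SET t.name = s.name, t.price = s.price
      WHEN NOT MATCHED THEN INSERT (product_id, name, price) VALUES (s.product_id, s.name, s.price);
  END LOOP;
  CLOSE c_delta;
  COMMIT;
END;
/
For around 800K rows a single set-based MERGE (without the PL/SQL loop) may be just as effective; the loop form mainly helps if you need batching or per-row error handling.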
There are many parts to your question above and I will answer as best I can:
While it is possible to disable the referencing foreign keys, truncate the table, repopulate it with the updated data and then re-enable the foreign keys, given the requirements you describe I don't believe truncating the table each time is optimal.
Yes, in principle PL/SQL is a good way to achieve this, as it is too complex to handle in plain SQL and PL/SQL is an efficient alternative.
Conceptually, the approach I would take is as follows:
Initial set up:
Create a sequence called activity_seq
Add an "activity_id" column of type NUMBER, with a unique constraint, to your source tables
Add a trigger to the source table(s) setting activity_id = activity_seq.nextval for each insert/update of a table row
Create some kind of master table to hold the "last processed activity id" value
Then bi/weekly:
Retrieve the value of "last processed activity id" from the master table
Select all rows in the source table(s) having an activity_id value > the "last processed activity id" value
Iterate through the selected source rows and update the target if a match is found based on whatever your match criterion is, or insert a new row into the target if no match is found (I assume there is no delete, as you do not mention it)
On completion, update the master table's "last processed activity id" to the greatest activity_id of the source rows processed in step 3 above.
(please note that, depending on your environment and the number of rows processed, the above process may need to be split and repeated over a number of transactions)
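A rough sketch of the setup and the bi/weekly pass, assuming a source table WH_ORDERS, a target table APP_ORDERS matched on ORDER_ID, and a control table SYNC_CONTROL; all names and columns here are placeholders, and steps 2-4 are collapsed into a single MERGE rather than a row-by-row loop:
-- One-time setup
CREATE SEQUENCE activity_seq;
ALTER TABLE wh_orders ADD (activity_id NUMBER UNIQUE);

CREATE OR REPLACE TRIGGER wh_orders_activity_trg
BEFORE INSERT OR UPDATE ON wh_orders
FOR EACH ROW
BEGIN
  :NEW.activity_id := activity_seq.NEXTVAL;
END;
/

CREATE TABLE sync_control (last_processed_activity_id NUMBER);
INSERT INTO sync_control VALUES (0);

-- Bi/weekly pass: apply only rows changed since the last run, then move the high-water mark.
DECLARE
  l_last sync_control.last_processed_activity_id%TYPE;
  l_max  NUMBER;
BEGIN
  SELECT last_processed_activity_id INTO l_last FROM sync_control;
  SELECT NVL(MAX(activity_id), l_last) INTO l_max FROM wh_orders;

  MERGE INTO app_orders t
  USING (SELECT order_id, status, amount
           FROM wh_orders
          WHERE activity_id > l_last AND activity_id <= l_max) s
  ON (t.order_id = s.order_id)
  WHEN MATCHED THEN UPDATE SET t.status = s.status, t.amount = s.amount
  WHEN NOT MATCHED THEN INSERT (order_id, status, amount) VALUES (s.order_id, s.status, s.amount);

  UPDATE sync_control SET last_processed_activity_id = l_max;
  COMMIT;
END;
/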
I hope this proves helpful.

Oracle sequence generator within interval

I'm using Oracle 11gR2. For the product table, when a new product is inserted I need to assign an auto-increment id ranging from 1 to 65535. Products can then be deleted.
When I reach 65535, I need to scan the table to find a free hole to use as the new ID.
Because of this requirement an Oracle sequence cannot be used on its own, so I am using a function (I also tried an insert trigger) to generate a free id...
The problem is that I cannot handle batch inserts, for example, and I have concurrency problems...
How could I solve this? By using some sort of external ID generator?
Sounds like an arbitrary design. Is there a good reason for having a 16-bit max product id or for reusing IDs? Both constraints are bad practice.
I doubt any external generator is going to provide anything that Oracle doesn't already provide. I recommend using sequences for batch insert. The problem you have is how to recycle the IDs. Oracle plain sequences don't track the primary key, so you need a solution to find recycled keys first, then fallback to the sequence perhaps.
Product ID Recycling
Batch Inserts - Use sequence for keys the first time you load them. For this small range, set NOCACHE on the sequence to eliminate gaps.
Deletes - When a product is deleted, instead of actually deleting the row, set a DELETED = 'Y' flag on the row.
Inserts - Update the first available record with the DELETED flag set, e.g. select the minimum ID from the product table where DELETED = 'Y'. Update that record with the new product info (but the same ID) and set DELETED = 'N'.
This ensures you always recycle before you insert new sequence IDs.
If you want to implement the logic in the database, you can create a view (VIEW$PRODUCTS) where DELETED = 'N' and an INSTEAD OF INSERT trigger to do the insert.
In any scenario, when you run out of sequences (or sequence wraps), you are out of luck for batch inserts. I'd reconsider that part of the design if I were you.
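A minimal sketch of the view plus INSTEAD OF INSERT trigger, assuming a PRODUCTS table with ID, NAME and DELETED columns and a PRODUCT_SEQ sequence (names are placeholders; under concurrent inserts the recycle path would still need locking, e.g. SELECT ... FOR UPDATE, plus a unique constraint on ID as a safety net):
CREATE OR REPLACE VIEW products_v AS
  SELECT id, name FROM products WHERE deleted = 'N';

CREATE OR REPLACE TRIGGER products_v_ins
INSTEAD OF INSERT ON products_v
FOR EACH ROW
DECLARE
  l_id products.id%TYPE;
BEGIN
  -- Recycle the lowest deleted ID, if any.
  SELECT MIN(id) INTO l_id FROM products WHERE deleted = 'Y';
  IF l_id IS NOT NULL THEN
    UPDATE products SET name = :NEW.name, deleted = 'N' WHERE id = l_id;
  ELSE
    -- No recyclable ID left: fall back to the (NOCACHE) sequence.
    INSERT INTO products (id, name, deleted)
    VALUES (product_seq.NEXTVAL, :NEW.name, 'N');
  END IF;
END;
/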

Adding column with default value

I have a table A (3 columns) in production which has around 10 million records. I want to add one more column to that table with a default value of 1. Is it going to impact production DB performance if I add a column with a default value of 1 (or something else)? What would be the best approach to avoid any performance impact on the DB? Your thoughts are much appreciated!
In Oracle 11g the process of adding a new column with a default value has been considerably optimized. If a newly added column is specified as NOT NULL, the default value for that column is maintained in the data dictionary and no longer has to be physically stored for every record in the table, so each record does not need to be updated with the default value. This optimization considerably reduces the amount of time the table is exclusively locked during the operation.
alter table <tab_name> add (<col_name> <data_type> default <def_val> not null);
Moreover, a column with a default value added that way will not consume space until you deliberately start to update that column or insert a record with a non-default value for it. So the operation of adding a new column with a default value and a NOT NULL constraint completes pretty quickly.
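Applied to the table in the question, that would look something like this (the column name FLAG is just a placeholder):
-- In 11g+ this is effectively a dictionary-only change: existing rows are not rewritten.
alter table A add (FLAG number default 1 not null);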
I think it is better to first create a backup table with this syntax:
create table BackUpTable as SELECT * FROM YourTable;
alter table BackUpTable add (newColumn number(5,0) default 1);

create index before adding columns vs. create index after adding columns - does it matter?

In Oracle 10g, does it matter what order CREATE INDEX and ALTER TABLE come in?
Say I have a query Q with a WHERE clause on column C in table T. Now I perform one of the following scenarios:
I create index I(C) and then add columns X,Y,Z.
Add columns X,Y,Z then create index I(C).
Q is 'select * from T where C = whatever'
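Expressed as DDL, the two scenarios are roughly the following (the index name and new column types are placeholders):
-- Scenario 1: create the index first, then add the columns.
create index T_C_IDX on T (C);
alter table T add (X number, Y number, Z number);
-- Scenario 2: add the columns first, then create the index.
alter table T add (X number, Y number, Z number);
create index T_C_IDX on T (C);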
Between 1 and 2 will there be a significant difference in performance of Q on table T when T contains a very large number of rows?
I personally make it a practice to do #2 but others seem to have a different opinion.
Thanks.
It makes no difference if you add columns to a table before or after creating an index. The optimizer should pick the same plan for the query and the execution time should be unchanged.
Depending on the physical storage parameters of the table, it is possible that adding the additional columns and populating them with data may force quite a bit of row migration to take place. That row migration will generate changes to the indexes on the table. If the index exists when you are populating the three new columns with data, it is possible that populating the data in X, Y, and Z will take a bit longer because of the additional index maintenance.
If you add columns without populating them, then it is pretty quick as it is just a metadata change. Adding an index does require the table to be read (or potentially another index) so that can be very time consuming and of much greater impact than the simple metadata change of recording the new index details.
If the new columns are going to be populated as part of the ALTER TABLE, it is a different matter.
The database may undergo an unplanned shutdown while that data is being added to every row of the table.
The server memory may not have room to record every row changed in that table.
Those row changes may therefore be written to the datafiles before the commit, i.e. written as dirty blocks.
The next read of those blocks, after the ALTER TABLE has successfully completed, will do a delayed block cleanout (i.e. record the fact that the change has been committed).
If you add the columns (with data) first, then the create index will (probably) read the table and do the added work of the delayed block cleanout.
If you create the index first then add the columns, the create index may be faster but the delayed block cleanout won't happen and that housekeeping will be picked up by the application later (potentially by the select * from T where C = whatever)
