I need to add functionality to the existing MIB and agent code to delete all the rows of a table (i.e. clear the contents of the table).
I know how to delete a single row (using the RowStatus "destroy" value), but how do I delete all the rows in one go?
As you already know, the deletion of a row is a side effect of changing the RowStatus column. You could define a special object next to the table which, when set, clears the table itself.
NOTE: one of the design rules of SNMP is to avoid redundancy. The SNMP manager can already delete the whole table content by deleting the rows one by one.
I need to update some tables in my application from warehouse tables which are refreshed weekly or biweekly, and I should update my tables based on those. My tables are referenced by foreign keys in other tables, so I cannot simply truncate them and reinsert the whole data set every time. Instead I have to take the delta and update accordingly, based on a few primary key columns which do not change. I need some input on how to implement this approach.
My approach:
Check the last updated time of those warehouse tables/views.
If it is more recent, compare each row between my table and the warehouse table based on the primary key.
Update each column that is different.
Do nothing if no column has changed.
Insert a new row if the record is new.
My Question:
How do I implement this? Is writing PL/SQL code a good and efficient way, given that the expected number of records is around 800K?
Please provide any sample code or links.
I would go for PL/SQL and the BULK COLLECT / FORALL method. You can use MINUS in your cursor in order to reduce the data size by calculating only the differences.
You can check this article for more information about BULK COLLECT, FORALL, and the SQL and PL/SQL engines: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html
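As a rough sketch of that pattern, it could look something like the block below. The table and column names (target_t, warehouse_t, pk_col, col1, col2) and the batch size of 10,000 are assumptions, not taken from the question:
DECLARE
  -- Delta = warehouse rows that are new or different from the target.
  CURSOR c_delta IS
    SELECT pk_col, col1, col2 FROM warehouse_t
    MINUS
    SELECT pk_col, col1, col2 FROM target_t;

  TYPE t_pk_tab IS TABLE OF warehouse_t.pk_col%TYPE;
  TYPE t_c1_tab IS TABLE OF warehouse_t.col1%TYPE;
  TYPE t_c2_tab IS TABLE OF warehouse_t.col2%TYPE;
  l_pk  t_pk_tab;  l_c1  t_c1_tab;  l_c2  t_c2_tab;
  l_npk t_pk_tab;  l_nc1 t_c1_tab;  l_nc2 t_c2_tab;  -- rows that turn out to be new
BEGIN
  OPEN c_delta;
  LOOP
    FETCH c_delta BULK COLLECT INTO l_pk, l_c1, l_c2 LIMIT 10000;
    EXIT WHEN l_pk.COUNT = 0;

    -- Try to update existing rows first.
    FORALL i IN 1 .. l_pk.COUNT
      UPDATE target_t
         SET col1 = l_c1(i), col2 = l_c2(i)
       WHERE pk_col = l_pk(i);

    -- Any delta row that updated nothing must be a brand-new row.
    l_npk := t_pk_tab(); l_nc1 := t_c1_tab(); l_nc2 := t_c2_tab();
    FOR i IN 1 .. l_pk.COUNT LOOP
      IF SQL%BULK_ROWCOUNT(i) = 0 THEN
        l_npk.EXTEND; l_nc1.EXTEND; l_nc2.EXTEND;
        l_npk(l_npk.COUNT) := l_pk(i);
        l_nc1(l_nc1.COUNT) := l_c1(i);
        l_nc2(l_nc2.COUNT) := l_c2(i);
      END IF;
    END LOOP;

    IF l_npk.COUNT > 0 THEN
      FORALL i IN 1 .. l_npk.COUNT
        INSERT INTO target_t (pk_col, col1, col2)
        VALUES (l_npk(i), l_nc1(i), l_nc2(i));
    END IF;

    COMMIT;  -- commit per batch; adjust to your transaction requirements
  END LOOP;
  CLOSE c_delta;
END;
/
The MINUS keeps the working set small because only changed or new rows come back, and the FORALL update/insert pair then applies them in bulk.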
There are many parts to your question above and I will answer as best I can:
While it is possible to disable the referencing foreign keys, truncate the table, repopulate it with the updated data, and then re-enable the foreign keys, given the requirements described above I don't believe truncating the table each time is optimal.
Yes, in principle PL/SQL is a good way to achieve this: the logic is too complex to deal with in native SQL alone, and PL/SQL is an efficient alternative.
Conceptually, the approach I would take is something like the following (a SQL sketch of the set-up appears after the list):
Initial set up:
Create a sequence called activity_seq.
Add an "activity_id" column of type NUMBER to your source tables, with a unique constraint.
Add a trigger to the source table/s setting activity_id = activity_seq.nextval for each insert / update of a table row.
Create some kind of master table to hold the "last processed activity id" value.
Then bi/weekly:
Retrieve the value of "last processed activity id" from the master table.
Select all rows in the source table/s having an activity_id value greater than the "last processed activity id" value.
Iterate through the selected source rows and update the target if a match is found based on whatever your match criterion is, or insert a new row into the target if no match is found (I assume there is no delete, as you do not mention it).
On completion, update the master table's "last processed activity id" to the greatest activity_id value of the source rows processed in the previous step.
(please note that, depending on your environment and the number of rows processed, the above process may need to be split and repeated over a number of transactions)
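A sketch of the initial set-up in SQL, using made-up names (src_table for a source table, etl_control for the master table); the bi/weekly job would then drive its select from the stored value:
CREATE SEQUENCE activity_seq;

-- Add the tracking column (a unique constraint can be added to it as well).
ALTER TABLE src_table ADD (activity_id NUMBER);

-- Stamp every inserted or updated row with the next activity value.
CREATE OR REPLACE TRIGGER src_table_activity_trg
  BEFORE INSERT OR UPDATE ON src_table
  FOR EACH ROW
BEGIN
  SELECT activity_seq.NEXTVAL INTO :NEW.activity_id FROM dual;
END;
/

-- Master table holding the "last processed activity id" per source table.
CREATE TABLE etl_control (
  table_name                  VARCHAR2(30) PRIMARY KEY,
  last_processed_activity_id  NUMBER
);

-- Bi/weekly: pick up only the rows touched since the last run.
SELECT s.*
FROM   src_table s
WHERE  s.activity_id > (SELECT last_processed_activity_id
                        FROM   etl_control
                        WHERE  table_name = 'SRC_TABLE');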
I hope this proves helpful
I need to update the primary key of a large Index Organized Table (20 million rows) on Oracle 11g.
Is it possible to do this using multiple UPDATE queries, i.e. many smaller UPDATEs of, say, 100,000 rows at a time? The problem is that one of these UPDATE batches could temporarily produce a duplicate primary key value (there would be no duplicates after all the UPDATEs have completed).
So I guess I'm asking: is it somehow possible to temporarily disable the primary key constraint (which is required for an IOT!) or alter the table temporarily in some other way? I can have exclusive and offline access to this table.
The only solution I can see is to create a new table and when complete, drop the original table and rename the new table to the original table name.
Am I missing another possibility?
You can't disable / drop the primary key constraint from an IOT, since it is a unique index by definition.
When I need to change an IOT like this, I do a CTAS (create table as select) into a new plain heap table, do my maintenance there, and then CTAS a new IOT.
Something like:
create table t_temp as select * from t_iot;
-- do the key maintenance on t_temp here
-- (the real statement needs a primary key and an ORGANIZATION INDEX
--  clause for t_new_iot to actually be an IOT)
create table t_new_iot as select * from t_temp;
If, however, you need to simply add or join a new field to the existing key, you can do this in one step by creating the new IOT structure, then populating directly from the old IOT with a query.
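For example, something along these lines, where all names are invented and the expression for the new key field is just a placeholder:
-- New IOT with the extended key.
CREATE TABLE t_iot_new (
  old_key   NUMBER,
  extra_key NUMBER,
  data_col  VARCHAR2(100),
  CONSTRAINT t_iot_new_pk PRIMARY KEY (old_key, extra_key)
)
ORGANIZATION INDEX;

-- Populate it directly from the old IOT in one pass.
INSERT INTO t_iot_new (old_key, extra_key, data_col)
SELECT old_key,
       0,          -- placeholder: however the new key field is derived
       data_col
FROM   t_iot;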
Unfortunately, this is one of the downsides to IOTs.
I would recommend the following method (a sketch in SQL follows the list):
Create a new IOT table partitioned by system with a single partition, with exactly the same structure as the current one.
Lock the current IOT table to prevent any DML.
Insert into the new table with a select from the current table, changing the PK values in the select. This step could be repeated several times if needed; in that case it is better to do it in another session so the lock on the original table is kept.
Exchange the partition of the new table with the original table.
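A rough sketch of those steps is below. It uses a single-partition, range-partitioned IOT as the staging table (index-organized tables can be partitioned by range, list, or hash), and the table names, columns, and new-key expression are all invented:
-- Step 1: staging IOT with a single partition and the same structure.
CREATE TABLE t_iot_staged (
  pk_col   NUMBER,
  data_col VARCHAR2(100),
  CONSTRAINT t_iot_staged_pk PRIMARY KEY (pk_col)
)
ORGANIZATION INDEX
PARTITION BY RANGE (pk_col) (PARTITION p1 VALUES LESS THAN (MAXVALUE));

-- Step 2: prevent DML against the original while the copy runs.
LOCK TABLE t_iot IN EXCLUSIVE MODE;

-- Step 3: copy the rows, rewriting the primary key values on the way in
-- (run from another session if the lock must stay held in this one).
INSERT INTO t_iot_staged (pk_col, data_col)
SELECT pk_col + 100000000, data_col
FROM   t_iot;
COMMIT;

-- Step 4: swap the populated partition with the original table's segment.
ALTER TABLE t_iot_staged
  EXCHANGE PARTITION p1 WITH TABLE t_iot;
After the exchange, the original table name holds the rewritten rows and the staging partition holds the old data, which can then be dropped.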
I made a login form for my Access database, but how do I prevent any user from deleting the last record?
For example: if there are two or more records in login_table, the user should be able to delete all the records except the last one.
There are many ways to do that:
1. Add a check in your application code, at the time of deleting records, that verifies whether only one record remains.
2. Create a trigger on the table which prevents the user from deleting the last record.
Probably the easiest way to accomplish this in Access 2013 would be to create a "Before Delete" data macro that looks like this:
If DCount("*","Table1") < 2 Then
    RaiseError
        Error Number: 1
        Error Description: You cannot delete the last remaining record in this table.
End If
To create this data macro, open the table in Design View, then on the "Design" tab of the ribbon click "Create Data Macros" and choose "Before Delete". (Remember to replace "Table1" with the actual name of your table.)
The previous record is saved in the table so it can be deleted. The record being entered is not actually saved in the table until the form is closed or an action is taken to enter another record.
On an entry form, I create a duplicate table with the same fields. The entry form places the data temporarily into the first table. I then create two queries: one to copy the data from the temporary table to the secondary table, and a second to clear the first table, making it ready for new data entry. Running the queries requires the record to be saved first, so I create a macro that performs the actions in sequence: 1. save the record, 2. copy the data to the second table, 3. clear the first table.
You will have better control over the data.
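As a rough illustration, the two queries could be something like the following Access SQL, where the entry table and field names are placeholders and login_table is the permanent table from the question:
INSERT INTO login_table (UserName, Pwd)
SELECT UserName, Pwd
FROM login_entry_table;

DELETE FROM login_entry_table;
The first statement is saved as the append query, the second as the delete query, and the macro runs them in that order after saving the record.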
In Oracle 10g, does it matter what order create index and alter table comes in?
Say I have a query Q with a WHERE clause on column C in table T. Now I perform one of the following scenarios:
1. Create index I(C) and then add columns X, Y, Z.
2. Add columns X, Y, Z and then create index I(C).
Q is 'select * from T where C = whatever'
Between 1 and 2 will there be a significant difference in performance of Q on table T when T contains a very large number of rows?
I personally make it a practice to do #2 but others seem to have a different opinion.
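For concreteness, the two orderings being compared would look something like this (the datatypes for X, Y and Z are assumed, since the question does not give them):
-- Scenario 1: create the index first, then add the columns.
CREATE INDEX i ON t (c);
ALTER TABLE t ADD (x NUMBER, y NUMBER, z NUMBER);

-- Scenario 2: add the columns first, then create the index.
ALTER TABLE t ADD (x NUMBER, y NUMBER, z NUMBER);
CREATE INDEX i ON t (c);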
thanks
It makes no difference if you add columns to a table before or after creating an index. The optimizer should pick the same plan for the query and the execution time should be unchanged.
Depending on the physical storage parameters of the table, it is possible that adding the additional columns and populating them with data may force quite a bit of row migration to take place. That row migration will generate changes to the indexes on the table. If the index exists when you are populating the three new columns with data, it is possible that populating the data in X, Y, and Z will take a bit longer because of the additional index maintenance.
If you add columns without populating them, then it is pretty quick as it is just a metadata change. Adding an index does require the table to be read (or potentially another index) so that can be very time consuming and of much greater impact than the simple metadata change of recording the new index details.
If the new columns are going to be populated as part of the ALTER TABLE, it is a different matter.
The database may undergo an unplanned shutdown during the course of adding that data to every row of the table.
The server memory may not have room to record every row changed in that table.
So those row changes may be written to the datafiles before the commit, i.e. written as dirty blocks.
The next read of those blocks after the ALTER TABLE has successfully completed will do a delayed block cleanout (i.e. record the fact that the change has been committed).
If you add the columns (with data) first, then the create index will (probably) read the table and do the added work of the delayed block cleanout.
If you create the index first then add the columns, the create index may be faster but the delayed block cleanout won't happen and that housekeeping will be picked up by the application later (potentially by the select * from T where C = whatever)
I have some large tables (millions of rows). I constantly receive files containing new rows to add into those tables, up to 50 million rows per day. Around 0.1% of the rows I receive are duplicates of rows I have already loaded (or are duplicates within the files). I would like to prevent those rows from being loaded into the table.
I currently use SQL*Loader in order to have sufficient performance to cope with my large data volume. If I take the obvious step and add a unique index on the columns which govern whether or not a row is a duplicate, SQL*Loader will start to fail the entire file which contains the duplicate row, whereas I only want to prevent the duplicate row itself from being loaded.
I know that in SQL Server and Sybase I can create a unique index with the 'Ignore Duplicates' property and that if I then use BCP the duplicate rows (as defined by that index) will simply not be loaded.
Is there some way to achieve the same effect in Oracle?
I do not want to remove the duplicate rows once they have been loaded - it's important to me that they should never be loaded in the first place.
What do you mean by "duplicate"? If you have a column (or set of columns) which defines a unique row, you should set up a unique constraint against it; Oracle will enforce that constraint with a unique index.
EDIT:
Yes, as commented below, you should set up a "bad" file for SQL*Loader to capture invalid rows. But I think that establishing the unique constraint is probably a good idea from a data-integrity standpoint.
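For example, something along these lines, where the constraint and column names are placeholders; with a conventional-path load and a high enough ERRORS setting, rows that violate it are rejected into the bad file while the rest of the file loads:
ALTER TABLE target_table
  ADD CONSTRAINT target_table_uk UNIQUE (key_col1, key_col2);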
Use the Oracle MERGE statement. Some explanations are here.
You didn't mention which release of Oracle you have. Have a look there for the MERGE command.
Basically, like this:
-- Merge all rows from the staging table temp_emp_rec in one statement
MERGE INTO hr.employees e
USING temp_emp_rec t
ON (e.emp_id = t.emp_id)
WHEN MATCHED THEN
  -- update the existing row
  UPDATE
     SET first_name = t.first_name,
         last_name  = t.last_name
WHEN NOT MATCHED THEN
  -- insert a new row
  INSERT (emp_id, first_name, last_name)
  VALUES (t.emp_id, t.first_name, t.last_name);
I would use integrity constraints defined on the appropriate table columns.
This page from the Oracle Concepts manual gives an overview; if you scroll down you will also see what types of constraints are available.
Use the option below; SQL*Loader will only terminate after hitting 9,999,999 errors, so individual duplicate rows are rejected without aborting the whole load:
OPTIONS (ERRORS=9999999, DIRECT=FALSE)
LOAD DATA
The duplicate records will end up in the bad file.
sqlldr user/password@schema CONTROL=file.ctl, LOG=file.log, BAD=file.bad
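For context, a complete control file using those options might look roughly like this, with the file, table, and column names as placeholders:
OPTIONS (ERRORS=9999999, DIRECT=FALSE)
LOAD DATA
INFILE 'incoming.dat'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY ','
(key_col1, key_col2, payload_col)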