I am facing a problem, please help.
I have written a trigger to insert data from a testing table into a main table.
My doubt is: when I insert data into the testing table, does the trigger lock the testing table while it is busy inserting into the main table?
The main thing to clarify is whether, when multiple inserts happen on the testing table at the same time, new records can still be inserted.
For example, each insert into the main table takes 2-3 seconds.
If more than two inserts arrive in a second, how does it work?
Thanks in advance.
Locking means that you have identified a use case where parallel work by several users leads to data inconsistency or other bad things, and you introduce locking to prevent it. Do you see such a use case in your task? If so, think about locking.
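For reference, here is a minimal sketch of the kind of trigger described, assuming a staging table TESTING and a target MAIN_TABLE with matching ID and VAL columns (all names made up). A row-level trigger runs inside the inserting session's own transaction, so Oracle does not lock the whole testing table while it fires; concurrent sessions can keep inserting, each one simply waits for its own trigger body (those 2-3 seconds) to complete.

create or replace trigger trg_testing_to_main
after insert on testing
for each row
begin
  -- copy the freshly inserted staging row into the main table
  insert into main_table (id, val)
  values (:new.id, :new.val);
end;
/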
I want to keep logs of all tables in one single log table. Whenever any DML operation happens on any table inside the DB, it should be logged in that one table.
But the trigger should be dynamic, so that the column names for every table are not hard-coded.
Is there any solution for this?
Regards,
Somdutt Harihar
"Is there any solution for this"
No. This is not how databases work. Strongly enforced data structures are what they do, and that applies to audit tables just as much as to transaction tables.
The reason is quite clear: the time you save not writing audit code specific to each transactional table is the time you will spend writing a query to retrieve the audit records. The difference is, when you're trying to get the audit records out you will have your boss standing over your shoulder demanding to know when you can tell them what happened to the payroll records last month. Or asking how long it will take you to produce that report for the regulators, are you trying to make the company look like a bunch of clowns? You get the picture. This is not where you want to be.
Also, the performance of a single table to store all the changes to all the tables in the database? That is going to be so slow, you have no idea.
The point is, we can generate the auditing code. It is easy to write some SQL which interrogates the data dictionary and produces DDL for the target tables and triggers to populate those tables.
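As a rough illustration of what "interrogating the data dictionary" means, something along these lines spools out one shadow-table DDL statement per transactional table (a sketch only; the audit columns and the _AUDIT naming convention are assumptions):

select 'create table ' || table_name || '_audit as '
    || 'select sysdate as audit_ts, user as audit_user, t.* '
    || 'from ' || table_name || ' t where 1 = 0;' as ddl
  from user_tables
 where table_name not like '%\_AUDIT' escape '\';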
In fact it gets even easier in 11.2.0.4 and later because we can use FLASHBACK DATA ARCHIVE (formerly Oracle Total Recall) to build and maintain such journalling functionality automatically, and query it automatically with the as of syntax. Find out more.
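If you go the FLASHBACK DATA ARCHIVE route, the setup is roughly this (the archive name, tablespace, retention period and EMP table are assumptions):

create flashback archive audit_fda tablespace users retention 1 year;
alter table emp flashback archive audit_fda;
-- later, query the history with the AS OF syntax
select * from emp as of timestamp (systimestamp - interval '1' day);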
Okay, so technically there is a solution. You could have a trigger on each table which executes some dynamic PL/SQL to interrogate the data dictionary and assembles a piece of JSON which you stuff into your single table. The single table could be partitioned by day range and sub-partitioned by table name (assuming you have licensed the Partitioning option) to mitigate the performance of querying it.
But that is extremely complex. Running dynamic PL/SQL for every DML statement will have a bad effect on performance, which the users will notice. And this still doesn't solve the fundamental problem of retrieving the audit trail when you need it.
To audit DML actions on any table, just enable auditing with the following statement:
audit insert table, update table, delete table;
All actions on the tables will then be recorded in the audit trail (the SYS.AUD$ table, assuming the AUDIT_TRAIL initialization parameter is set to DB) and can be queried through the DBA_AUDIT_OBJECT view.
The audit trail only records the timestamp, user, host and other parameters, not exact copies of the new or old rows.
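Once auditing is enabled, a query along these lines shows who did what and when (the EMP filter is just an example):

select username, obj_name, action_name, timestamp
  from dba_audit_object
 where obj_name = 'EMP'
 order by timestamp desc;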
Problem: I have a table to which a customer may add columns. This table might have hundreds of columns of varying data types depending on how insane the customer is. I need to deploy an AFTER UPDATE trigger against this table to insert a row in another table for each column value that has changed.
Example:
Table_A, Row 1: Key_Value=1, Col1=123, Col2="foo"...Coln="bar"
becomes
Table_B, Row 1: Key_Value=1, ColName="Col1", ColValue=123
Table_B, Row 2: Key_Value=1, ColName="Col2", ColValue="foo"
Table_B, Row 3: Key_Value=1, ColName="Coln", ColValue="bar"
Since I do not know what columns they may create and this trigger must be deployed with the application, I need to evaluate the OLD vs NEW pseudo records dynamically (if :new.columns[1] != :old.columns[1] then...) to see what has changed and log only the changed columns. The only examples I have been able to find require referencing the columns in the pseudo records explicitly (if :new.col1 != :old.col1 then...).
Question: Is there a way to do this in Oracle?
Caveats: No, this is not for auditing purposes, so I cannot use Oracle's built-in auditing. No, we are not going to rewrite our app because you know how to do it better; this is the way it needs to work, for better or worse.
Any helpful comments are welcome. All snarky DBA drivel is not. Thanks in advance.
No. You can't dynamically reference columns in the :new or :old pseudorecord.
The closest you're likely to come is to write code that dynamically generates the entire trigger body by querying the data dictionary and making static references to the columns in the pseudorecord. That code, however, would need to be re-run every time a column is added to or removed from the table. Normally that would happen as part of your release-management process. If people are adding and removing columns on this table without going through a release process, you could write a DDL trigger that submits a job via dbms_job to call the procedure that rebuilds the trigger. That is a lot of moving pieces and it will be a pain to troubleshoot when something inevitably goes wrong, but if you're not open to alternate ways of implementing the functionality, that's the complexity you'll have to live with.
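A skeleton of such a generator, assuming the Table_A / Table_B layout from the question (the trigger name, the TO_CHAR conversion and the 32K cap on the generated source are simplifications of my own). It loops over USER_TAB_COLUMNS and emits one static comparison per column:

declare
  l_sql varchar2(32767) :=
       'create or replace trigger trg_table_a_aud '
    || 'after update on table_a '
    || 'for each row '
    || 'begin ';
begin
  for c in (select column_name
              from user_tab_columns
             where table_name = 'TABLE_A'
               and column_name <> 'KEY_VALUE'
             order by column_id)
  loop
    -- one static comparison per column, with explicit NULL handling
    l_sql := l_sql
      || 'if :old.' || c.column_name || ' <> :new.' || c.column_name
      || ' or (:old.' || c.column_name || ' is null and :new.' || c.column_name || ' is not null)'
      || ' or (:old.' || c.column_name || ' is not null and :new.' || c.column_name || ' is null) then '
      || 'insert into table_b (key_value, colname, colvalue) values '
      || '(:new.key_value, ''' || c.column_name || ''', to_char(:new.' || c.column_name || ')); '
      || 'end if; ';
  end loop;
  l_sql := l_sql || 'end;';
  execute immediate l_sql;
end;
/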
I have two tables, tabA and tabB, that are identical.
I want to create a mechanism so that each time a new row is inserted into
tabA, the row is also automatically inserted into tabB. If rows are deleted
from tabA, nothing should happen in tabB.
I have used insert triggers for this but have had some problems, and I have also been
told that triggers should not be used for this.
So, what should I use? Materialized views demand that tabA and tabB stay identical.
In a perfect world one would strive to isolate table changes to a single program unit in order to uniformly apply business logic requirements such as this without impact to existing code that is driving the inserts. That said, many times this is not the best solution in practice, due to scattered inserts throughout an application. In that case, while not optimal, an insert trigger could easily be seen as the most pragmatic solution.
You could write a stored procedure that accepts the column values as parameters, and then applies an INSERT to both tabA and tabB.
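A minimal sketch of that approach, assuming tabA and tabB share just an ID and a NAME column (a real signature would list every column):

create or replace procedure ins_tab_a (p_id in number, p_name in varchar2)
as
begin
  -- single entry point: every caller inserts through this procedure,
  -- so both tables stay in step on insert; deletes are left untouched
  insert into taba (id, name) values (p_id, p_name);
  insert into tabb (id, name) values (p_id, p_name);
end ins_tab_a;
/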
I have a trigger that checks another couple of tables before allowing a row to be inserted. However between the time I check the other tables and insert the row the other tables may get updated.
How do I ensure the tables I'm checking remain in a consistent state until after the new row is inserted? I was thinking of taking locks out but everything I've read boils down to if you are not leaving locking to Oracle you're almost certainly doing it wrong.
Oracle is already doing much of this for you: by default a query sees the other tables as of the time the statement started (or, under SERIALIZABLE isolation, as of the time the transaction started). That won't stop the data from being changed underneath you, though; your query just won't see it changing. If you want to stop the data from being changed, you can use SELECT ... FOR UPDATE, as Justin Cave suggests.
I would seriously question what you are doing though, triggers, except in the most trivial cases, almost always lead to unexpected side effects.
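For completeness, the SELECT FOR UPDATE approach inside a row-level trigger would look roughly like this (PARENT_TAB, CHILD_TAB and the STATUS check are assumptions). It locks the parent row being checked until the inserting transaction commits or rolls back, so the check and the insert see a consistent state:

create or replace trigger trg_child_check
before insert on child_tab
for each row
declare
  l_status parent_tab.status%type;
begin
  -- lock the parent row so it cannot change under us until we commit
  select status
    into l_status
    from parent_tab
   where parent_id = :new.parent_id
     for update;

  if l_status <> 'OPEN' then
    raise_application_error(-20001, 'Parent is not open for new rows');
  end if;
end;
/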
I am writing a data conversion in PL/SQL that processes data and loads it into a table. According to the PL/SQL Profiler, one of the slowest parts of the conversion is the actual insert into the target table. The table has a single index.
To prepare the data for load, I populate a variable using the rowtype of the table, then insert it into the table like this:
insert into mytable values r_myRow;
It seems that I could gain performance by doing the following:
Turn logging off during the insert
Insert multiple records at once
Are these methods advisable? If so, what is the syntax?
It's much better to insert a few hundred rows at a time, using PL/SQL tables and FORALL to bind into the insert statement. For details on this see here; a short sketch follows below.
Also be careful with how you construct the PL/SQL tables. If at all possible, prefer to instead do all your transforms directly in SQL using "INSERT INTO t1 SELECT ..." as doing row-by-row operations in PL/SQL will still be slower than SQL.
In either case, you can also use direct-path inserts by using INSERT /*+APPEND*/, which basically bypasses the DB cache and directly allocates and writes new blocks to data files. This can also reduce the amount of logging, depending on how you use it. This also has some implications, so please read the fine manual first.
Finally, if you are truncating and rebuilding the table it may be worthwhile to first drop (or mark unusable) and later rebuild indexes.
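Here is the minimal shape of the FORALL pattern mentioned above (SOURCE_TABLE and the 500-row batch size are assumptions; MYTABLE is the target from the question):

declare
  cursor c_source is
    select * from source_table;          -- hypothetical staging source
  type t_rows is table of mytable%rowtype index by pls_integer;
  l_rows t_rows;
begin
  open c_source;
  loop
    -- fetch a few hundred rows at a time, then bind them in one round trip
    fetch c_source bulk collect into l_rows limit 500;
    exit when l_rows.count = 0;
    forall i in 1 .. l_rows.count
      insert into mytable values l_rows(i);
  end loop;
  close c_source;
  commit;
end;
/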
Regular insert statements are the slowest way to get data into a table and are not meant for bulk loads. The following article references a lot of different techniques for improving performance: http://www.dba-oracle.com/oracle_tips_data_load.htm
Drop the index, then insert the rows, then re-create the index.
If dropping the index doesn't speed things up enough, you need the Oracle SQL*Loader:
http://www.oracle.com/technology/products/database/utilities/htdocs/sql_loader_overview.html
Suppose you have the columns eid, ename, sal and job. First create the table:
SQL>create table tablename(eid number, ename varchar2(20),sal number,job char(10));
Now insert data:
SQL>insert into tablename values(&eid,'&ename',&sal,'&job');
Check this link
http://www.dba-oracle.com/t_optimize_insert_sql_performance.htm
The main points to consider for your case:
Use the APPEND hint, as this appends directly into the table above the high-water mark instead of hunting through the freelists. If you can afford to turn off logging, then use the APPEND hint together with NOLOGGING.
Use a bulk insert instead of iterating in PL/SQL.
Use SQL*Loader to load the data directly into the table if you are getting the data from a file feed.
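A hedged example of the APPEND / NOLOGGING combination mentioned above (the table names are placeholders; take a backup first, since NOLOGGING changes cannot be recovered from the redo logs):

alter table target_tab nologging;

insert /*+ append */ into target_tab
select * from staging_tab;

-- a commit is required before this session can read target_tab again after a direct-path insert
commit;

alter table target_tab logging;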
Here are my recommendations on fast insert.
Trigger - Disable any triggers associated with a table. Enable after Inserts are complete.
Index - Drop Index and re-create it after your Inserts are complete.
Stale stats - Re-analyze table and index stats.
Index de-fragmentation - Rebuild Index if needed
Use No Logging - Insert using INSERT /*+ APPEND */ (Oracle only). This approach is risky: with NOLOGGING no redo is generated for the insert, so the new data cannot be recovered from the redo logs after a media failure. Make a backup of the table before you start and don't try it on live tables. Check if your db has a similar option.
Parallel Insert - Running the insert in parallel will get the job done faster.
Use Bulk Insert
Constraints - Not much overhead during inserts, but still a good idea to check if it is still slow even after step 1.
You can learn more at http://www.dbarepublic.com/2014/04/slow-insert.html
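Pulling the relevant steps together into one hedged sketch (BIG_TAB, its index BIG_TAB_IX and the staging source are assumptions):

alter session enable parallel dml;

alter table big_tab disable all triggers;
alter index big_tab_ix unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append parallel(big_tab, 4) */ into big_tab
select * from staging_tab;
commit;

alter index big_tab_ix rebuild;
alter table big_tab enable all triggers;

begin
  dbms_stats.gather_table_stats(user, 'BIG_TAB', cascade => true);
end;
/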
Maybe one of your best options is actually to avoid Oracle's own utilities as much as possible.
I've been baffled by this myself, but very often a Java process can outperform many of Oracle's utilities, which either use OCI (read: SQL*Plus) or will take up so much of your time to get right (read: SQL*Loader).
This doesn't prevent you from using specific hints either (like /*+ APPEND */).
I've been pleasantly surprised each time I've turned to that kind of solution.
Cheers,
Rollo