A better approach than an Oracle trigger

We're supposed to update some columns in a table 'tab1' with values that can be picked up from a different table 'tab2'. New records are inserted into 'tab1' almost every few seconds (from MQ, by a different system).
We want to design a solution that will update 'tab1' as soon as a new record is added to it. It doesn't have to happen at the exact moment the record is added, but the sooner it's updated, the better. We were considering what the best way to do it might be:
1) First we thought of a 'before insert' trigger on tab1, so we could update the record - but that design was vetoed by our Architect, since the organization doesn't allow the use of database triggers (we don't know why, but that is a restriction we have been asked to live with).
2) Second, we thought we would create a stored procedure that performs the updates to records in 'tab1'. This stored procedure would be called within a long-running loop from a shell script. After every iteration there would be a pause of, let's say, 3 seconds, and then the next iteration would kick off and call the stored proc again. The job would run from 12 AM to 11:59 PM and be restarted every night.
My question is - is there a database-only solution to this? Any other solutions are also welcome, but simplicity of design will be a huge plus. One colleague was wondering if there is a 'trigger-like' solution that would perform the job within the database itself, so we don't have to write a shell script.
Any pointers will be appreciated!

Triggers: The obvious solution.
DBMS_SCHEDULER: Another obvious solution (see the minimal sketch after this list).
Continuous Query Notification: This would be a "trigger-like" solution. It's meant to notify an application when the results of a specific query would be different. But you can call PL/SQL instead of an application, and the query could be a simple select * from tab1; which would fire on any table changes. Normally I'd hope an architect would look at this solution and say, "a trigger would be a lot simpler".
DBMS_JOB: This is the old version of DBMS_SCHEDULER and is not as good. But it's different, and maybe it won't be caught as an unauthorized feature.
Ignore the Architect: The problem isn't that he disapproved of using triggers or jobs; there may be legitimate reasons to ban those technologies. The problem is that he rejected a sound idea without clearly articulating why it wasn't allowed. If he understood databases, or cared about your project, or acted like a professional, he would have said something like, "Oh, I'm sorry, I know that's the typical way to do this, but we don't allow it because of X, Y, Z."
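For instance, a minimal DBMS_SCHEDULER sketch, assuming a stored procedure (here called UPDATE_TAB1_FROM_TAB2, a placeholder name) already performs the update; the 3-second interval is purely illustrative:
-- placeholder names; requires the CREATE JOB privilege
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'SYNC_TAB1_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'UPDATE_TAB1_FROM_TAB2',
    repeat_interval => 'FREQ=SECONDLY; INTERVAL=3',
    enabled         => TRUE,
    comments        => 'Applies tab2 values to newly inserted tab1 rows');
END;
/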

To answer your questions:
Q: Is there a database only solution to this?
Unlikely, given all the limitations on your architecture.
Q: Any other solutions are also welcome
It seems your likely solution is to have your application handle what would normally be handled by a trigger or stored procedure. Just do it all in one transaction.

Related

PL/SQL Developer statements sometimes do not commit or "stick"

I apologize if this is too vague, but it is a random issue that occurs with many types of statements. Google and Stack Overflow searches have failed me. Here is what I am experiencing; I hope that someone out there has seen or at least heard of this happening and possibly knows of a solution.
From time to time, with no apparent rhyme or reason, statements that I run through PL/SQL Developer against our Oracle databases do not "stick". Last week I ran an update on table A, a commit for the update statement, then a truncate on table B and an insert to table B, followed by another commit. Everything seemed to work fine, as in I received no errors. I was, of course, able to query the changes and see that they were made. However, upon logging out and then back in, the changes had not been committed. Even the truncate command had not "stuck" - and truncates do not need a commit performed.
Some details that may be helpful: I am logging into the database server through PL/SQL on a shared account that is used by my team only to gain access to the schema (multiple schemas on each server, each schema has one shared login/PW). Of the 12 people on my team, I am the only one experiencing this issue. I have asked our database administration team to investigate my profile setup and have been told that my profile looks the same as my teammates' profiles. We are forced to go through Citrix to connect to our production database servers. I can only have one instance of PL/SQL open at any time through Citrix, so I typically have PL/SQL connected to several schemas, but I have never been running SQL on more than one schema simultaneously. I'm not even sure if that's possible, but I thought I would mention it. I typically have 3-4 windows open within PL/SQL, each connected to a different schema.
My manager was directly involved in a case where something similar to this happened. I ran four update commands, and committed each one in between; then he ran a select statement only to find that my updates had not actually committed.
I hope that one of my fellow Overflowers has seen or heard of this issue, or at least may be able to provide me with a direction to follow to attempt to get to the bottom of this.
"it has begun to reflect poorly on me and damage my reputation in the company."
What would really reflect poorly on you would be you believing that an Oracle RDBMS is a magical or random device, or, even worse, sentient and conducting a personal vendetta against you. Computers may seem vindictive but that is always us projecting onto them ;-)
The way to burnish your reputation would be through an informed investigation of the situation. Databases do not randomly lose transactions. So, what is going on?
Possible culprits:
Triggers: does table A have an UPDATE trigger which suppresses some of your SQL?
Synonyms: are tables A and B really the tables you think they are?
Ownership: are these tables in another schema which has row level security enabled (although that should throw an error message if you violate a policy)?
PL/SQL Developer configuration: is the IDE hiding error messages or are you not spotting them?
Object types: are tables A and B really tables? Could they be views with INSTEAD OF triggers suppressing some of your SQL?
Object types: or could A and B be materialized views and your session has QUERY_REWRITE_INTEGRITY=stale_tolerated?
If that last one seems a bit of a stretch, there are other similarly esoteric explanations, involving data flashback, pipelined functions and other malarkey. This is a category of explanation which indicates a colleague is pranking you.
How to proceed:
Try different tools. SQL*Plus (or the new SQL Command Line) may produce a different outcome. Rule out PL/SQL Developer.
Write some test cases. Strive to establish reproducible test cases: given a certain set-up this SQL statement always leads to a given outcome (SQL always sticks or always does not).
Eliminate bugs or "funnies" in the queries you use to check the results.
Use the data dictionary to understand the characteristics and associated objects of the troublesome tables. You need to understand what causes the different outcomes. What distinguishes a row where the UPDATE holds compared to one where it does not?
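For example, a few dictionary queries along those lines (TABLE_A is a placeholder for the real table name):
-- is there a trigger on it?
SELECT trigger_name, trigger_type, status
  FROM all_triggers
 WHERE table_name = 'TABLE_A';
-- is the name really a synonym pointing somewhere else?
SELECT owner, synonym_name, table_owner, table_name
  FROM all_synonyms
 WHERE synonym_name = 'TABLE_A';
-- is it actually a table, or a view/materialized view?
SELECT owner, object_name, object_type
  FROM all_objects
 WHERE object_name = 'TABLE_A';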
I have used PL/SQL Developer for over a decade and I have never known it silently undo successful truncate operations. If it can do that, Allround Automations should add it as a menu item. It seems more likely that you ran the commands against the wrong database connection.
I can feel your frustration, sorry you're going through this. I am surprised, however, that at a large company, your change control process is like this. I don't work for a large multi-national company, but any changes done to a production database are first approved by management and run by the DBAs (or in your case, your team). Every script that is run does a few things:
Lists the database instance information it's connecting to. For example:
select host_name, instance_name, version, startup_time from v$instance;
Spools the output to a file (the DBAs typically use sqlplus, but I'm sure PL/SQL Developer can do the same)
Shows the current date and time (in the beginning and end of the script)
The output file is saved to a change control server (the directory structure makes it easy to pull any changes for a given instance and/or given timeframe)
Exits on any errors:
WHENEVER SQLERROR EXIT SQL.SQLCODE
Any additional checks that need to be run post script (select counts, etc)
Shows each command that is being run (set echo on), including the commits!
All of this would allow you to not only verify that the script was run successfully, but would allow you to CYOA. Perhaps you can talk with your team about putting some of this in place in your own environment. Hope that helps.
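A minimal SQL*Plus skeleton along those lines (the file names and the post-change check are placeholders, not your actual change):
-- change_123.sql (placeholder name)
WHENEVER SQLERROR EXIT SQL.SQLCODE
SET ECHO ON
SPOOL change_123.log
SELECT host_name, instance_name, version, startup_time FROM v$instance;
SELECT SYSDATE FROM dual;
-- ... the actual change statements go here, each followed by an explicit COMMIT ...
-- post-change verification (example only)
SELECT COUNT(*) FROM some_table;
SELECT SYSDATE FROM dual;
SPOOL OFF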
I have no way of knowing if my issue is fixed or not, but here is what I've done:
1. I contacted our company's Citrix team to request that they give my team the ability to have several instances of PL/SQL open. This has been done and so will eliminate the need for one instance with multiple DB connections.
2. I contacted the DBA's and had them remove my old profile, then create a new one with a new username.
So far, all SQL I've run under these new conditions has been just fine. However, I have no way of recreating the issue I'm experiencing so I am just continuing on about my business and hoping for the best.
Should I find a few months from now that I have not experienced this issue again I will update this post in case anyone else experiences it.
Thank you all for the accusations of operator error (screenshots prove that this is not operator error but why should you believe me when my own co-workers have accused me of faking the screenshots) and for the moral support.

ORACLE - Which is better to generate a large result set of records: View, SP, or Function?

I have recently been working with an Oracle database to generate some reports. What I need is to get result sets of specific records (SELECT statements only), sometimes quite large, to be used for generating the reports in an Excel file.
At first, the reports were queried through views, but some of them are slow (they have some complex subqueries). I was asked to improve the performance and also fix some field mappings. I also want to tidy things up, because when I query against a view I must specifically call out the right column names. I want to keep the data work in the database, with the web app just passing parameters and calling for the right result set.
I'm new to Oracle, so which is better for this kind of task: an SP or a function? Or under what conditions might a view be better?
Makes no difference whether you compile your SQL in a view, SP or function. It is the SQL itself that matters.
As long as you are able to meet your requirements with the views, they are a good option. If you intend to break up your queries into multiple ones to achieve better performance, then you should go for stored procedures. If you decide to go for stored procedures, it is advisable to create a package and bundle all of them together in it. If your problem is performance, there may not be a silver-bullet solution; you will have to work on the queries and the design themselves.
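For instance, a minimal sketch of such a packaged report procedure returning a ref cursor for the web app to consume (all names and the query itself are placeholders):
CREATE OR REPLACE PACKAGE report_pkg AS
  -- returns the rows for one report; parameters and query are illustrative only
  PROCEDURE get_report (p_from IN DATE,
                        p_to   IN DATE,
                        p_rows OUT SYS_REFCURSOR);
END report_pkg;
/
CREATE OR REPLACE PACKAGE BODY report_pkg AS
  PROCEDURE get_report (p_from IN DATE,
                        p_to   IN DATE,
                        p_rows OUT SYS_REFCURSOR) IS
  BEGIN
    OPEN p_rows FOR
      SELECT t.col1, t.col2           -- replace with the real report columns
        FROM some_report_table t      -- placeholder table
       WHERE t.created_date BETWEEN p_from AND p_to;
  END get_report;
END report_pkg;
/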
If the problem is performance due to complex SELECT query (queries), you can consider tuning the queries. Often you will find queries written 15-20 years ago, which do not use functionality and techniques that were introduced by Oracle in more recent versions (even if the organization spent the big bucks to buy the more recent versions - making it into a waste of money). Honestly, that may be too much of a task for you if you are new at Oracle; also, some slow queries may have been written by people just like you, many years ago - before they had a chance to learn a lot about Oracle and have experience with it.
Another thing: if the reports don't need to use the absolute current state of the underlying tables (for example, if "what was in the tables at the end of the business day yesterday" is acceptable), you can create a materialized view. It will not run any faster than a regular view, but it can be refreshed overnight (say), or every six hours, or whatever - so the subsequent report processing will not have to wait for the queries to complete. This is one of the main uses of materialized views.
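A minimal sketch, with placeholder names and an illustrative nightly refresh at 02:00:
CREATE MATERIALIZED VIEW report_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE
  NEXT TRUNC(SYSDATE) + 1 + 2/24
AS
  SELECT col1, col2        -- put the existing slow report query here
    FROM some_big_table;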
Good luck!

How can I log changing triggers events?

My task is to make a trigger which will fire when our programmers create, alter, replace or drop triggers in the database. It must log their changes to 2 tables which I made similar to the SYS.TRIGGER$ table, with some extra info added about the user who made the changes. To keep it simple, I copied the principles of logging from the audit capability that already exists in an ERP system named Galaktika (or Galaxy). However, I encountered the well-known problem ORA-04089: no one can create triggers on system tables, and got stuck on it.
Now I'm looking for a way to gently modify my trigger according to database rules. Here is the original code:
CREATE OR REPLACE TRIGGER MRK_AlTrigger$
BEFORE DELETE OR INSERT OR UPDATE
ON SYS.TRIGGER$
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
DECLARE
Log_Rec MRK_TRIGGERS_LOG_HEADER.NREC%TYPE;
BEGIN
INSERT INTO MRK_TRIGGERS_LOG_HEADER (DATEOFCHANGE,
USERCODE,
OPERATION,
OBJ#)
VALUES (
SYSDATE,
UID,
CASE
WHEN INSERTING THEN 0
WHEN UPDATING THEN 1
WHEN DELETING THEN 2
END,
CASE
WHEN INSERTING OR UPDATING THEN :new.OBJ#
ELSE :old.OBJ#
END)
RETURNING NRec
INTO Log_Rec;
IF INSERTING OR UPDATING
THEN
INSERT INTO MRK_TRIGGERS_LOG_SPECIF (LOGLINK,
OBJ#,
TYPE#,
UPDATE$,
INSERT$,
DELETE$,
BASEOBJECT,
REFOLDNAME,
REFNEWNAME,
DEFINITION,
WHENCLAUSE,
ACTION#,
ACTIONSIZE,
ENABLED,
PROPERTY,
SYS_EVTS,
NTTRIGCOL,
NTTRIGATT,
REFPRTNAME,
ACTIONLINENO)
VALUES (Log_Rec,
:new.OBJ#,
:new.TYPE#,
:new.UPDATE$,
:new.INSERT$,
:new.DELETE$,
:new.BASEOBJECT,
:new.REFOLDNAME,
:new.REFNEWNAME,
:new.DEFINITION,
:new.WHENCLAUSE,
:new.ACTION#,
:new.ACTIONSIZE,
:new.ENABLED,
:new.PROPERTY,
:new.SYS_EVTS,
:new.NTTRIGCOL,
:new.NTTRIGATT,
:new.REFPRTNAME,
:new.ACTIONLINENO);
END IF;
EXCEPTION
WHEN OTHERS
THEN
-- Consider logging the error and then re-raise
RAISE;
END MRK_AlTrigger$;
/
I can also provide the MRK_TRIGGERS_LOG_HEADER and MRK_TRIGGERS_LOG_SPECIF DDL, but I think it is not necessary. To summarize, here are the questions I have:
How do I modify the above source to the syntax CREATE OR REPLACE TRIGGER ON DATABASE?
Am I reinventing the wheel by doing this? Is there any common way to do such things? (I noticed that some tables have a logging option, but I believe that is for debugging purposes.)
Any help will be appreciated!
UPD: I came to the decision (thanks to APC) that it is better to hold different versions of the code in source control and record only the revision number in the DB, but I dream of doing this automatically.
"We despaired to appeal to our programmers' neatness so my boss
requires that there must be strong and automatic way to log changes.
And to revert them quickly if we need."
In other words, you want a technical fix for what is a political problem. This does not work. However, if you have your boss's support you can sort it out. But it will get messy.
I have been on both sides of this fence, having worked as developer and development DBA. I know from bitter experience how bad it can be if the development database - schemas, configuration parameters, reference data, etc. - is not kept under control. Your developers will feel like they are flying right now, but I guarantee you they are not tracking all the changes they make in script form. So their changes are not reversible or repeatable, and when the project reaches UAT the deployment will most likely be a fiasco (buy me a beer and I'll tell you some stories).
So what to do?
Privileged access
Revoke access to SYSDBA accounts and application schema accounts from developers. Apart from anything else you may find parts of the application start to rely on privileged accesses and/or hard-coded passwords, and those are Bad Things; you don't want to include those breaches in Production.
As your developers have got accustomed to having such access this will be highly unpopular. Which is why you need your boss's support. You also must have a replacement approach in place, so leave this action until last. But make no mistake, this is the endgame.
Source control
Database schemas are software too. They are built out of programs, just like the rest of the application, only the source code is DDL and DML scripts not C# or Java. These scripts can be controlled in SVN as with any other source code.
How to organise it in source control? That can be tricky. So recognise that you have three categories of scripts:
Schema scripts which deploy objects
Configuration scripts which insert reference data, manage system parameters, etc
Build scripts which call the other scripts in the right order
Managing the schema scripts is the hardest thing to get right. I suggest you use separate scripts for each object. Also, have separate scripts for tables, indexes and constraints. This means you can build all the tables without needing to arrange them in dependency order.
Handling change
The temptation will be to just control a CREATE TABLE statement (or whatever). This is a mistake. In actuality changes to the schema are just as likely to add, drop or modify columns as to introduce totally new objects. Store a CREATE TABLE statement as a baseline, then manage subsequent changes as ALTER TABLE statements.
One file for CREATE TABLE and subsequent ALTER TABLE commands, or separate ones? I'm comfortable having one script: I don't mind if a CREATE TABLE statement fails when I'm expecting the table to already be there. But this can be confusing if others will be running the scripts in say Production. So have a baseline script then separate scripts for applying changes. One alter script per object per time-box is a good compromise.
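For example (table and file names are purely illustrative):
-- persons.sql : the baseline, kept in SVN
CREATE TABLE persons (
  person_id NUMBER        NOT NULL,
  surname   VARCHAR2(100) NOT NULL
);
-- persons_r42.sql : a later change script (with a mirrored script
-- dropping the column so the change can be reversed)
ALTER TABLE persons ADD (date_of_birth DATE);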
Changes from developers consist of
alter table script(s) to apply the change
a mirrored alter table script(s) to reverse the change
other scripts, e.g. DML
change reference number (which they will use in SVN)
Because you're introducing this late in the day, you'll need to be diplomatic. So make the change process light and easy to use. Also make sure you check and run the scripts as soon as possible. If you're responsive and do things quickly enough the developers won't chafe under the restricted access.
Getting to there
First of all you need to establish a baseline. Something like DBMS_METADATA will give you CREATE statements for all current objects. You need to organise them in SVN and write the build scripts. Create a toy database and get this right.
This may take some time, so remember to refresh the DDL scripts so they reflect the latest statement. If you have access to a schema comparison tool that would be very handy right now.
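A small sketch of pulling that baseline with DBMS_METADATA from SQL*Plus (the schema name APP_OWNER is a placeholder):
SET LONG 100000 PAGESIZE 0 LINESIZE 200
SELECT DBMS_METADATA.GET_DDL('TABLE', table_name, 'APP_OWNER')
  FROM all_tables
 WHERE owner = 'APP_OWNER';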
Next, sort out the configuration. Hopefully you already know which tables contain reference data; otherwise ask the developers.
In your toy database practice zapping the database and building it from scratch. You can use something like Ant or Hudson to automate this if you're feeling adventurous, but at the very least you need some shell scripts to get a build out of SVN.
Making the transition
This is the big one. Announce the new regime to the developers. Get your boss to attend the meeting. Remind the developers to inform you of any changes they make to the database.
That night:
Take a full export with Data Pump
Drop all the application schemas.
Build the application from SVN
Reload the data - but not the data structures - with Data Pump
Hopefully you won't have any structural issues; but if a developer has made changes without telling you, you'll know - and they won't have any data in the table.
Make sure you revoke the SYSDBA access as soon as possible.
The developers will need access to a set of schemas so they can write the ALTER scripts. If the developers don't have local personal databases or private schemas to test things in, I suggest you let them have access to that toy database to test change scripts. Alternatively you can let them keep the application owner access, because you'll be repeating the Trash'n'Rebuild exercise on a regular basis. Once they get used to the idea that they will lose any changes they don't tell you about, they will knuckle down and start Doing The Right Thing.
Last word
Obviously this is a lot of vague windbaggery, lacking in solid detail. But that's politics for you.
Postscript
I was at a UKOUG event yesterday, and attended a session by a couple of smart chaps from Red Gate. They have a product, Source Control for Oracle, which provides an interface between (say) SVN and the database. It takes a rather different approach from what I outlined above. But their approach is a sound one. Their tool automates a lot of things, and I think it might help you a lot in your current situation. I must stress that I haven't actually used this product, but I think you should check it out - there's a 28 day free trial. Of course, if you don't have any money to spend then this won't help you.
You can find the desired info in the following event attribute functions:
ora_dict_obj_type
ora_dict_obj_name
ora_dict_obj_owner
ora_sysevent
Here is a simple ON DATABASE trigger:
CREATE OR REPLACE TRIGGER trigger_name
  AFTER CREATE OR ALTER OR DROP ON DATABASE   -- ALTER added so trigger recompiles/enables are caught too
BEGIN
  -- only log DDL that targets triggers
  IF ora_dict_obj_type = 'TRIGGER'
  THEN
    INSERT INTO log_table (trg_name, trg_owner, trg_action)
    VALUES (ora_dict_obj_name, ora_dict_obj_owner, ora_sysevent);
  END IF;
END;
/
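The log_table it inserts into is not shown in the answer; a minimal sketch of such a table might be:
CREATE TABLE log_table (
  trg_name   VARCHAR2(128),
  trg_owner  VARCHAR2(128),
  trg_action VARCHAR2(30),
  logged_at  DATE DEFAULT SYSDATE   -- extra column; not referenced by the trigger
);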

Dropping a table partition avoiding the error ORA-00054

I need your opinion in this situation. I’ll try to explain the scenario. I have a Windows service that stores data in an Oracle database periodically. The table where this data is being stored is partitioned by date (Interval-Date Range Partitioning). The database also has a dbms_scheduler job that, among other operations, truncates and drops older partitions.
This approach has been working for some time, but recently I had an ORA-00054 error. After some investigation, the error was reproduced with the following steps:
Open one sqlplus session, disable auto-commit, and insert data in the partitioned table, without committing the changes;
Open another sqlplus session and truncate/drop an old partition (DDL operations are automatically committed, if I'm not mistaken). We will then get the ORA-00054 error.
There are some constraints worth mentioning:
I don't have DBA access to the database;
This is a legacy application and a complete refactoring isn't feasible.
So, in your opinion, is there any way of dropping these old partitions without the risk of running into an ORA-00054 error and without the intervention of the DBA? I can just delete the data, but the number of empty partitions will grow every day.
Many thanks in advance.
This error means somebody (or something) is working with the data in the partition you are trying to drop. That is, the lock is granted at the partition level. If nobody was using the partition your job could drop it.
Now you say this is a legacy app and you don't want to, or can't, refactor it. Fair enough. But there is clearly something not right if you have a process which is zapping data that some other process is using. I don't agree with @tbone's suggestion of just looping until the lock is released: you can't just get rid of data which somebody is using without establishing why they are still working with data that they apparently should not be using.
So, the first step is to find out what the locking session is doing. Why are they still amending this data your background job wants to retire? Here's a script which will help you establish which session has the lock.
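A query along those lines (the table name is a placeholder, and it needs access to the v$ views and DBA_OBJECTS, which brings us to the next point):
SELECT s.sid, s.serial#, s.username, s.osuser, s.program,
       o.object_name, o.subobject_name, lo.locked_mode
  FROM v$locked_object lo
  JOIN dba_objects     o ON o.object_id = lo.object_id
  JOIN v$session       s ON s.sid = lo.session_id
 WHERE o.object_name = 'MY_PARTITIONED_TABLE';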
Except that you "don't have DBA access to the database". Hmmm, that's a curly one. Basically this is not a problem which can be resolved without DBA access.
It seems like you have several issues to deal with. Unfortunately for you, they are political and architectural rather than technical, and there's not much we can do to help you further.
How about wrapping the truncate or drop in pl/sql that tries the operation in a loop, waiting x seconds between tries, for a max num of tries. Then use dbms_scheduler to call that procedure/function.
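A sketch of that idea (the table name, the 5-second wait and the 10-try limit are illustrative; DBMS_LOCK.SLEEP needs an EXECUTE grant on DBMS_LOCK):
CREATE OR REPLACE PROCEDURE drop_old_partition (p_partition IN VARCHAR2) IS
  e_resource_busy EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_resource_busy, -54);    -- ORA-00054
BEGIN
  FOR i IN 1 .. 10 LOOP                           -- max 10 attempts
    BEGIN
      EXECUTE IMMEDIATE
        'ALTER TABLE my_partitioned_table DROP PARTITION ' || p_partition;
      RETURN;                                     -- success
    EXCEPTION
      WHEN e_resource_busy THEN
        DBMS_LOCK.SLEEP(5);                       -- wait and retry
    END;
  END LOOP;
  RAISE_APPLICATION_ERROR(-20001,
    'Could not drop partition ' || p_partition || ' after 10 attempts');
END drop_old_partition;
/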
Maybe this can help. It seems to be the same issue as the one that you describe.
(ignore the comic sans, if you can) :)

Can I substitute savepoints for starting new transactions in Oracle?

Right now the process that we're using for inserting sets of records is something like this:
(and note that "set of records" means something like a person's record along with their addresses, phone numbers, or any other joined tables).
Start a transaction.
Insert a set of records that are related.
Commit if everything was successful, roll back otherwise.
Go back to step 1 for the next set of records.
Should we be doing something more like this?
Start a transaction at the beginning of the script
Start a save point for each set of records.
Insert a set of related records.
Roll back to the savepoint if there is an error, go on if everything is successful.
Commit the transaction at the end of the script.
After having some issues with ORA-01555 and reading a few Ask Tom articles (like this one), I'm thinking about trying out the second process. Of course, as Tom points out, starting a new transaction is something that should be defined by business needs. Is the second process worth trying out, or is it a bad idea?
A transaction should be a meaningful Unit Of Work. But what constitutes a Unit Of Work depends upon context. In an OLTP system a Unit Of Work would be a single Person, along with their address information, etc. But it sounds as if you are implementing some form of batch processing, which is loading lots of Persons.
If you are having problems with ORA-1555 it is almost certainly because you have a long-running query supplying data which is being updated by other transactions. Committing inside your loop contributes to the cyclical use of UNDO segments, and so will tend to increase the likelihood that the segments you are relying on to provide read consistency will have been reused. So, not doing that is probably a good idea.
Whether using SAVEPOINTs is the solution is a different matter. I'm not sure what advantage that would give you in your situation. As you are working with Oracle10g perhaps you should consider using bulk DML error logging instead.
Alternatively you might wish to rewrite the driving query so that it works with smaller chunks of data. Without knowing more about the specifics of your process I can't give specific advice. But in general, instead of opening one cursor for 10000 records it might be better to open it twenty times for 500 rows a pop. The other thing to consider is whether the insertion process can be made more efficient, say by using bulk collection and FORALL.
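For example, a sketch of the DML error-logging route (table names are placeholders; DBMS_ERRLOG creates the err$_ log table):
-- one-off setup: create the error log table for the target table
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'PERSONS');
END;
/
-- the load then keeps going past bad rows instead of failing the whole statement
INSERT INTO persons (person_id, surname)
SELECT src_id, src_surname
  FROM staging_persons
  LOG ERRORS INTO err$_persons ('nightly load') REJECT LIMIT UNLIMITED;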
Some thoughts...
Seems to me one of the points of the asktom link was to size your rollback/undo appropriately to avoid the 1555's. Is there some reason this is not possible? As he points out, it's far cheaper to buy disk than it is to write/maintain code to handle getting around rollback limitations (although I had to do a double-take after reading the $250 pricetag for a 36Gb drive - that thread started in 2002! Good illustration of Moore's Law!)
This link (Burleson) shows one possible issue with savepoints.
Is your transaction in actuality steps 2, 3, and 5 in your second scenario? If so, that's what I'd do - commit each transaction. It sounds a bit to me like scenario 1 is a collection of transactions rolled into one?
