Oracle: roll back changes through several transactions

Is it possible to track changes made in a session's transactions? I need some way to track all changes that are made in my session. This is necessary for testing purposes: after a test is finished I need to remove all changes made during that test, so I can run the test again against an unchanged database.

You have several options to deal with this situation - since you don't provide much detail I can only give some general pointers:
temporary tables (session-specific versus global; you can decide whether rows are preserved or automatically thrown away) - see http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tables003.htm (a sketch follows this list)
Flashback - this can roll back the whole DB to a specific point in time and thus reverse all changes across several transactions - see http://www.oracle.com/technetwork/database/features/availability/flashback-overview-082751.html
create "prepare" scripts for your test scenarios which reset the DB to a known state before every test

There are many things you can do with Oracle as an administrator, especially if your test database is on a filesystem that supports snapshots.
However, if you're looking at this from a unit test perspective purely as a developer, the safest/cleanest way to handle something like this is to do the following (a sketch follows the list):
truncate the tables involved in the test
load the fixture/test/known state data
run your tests
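A minimal sketch of that sequence might look like this (the table names and fixture rows are invented; if foreign keys reference a table, disable them or truncate in dependency order first):
TRUNCATE TABLE test_orders;
TRUNCATE TABLE test_customers;
-- load the fixture / known-state data
INSERT INTO test_customers (id, name) VALUES (1, 'Fixture customer');
INSERT INTO test_orders (id, customer_id, amount) VALUES (10, 1, 99.95);
COMMIT;
-- now run the tests against this known state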

Related

Oracle database as a single synchronization point for two separate web applications

I am considering using an Oracle database to synchronize concurrent operations from two or more web applications on separate servers. The database is the single infrastructure element in common for those applications.
There is a good chance that two or more applications will attempt to perform the same operation at the exact same moment (cron invoked). I want to use the database to let one application decide that it will be the one which will do the work, and that the others will not do it at all.
The general idea is to perform a somehow-atomic select/insert, visible to all connections, with the node's ID. Only the node whose ID matches the first inserted node ID returned by the select would do the work.
It was suggested to me that a MERGE statement could be of use here. However, after doing some research, I found a discussion which states that the merge statement is not designed to be called concurrently.
Another option is to lock a table. By definition, only one node will be able to take the lock and do the insert, then the select. After the lock is released, the other instances will see the inserted value and will not perform the work.
What other solutions would you consider? I frown on workarounds with random delays, or even using Oracle exceptions to notify a node that it should not do the work. I'd prefer a clean solution.
I ended up going with SELECT FOR UPDATE. It works as intended. It is important to remember to commit the transaction as soon as the needed update is made, so that other nodes don't hang waiting for the value.
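For illustration, a rough sketch of that pattern could look like this, assuming a hypothetical coordination table JOB_LOCK(job_name, last_run, run_by) with one row per scheduled job:
DECLARE
  v_last_run job_lock.last_run%TYPE;
BEGIN
  -- the first node to get here holds the row lock; the others block
  SELECT last_run
    INTO v_last_run
    FROM job_lock
   WHERE job_name = 'NIGHTLY_SYNC'
     FOR UPDATE;

  IF v_last_run IS NULL OR v_last_run < TRUNC(SYSDATE) THEN
    UPDATE job_lock
       SET last_run = SYSDATE,
           run_by   = SYS_CONTEXT('USERENV', 'HOST')
     WHERE job_name = 'NIGHTLY_SYNC';
    COMMIT;   -- release the lock quickly so the other nodes don't hang
    -- ... this node performs the actual work ...
  ELSE
    COMMIT;   -- another node already claimed this run; do nothing
  END IF;
END;
/
The blocked nodes wake up after the COMMIT, see that last_run has already been advanced, and skip the work.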

How can I log changing triggers events?

My task is to create a trigger which fires when our programmers create, alter, replace or drop triggers in the database. It must log their changes to two tables which I modelled on the SYS.TRIGGER$ table, with some extra information about the user who made the change. To keep things simple, I copied the logging approach from the existing audit capability in the ERP system named Galaktika (Galaxy). However, I ran into the well-known problem ORA-04089 (no one can create triggers on system tables) and got stuck.
Now I'm looking for a way to gently modify my trigger according to database rules. Here is the original code:
CREATE OR REPLACE TRIGGER MRK_AlTrigger$
BEFORE DELETE OR INSERT OR UPDATE
ON SYS.TRIGGER$
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
DECLARE
Log_Rec MRK_TRIGGERS_LOG_HEADER.NREC%TYPE;
BEGIN
INSERT INTO MRK_TRIGGERS_LOG_HEADER (DATEOFCHANGE,
USERCODE,
OPERATION,
OBJ#)
VALUES (
SYSDATE,
UID,
CASE
WHEN INSERTING THEN 0
WHEN UPDATING THEN 1
WHEN DELETING THEN 2
END,
CASE
WHEN INSERTING OR UPDATING THEN :new.OBJ#
ELSE :old.OBJ#
END)
RETURNING NRec
INTO Log_Rec;
IF INSERTING OR UPDATING
THEN
INSERT INTO MRK_TRIGGERS_LOG_SPECIF (LOGLINK,
OBJ#,
TYPE#,
UPDATE$,
INSERT$,
DELETE$,
BASEOBJECT,
REFOLDNAME,
REFNEWNAME,
DEFINITION,
WHENCLAUSE,
ACTION#,
ACTIONSIZE,
ENABLED,
PROPERTY,
SYS_EVTS,
NTTRIGCOL,
NTTRIGATT,
REFPRTNAME,
ACTIONLINENO)
VALUES (Log_Rec,
:new.OBJ#,
:new.TYPE#,
:new.UPDATE$,
:new.INSERT$,
:new.DELETE$,
:new.BASEOBJECT,
:new.REFOLDNAME,
:new.REFNEWNAME,
:new.DEFINITION,
:new.WHENCLAUSE,
:new.ACTION#,
:new.ACTIONSIZE,
:new.ENABLED,
:new.PROPERTY,
:new.SYS_EVTS,
:new.NTTRIGCOL,
:new.NTTRIGATT,
:new.REFPRTNAME,
:new.ACTIONLINENO);
END IF;
EXCEPTION
WHEN OTHERS
THEN
-- Consider logging the error and then re-raise
RAISE;
END MRK_AlTrigger$;
/
I can also provide the MRK_TRIGGERS_LOG_HEADER and MRK_TRIGGERS_LOG_SPECIF DDL, but I don't think it is necessary. To summarize, here are my questions:
How do I modify the above source to use the CREATE OR REPLACE TRIGGER ... ON DATABASE syntax?
Am I reinventing the wheel by doing this? Is there a common way to do such things? (I noticed that some tables have a logging option, but I assume that is for debugging purposes.)
Any help will be appreciated!
UPD: I came to the decision (thanks to APC) that it is better to keep different versions of the code in source control and record only the revision number in the DB, though I still dream about doing this automatically.
"We despaired to appeal to our programmers' neatness so my boss
requires that there must be strong and automatic way to log changes.
And to revert them quickly if we need."
In other words, you want a technical fix for what is a political problem. This does not work. However, if you have your boss's support you can sort it out. But it will get messy.
I have been on both sides of this fence, having worked as a developer and as a development DBA. I know from bitter experience how bad it can be if the development database - schemas, configuration parameters, reference data, etc. - is not kept under control. Your developers will feel like they are flying right now, but I guarantee you they are not tracking all the changes they make in script form. So their changes are not reversible or repeatable, and when the project reaches UAT the deployment will most likely be a fiasco (buy me a beer and I'll tell you some stories).
So what to do?
Privileged access
Revoke access to SYSDBA accounts and application schema accounts from developers. Apart from anything else, you may find parts of the application have started to rely on privileged access and/or hard-coded passwords, and those are Bad Things; you don't want to carry those breaches into Production.
As your developers have got accustomed to having such access this will be highly unpopular. Which is why you need your boss's support. You also must have a replacement approach in place, so leave this action until last. But make no mistake, this is the endgame.
Source control
Database schemas are software too. They are built out of programs, just like the rest of the application, only the source code is DDL and DML scripts not C# or Java. These scripts can be controlled in SVN as with any other source code.
How to organise it in source control? That can be tricky, so recognise that you have three categories of scripts:
Schema scripts which deploy objects
Configuration scripts which insert reference data, manage system parameters, etc
Build scripts which call the other scripts in the right order
Managing the schema scripts is the hardest thing to get right. I suggest you use a separate script for each object. Also, have separate scripts for tables, indexes and constraints. This means you can build all the tables without needing to arrange them in dependency order.
Handling change
The temptation will be to just keep a CREATE TABLE statement under control (or whatever the object is). This is a mistake. In practice, changes to the schema are just as likely to add, drop or modify columns as to introduce totally new objects. Store a CREATE TABLE statement as a baseline, then manage subsequent changes as ALTER TABLE statements.
One file for the CREATE TABLE and subsequent ALTER TABLE commands, or separate ones? I'm comfortable having one script: I don't mind if a CREATE TABLE statement fails when I'm expecting the table to already be there. But this can be confusing if others will be running the scripts in, say, Production. So have a baseline script, then separate scripts for applying changes. One alter script per object per time-box is a good compromise.
Changes from developers consist of:
alter table script(s) to apply the change
mirrored alter table script(s) to reverse the change (see the sketch after this list)
other scripts, e.g. DML
a change reference number (which they will use in SVN)
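For illustration, such a pair of change scripts might look like this (the table, column and change number are invented):
-- chg_0123_apply.sql
ALTER TABLE customers ADD (loyalty_tier VARCHAR2(10));

-- chg_0123_revert.sql
ALTER TABLE customers DROP COLUMN loyalty_tier;
Keeping the apply and revert scripts under the same change reference makes it trivial to back a change out if it turns out to be wrong.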
Because you're introducing this late in the day, you'll need to be diplomatic. So make the change process light and easy to use. Also make sure you check and run the scripts as soon as possible. If you're responsive and do things quickly enough the developers won't chafe under the restricted access.
Getting to there
First of all you need to establish a baseline. Something like DBMS_METADATA will give you CREATE statements for all current objects. You need to organise them in SVN and write the build scripts. Create a toy database and get this right.
This may take some time, so remember to refresh the DDL scripts so they reflect the latest state of each object. If you have access to a schema comparison tool, that would be very handy right now.
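For example, a rough sketch of pulling the baseline DDL with DBMS_METADATA might look like this (APP_OWNER is a placeholder schema; you would spool the output into per-object files for SVN):
-- in SQL*Plus, make sure the full CLOB is returned
SET LONG 1000000

SELECT DBMS_METADATA.GET_DDL(object_type, object_name, owner)
  FROM all_objects
 WHERE owner = 'APP_OWNER'
   AND object_type IN ('TABLE', 'SEQUENCE', 'PACKAGE');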
Next, sort out the configuration. Hopefully you already know which tables contain reference data; otherwise, ask the developers.
In your toy database practice zapping the database and building it from scratch. You can use something like Ant or Hudson to automate this if you're feeling adventurous, but at the very least you need some shell scripts to get a build out of SVN.
Making the transition
This is the big one. Announce the new regime to the developers. Get your boss to attend the meeting. Remind the developers to inform you of any changes they make to the database.
That night:
Take a full export with Data Pump
Drop all the application schemas.
Build the application from SVN
Reload the data - but not the data structures - with Data Pump
Hopefully you won't have any structural issues; but if a developer has made changes without telling you, you'll know - and they won't have any data in their table.
Make sure you revoke the SYSDBA access as soon as possible.
The developers will need access to a set of schemas so they can write the ALTER scripts. If the developers don't have local personal databases or private schemas to test things in, I suggest you let them use that toy database to test their change scripts. Alternatively you can let them keep the application owner access, because you'll be repeating the Trash'n'Rebuild exercise on a regular basis. Once they get used to the idea that they will lose any changes they don't tell you about, they will knuckle down and start Doing The Right Thing.
Last word
Obviously this is a lot of vague windbaggery, lacking in solid detail. But that's politics for you.
Postscript
I was at a UKOUG event yesterday, and attended a session by a couple of smart chaps from Redgate. They have a product, Source Control for Oracle, which provides an interface between (say) SVN and the database. It takes a rather different approach from what I outlined above, but their approach is a sound one. Their tool automates a lot of things, and I think it might help you a lot in your current situation. I must stress that I haven't actually used this product, but I think you should check it out - there's a 28-day free trial. Of course, if you don't have any money to spend then this won't help you.
You can find the desired information in the following trigger attributes:
dictionary_obj_name
dictionary_obj_owner
ora_sysevent
Here is a simple ON DATABASE trigger:
CREATE OR REPLACE TRIGGER trigger_name
AFTER CREATE OR DROP ON DATABASE
BEGIN
IF dictionary_obj_type = 'TRIGGER'
THEN
INSERT INTO log_table (trg_name, trg_owner, trg_action)
VALUES (dictionary_obj_name, dictionary_obj_owner, ora_sysevent);
END IF;
END;
/

Difference between truncation, transaction and deletion database strategies

What is the difference between the truncation, transaction and deletion database cleaning strategies when using RSpec? I can't find any resources explaining this. I read the Database Cleaner README but it doesn't explain what each of these does.
Why do we have to use the truncation strategy with Capybara? Do I have to clean up my database when testing, or can I disable the clean-up? I don't understand why I should clean up my database after each test case; wouldn't it just slow down testing?
The database cleaning strategies refer to database terminology. I.e. those terms come from the (SQL) database world, so people generally familiar with database terminology will know what they mean.
The examples below refer to SQL definitions. DatabaseCleaner however supports other non-SQL types of databases too, but generally the definitions will be the same or similar.
Deletion
This means the database tables are cleaned using the SQL DELETE FROM statement. This is usually slower than truncation, but may have other advantages.
Truncation
This means the database tables are cleaned using the TRUNCATE TABLE statement. This will simply empty the table immediately, without deleting the table structure itself or deleting records individually.
Transaction
This means using BEGIN TRANSACTION statements coupled with ROLLBACK to roll back a sequence of previous database operations. Think of it as an "undo button" for databases. I would think this is the most frequently used cleaning method, and it is probably the fastest since the changes never need to be committed to the DB.
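In plain SQL terms the idea is roughly this (the users table is just an example; some databases, Oracle included, start the transaction implicitly with the first statement):
BEGIN TRANSACTION;
INSERT INTO users (id, name) VALUES (1, 'test user');
UPDATE users SET name = 'renamed' WHERE id = 1;
ROLLBACK;   -- the "undo button": both changes above disappear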
Example discussion: Rspec, Cucumber: best speed database clean strategy
Reason for truncation strategy with Capybara
The best explanation was found in the Capybara docs themselves:
# Transactional fixtures do not work with Selenium tests, because Capybara
# uses a separate server thread, which the transactions would be hidden
# from. We hence use DatabaseCleaner to truncate our test database.
Cleaning requirements
You do not necessarily have to clean your database after each test case. However, you need to be aware of the side effects this could have, i.e. if you create, modify or delete some records in one step, will the other steps be affected by this?
Normally RSpec runs with transactional fixtures turned on, so you will never notice this when running RSpec - it will simply keep the database automatically clean for you:
https://www.relishapp.com/rspec/rspec-rails/v/2-10/docs/transactions

How to achieve test isolation testing Oracle PL/SQL?

In Java projects, JUnit tests do a setup, test, teardown. Even when mocking out a real DB using an in-memory DB, you usually roll back the transaction or drop the DB from memory and recreate it between each test. This gives you test isolation, since one test does not leave artifacts in the environment that could affect the next test. Each test starts out in a known state and cannot bleed over into another one.
Now I've got an Oracle DB build that creates 1100 tables and 400K of code - a lot of PL/SQL packages. I'd like to not only test the DB install (full - create from scratch; partial - upgrade from a previous DB; etc.) and make sure all the tables and other objects are in the state I expect after the install, but ALSO run tests on the PL/SQL (I'm not sure how I'd do the former exactly - suggestions?).
I'd like this all to run from Jenkins for CI so that development errors are caught via regression testing.
Firstly, I have to use an enterprise version instead of XE because XE doesn't support Java stored procedures, and there is a dependency on Oracle Web Flow. Even if I eliminate those dependencies, the build typically takes 1.5 hours just to load (full build).
So how do you achieve test isolation in this environment? Use transactions for each test and roll them back? OK, but what about those PL/SQL procedures that have commits in them?
I thought about backup and recovery to reset the DB after each test, or recreating the entire DB between tests (too drastic). Both are impractical since it takes over an hour to install. Doing so for each test is overkill and insane.
Is there a way to draw a line in the sand in the DB schema(s) and then roll it back to that point in time? Sort of like a big 'undo' feature. Something besides expdp/impdp or RMAN. Perhaps the whole approach is off. Suggestions? How have others done this?
For CI, or for a small production upgrade window, the whole test suite has to run within a reasonable time (30 minutes would be ideal).
Are there products that might help achieve this 'undo' ability?
Kevin McCormack published an article on The Server Labs Blog about continuous integration testing for PL/SQL using Maven and Hudson. Check it out. The key ingredient for the testing component is Steven Feuerstein's utPLSQL framework, which is an implementation of JUnit's concepts in PL/SQL.
The need to reset our test fixtures is one of the big issues with PL/SQL testing. One thing which helps is to observe good practice and avoid commits in stored procedures: transactional control should be restricted to the outermost parts of the call stack. For those programs which simply must issue commits (perhaps implicitly because they execute DDL), there is always a test fixture which issues DELETE statements. Handling relational integrity makes those quite tricky to code.
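Such a teardown fixture is typically just a block of DELETEs issued child-first so the foreign keys are satisfied; a rough sketch with invented table names:
BEGIN
  DELETE FROM order_lines
   WHERE order_id IN (SELECT id FROM orders WHERE created_by = 'UTPLSQL');
  DELETE FROM orders    WHERE created_by = 'UTPLSQL';
  DELETE FROM customers WHERE created_by = 'UTPLSQL';
  COMMIT;
END;
/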
An alternative approach is to use Data Pump. You appear to rule out impdp, but Oracle also provides a PL/SQL API for it, DBMS_DATAPUMP. I suggest it here because it provides the ability to trash any existing data prior to running an import. So we can have an exported data set as our test fixture; executing a SetUp is then a matter of running a Data Pump job. You don't need to do anything in the TearDown, because that tidying up happens at the start of the SetUp.
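A rough sketch of such a SetUp via DBMS_DATAPUMP follows; the directory object, dump file and schema name are placeholders, and the dump file is assumed to have been created beforehand with a schema-mode export:
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(h, 'fixture.dmp', 'DP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''APP_OWNER'')');
  -- throw away whatever is in the tables and reload the fixture rows
  DBMS_DATAPUMP.SET_PARAMETER(h, 'TABLE_EXISTS_ACTION', 'TRUNCATE');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/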
In Oracle you can use Flashback technology to restore the database to a point back in time.
http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmflash.htm
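A rough sketch of how a test run can be bracketed with a guaranteed restore point (assuming archivelog mode and a flash recovery area are configured; the flashback itself needs the database in MOUNT mode as SYSDBA):
-- before the test run
CREATE RESTORE POINT before_tests GUARANTEE FLASHBACK DATABASE;

-- after the test run
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_tests;
ALTER DATABASE OPEN RESETLOGS;
DROP RESTORE POINT before_tests;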
1.5 hours seems like a very long time for 1100 tables and 400K of code. I obviously don't know the details of your environment, but based on my experience I bet you can shrink that to 5 to 10 minutes. Here are the two main installation script problems I've seen with Oracle:
1. Operations are broken into tiny pieces
The more steps you have the more overhead there will be. For example, you want to consolidate code like this as much as possible:
Replace:
create table x(a number, b number, c number);
alter table x modify a not null;
alter table x modify b not null;
alter table x modify c not null;
With:
create table x(a number not null, b number not null, c number not null);
Replace:
insert into x values (1,2,3);
insert into x values (4,5,6);
insert into x values (7,8,9);
With:
insert into x
select 1,2,3 from dual union all
select 4,5,6 from dual union all
select 7,8,9 from dual;
This is especially true if you run your script and your database in different locations. That tiny network lag starts to matter when you multiply it by 10,000. Every Oracle SQL tool I know of will send one command at a time.
2. Developers have to share a database
This is more of a long-term process solution than a technical fix, but you have to start sometime. Most places that use Oracle only have it installed on a few servers. Then it becomes a scarce resource that must be carefully managed. People fight over it, roles are unclear, and things don't get fixed.
If that's your environment, stop the madness and install Oracle on every laptop right now. Spend a few hundred dollars and give everyone Personal Edition (which has the same features as Enterprise Edition). Give everyone the tools they need and continuous improvement will eventually fix your problems.
Also, for a schema "undo", you may want to look into transportable tablespaces. I've never used them, but supposedly they are a much faster way of installing a system - just copy and paste files instead of importing. Similarly, perhaps some type of virtualization can help - create a snapshot of the OS and database.
Although Oracle Flashback is an Enterprise Edition feature, the technology it is based on is available in all editions, namely Oracle LogMiner:
http://docs.oracle.com/cd/B28359_01/server.111/b28319/logminer.htm#i1016535
I would be interested to know whether anybody has used this to provide test isolation for functional tests, i.e. querying V$LOGMNR_CONTENTS to get a list of UNDO statements from a point in time corresponding to the beginning of the test.
The database needs to be in archivelog mode. In the JUnit test case, a method annotated with
@Startup
would call DBMS_LOGMNR.START_LOGMNR. The test would run, and then a method annotated with
@Teardown
would query V$LOGMNR_CONTENTS to find the list of UNDO statements. These would then be executed via JDBC. In fact, the querying and execution of the UNDO statements could be extracted into a PL/SQL stored procedure. The order in which the statements are executed would have to be considered.
I think this has the benefit of allowing the transaction to commit, which is where an awful lot of bugs can creep in, e.g. referential integrity and primary key violations.
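A rough sketch of what such a harness could call, assuming supplemental logging is enabled and the undo SQL is replayed newest-first (APP_OWNER and the SCN handling are placeholders):
-- at test start: remember where we are
SELECT current_scn FROM v$database;

-- at teardown: mine the redo generated since that SCN
DECLARE
  l_start_scn NUMBER := 0;   -- plug in the SCN captured at test start
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
      startscn => l_start_scn,
      options  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/
SELECT sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner = 'APP_OWNER'
   AND operation IN ('INSERT', 'UPDATE', 'DELETE')
 ORDER BY scn DESC;   -- execute these in this order, then end the session
EXEC DBMS_LOGMNR.END_LOGMNR;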

How to disable the Oracle cache for performance tests

I'm trying to test the utility of a new summary table for my data.
So I've created two procedures to fetch the data for a certain interval, each one using a different table source, and in my C# console application I just call one or the other. The problem starts when I want to repeat this several times to get a good pattern of response times.
I got something like this: 1199,84,81,81,81,81,82,80,80,81,81,80,81,91,80,80,81,80
Probably my Oracle 10g is doing some inappropriate caching.
How I can solve this?
EDIT: See this thread on asktom, which describes how and why not to do this.
If you are in a test environment, you can put your tablespace offline and online again:
ALTER TABLESPACE <tablespace_name> OFFLINE;
ALTER TABLESPACE <tablespace_name> ONLINE;
Or you can try
ALTER SYSTEM FLUSH BUFFER_CACHE;
but again, only in a test environment.
When you test on your "real" system, the times you get after the first call (those using cached data) might be more interesting, since in normal operation the data will be cached anyway. Call the procedure twice, and only consider the performance results you get in subsequent executions.
"Probably my Oracle 10g is doing some inappropriate caching."
Actually it seems like Oracle is doing some entirely appropriate caching. If these tables are going to be used a lot then you would hope to have them in cache most of the time.
edit
In a comment on Peter's response, Luis said: "flushing before the call I got some interesting results like: 1370,354,391,375,352,511,390,375,326,335,435,334,334,328,337,314,417,377,384,367,393".
These findings are "interesting" because the flush means the calls take a bit longer than when the rows are in the DB cache, but not as long as the first call. This is almost certainly because the server has stored the physical records in its physical cache. The only way to avoid that, to truly run against an empty cache, is to reboot the server before every test.
Alternatively learn to tune queries properly. Understanding how the database works is a good start. And EXPLAIN PLAN is a better tuning aid than the wall-clock. Find out more.
