I apologize if this is too vague, but it is a random issue that occurs with many types of statements. Google and Stack Overflow searches have failed me. Here is what I am experiencing, I hope that someone out there has seen or at least heard of this happening and possibly knows of a solution.
From time to time, with no apparent rhyme or reason, statements that I run through PL/SQL Developer against our Oracle databases do not "stick". Last week I ran an update on table A, a commit for the update statement, then a truncate on table B and an insert to table B followed by another commit. Everything seemed to work fine, as in I received no errors. I was, of course, able to query the changes and see that they were made. However, upon logging out and then back in, the changes had not been committed. Even the truncate had not "stuck" - and truncates do not need a commit.
Some details that may be helpful: I am logging into the database server through PL/SQL Developer on a shared account that is used by my team only to gain access to the schema (multiple schemas on each server, each schema has one shared login/PW). Of the 12 people on my team, I am the only one experiencing this issue. I have asked our database administration team to investigate my profile setup and have been told that my profile looks the same as my teammates' profiles. We are forced to go through Citrix to connect to our production database servers. I can only have one instance of PL/SQL Developer open at any time through Citrix, so I typically have it connected to several schemas, but I have never been running SQL on more than one schema simultaneously. I'm not even sure if that's possible, but I thought I would mention it. I typically have 3-4 windows open within PL/SQL Developer, each connected to a different schema.
My manager was directly involved in a case where something similar to this happened. I ran four update commands, committing after each one; then he ran a select statement only to find that my updates had not actually been committed.
I hope that one of my fellow Overflowers has seen or heard of this issue, or at least may be able to provide me with a direction to follow to attempt to get to the bottom of this.
"it has begun to reflect poorly on me and damage my reputation in the company."
What would really reflect poorly on you would be you believing that an Oracle RDBMS is a magical or random device, or, even worse, sentient and conducting a personal vendetta against you. Computers may seem vindictive but that is always us projecting onto them ;-)
The way to burnish your reputation would be through an informed investigation of the situation. Databases do not randomly lose transactions. So, what is going on?
Possible culprits:
Triggers: does table A have an UPDATE trigger which suppresses some of your SQL?
Synonyms: are tables A and B really the tables you think they are?
Ownership: are these tables in another schema which has row level security enabled (although that should throw an error message if you violate a policy)?
PL/SQL Developer configuration: is the IDE hiding error messages or are you not spotting them?
Object types: are tables A and B really tables? Could they be views with INSTEAD OF triggers suppressing some of your SQL?
Object types: or could A and B be materialized views and your session has QUERY_REWRITE_INTEGRITY=stale_tolerated?
If that last one seems a bit of a stretch, there are other similarly esoteric explanations, involving data flashback, pipelined functions and other malarkey. This is the category of explanation which indicates a colleague is pranking you.
How to proceed:
Try different tools. SQL*Plus (or the newer SQLcl command line) may produce a different outcome. Rule out PL/SQL Developer.
Write some test cases. Strive to establish reproducible test cases: given a certain set-up this SQL statement always leads to a given outcome (SQL always sticks or always does not).
Eliminate bugs or "funnies" in the queries you use to check the results.
Use the data dictionary to understand the characteristics and associated objects of the troublesome tables. You need to understand what causes the different outcomes. What distinguishes a row where the UPDATE holds compared to one where it does not?
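For example, a few data dictionary checks along those lines might look like this (a rough sketch only; the object names 'A' and 'B' are placeholders, not from your post):
-- 1. Is "A" really a table, or a view, synonym or materialized view?
select object_type, owner, object_name
from   all_objects
where  object_name in ('A', 'B');
-- 2. Does the name resolve through a synonym to something else?
select synonym_name, table_owner, table_name, db_link
from   all_synonyms
where  synonym_name in ('A', 'B');
-- 3. Are there triggers (including INSTEAD OF triggers) on the objects?
select trigger_name, trigger_type, triggering_event, status
from   all_triggers
where  table_name in ('A', 'B');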
I have used PL/SQL Developer for over a decade and I have never known it silently undo successful truncate operations. If it could do that, Allround Automations should add it as a menu item. It seems more likely that you ran the commands against the wrong database connection.
I can feel your frustration, sorry you're going through this. I am surprised, however, that at a large company, your change control process is like this. I don't work for a large multi-national company, but any changes done to a production database are first approved by management and run by the DBAs (or in your case, your team). Every script that is run does a few things:
Lists the database instance information it's connecting to. For example:
select host_name, instance_name, version, startup_time from v$instance;
Spools the output to a file (the DBAs typically use sqlplus, but I'm sure PL/SQL Developer can do the same)
Shows the current date and time (in the beginning and end of the script)
The output file is saved to a change control server (the directory structure makes it easy to pull any changes for a given instance and/or given timeframe)
Exits on any errors:
WHENEVER SQLERROR EXIT SQL.SQLCODE
Any additional checks that need to be run after the script (select counts, etc.)
Shows each command that is being run (set echo on), including the commits!
All of this would allow you to not only verify that the script was run successfully, but would allow you to CYOA. Perhaps you can talk with your team about putting some of this in place in your own environment. Hope that helps.
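For illustration only - the file names and paths here are placeholders, not part of our actual process - a SQL*Plus change script following those points might look something like this:
WHENEVER SQLERROR EXIT SQL.SQLCODE
SET ECHO ON
SPOOL /change_control/APP01_change_1234.log

-- Record where and when the script is running
select host_name, instance_name, version, startup_time from v$instance;
select sysdate from dual;

-- ... the actual change statements go here, each followed by an explicit commit ...

-- Post-script checks, e.g. row counts of the affected tables
-- select count(*) from some_table;

select sysdate from dual;
SPOOL OFF
EXIT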
I have no way of knowing if my issue is fixed or not, but here is what I've done:
1. I contacted our company's Citrix team to request that they give my team the ability to have several instances of PL/SQL Developer open. This has been done, and it eliminates the need for one instance with multiple DB connections.
2. I contacted the DBAs and had them remove my old profile, then create a new one with a new username.
So far, all SQL I've run under these new conditions has been just fine. However, I have no way of recreating the issue I'm experiencing so I am just continuing on about my business and hoping for the best.
Should I find a few months from now that I have not experienced this issue again I will update this post in case anyone else experiences it.
Thank you all for the accusations of operator error (screenshots prove that this is not operator error but why should you believe me when my own co-workers have accused me of faking the screenshots) and for the moral support.
I'm trying to work out why my Oracle 19c database is "suddenly" experiencing high commit waits. Looking in V$ACTIVE_SESSION_HISTORY and DBA_HIST_ACTIVE_SESS_HISTORY shows me that lots of sessions are waiting on "log file sync" and the blocking session is the LGWR process. Not a sign of a problem in itself, but a couple of months ago (before a recent set of product updates) it wasn't doing that, so I'm trying to understand what has changed. Either some code changes made over the last 2 months have caused this, or potentially the I/O system is experiencing a problem.
Because it's an OLTP system we have many different types of transaction, and I'm finding it difficult to filter out the noise from the performance views. What I'd like to be able to do is identify the sessions which are doing most commits, and also the sessions that are doing the "largest" commits, and then I can trace these back to see which pieces of code are responsible etc.
I would therefore like to be able to create a table such as this:
SESSION_ID   SESSION_SERIAL#   COMMIT_COUNT   COMMIT_SIZE
         1             12345              3        132436
For commit size, I guessed I would need to use something like the "wait time" as an approximation and was hoping that the TM_DELTA_DB_TIME column would help me out here, but I'm not sure how to measure the number of commits. I had hoped that the XID column would allow me to see the transaction boundaries, but it's usually NULL.
And now I've stopped to question why there isn't an easier way to do this, and whether I'm going about it the wrong way. Surely I can't be the only person to want more in-depth understanding of the commit activity within their Oracle database. Or am I asking for data that doesn't exist in the views?
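For what it's worth, the kind of thing I was imagining is a rough sketch like the one below, using the cumulative 'user commits' statistic in V$SESSTAT and 'redo size' as a crude proxy for commit volume - but I have not validated whether this is granular enough for my purpose:
-- Per-session commit counts plus redo volume (cumulative since logon).
select s.sid      as session_id,
       s.serial#  as session_serial#,
       max(case when n.name = 'user commits' then st.value end) as commit_count,
       max(case when n.name = 'redo size'    then st.value end) as redo_bytes
from   v$sesstat  st
join   v$statname n on n.statistic# = st.statistic#
join   v$session  s on s.sid        = st.sid
where  n.name in ('user commits', 'redo size')
group  by s.sid, s.serial#
order  by commit_count desc nulls last;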
If anybody has some tips for where to look I would be very grateful!
We're supposed to update some columns in a table 'tab1' with some values (which can be picked up from a different table 'tab2'). Now 'tab1' gets new records inserted almost every few seconds (from MQ by a different system).
We want to design a solution that will update 'tab1' as soon as there is a new record added to 'tab1'. It doesn't have to be done in the same moment as the record is added, but the sooner it's updated, the better. We were considering what the best way to do it might be:
1) First we thought of a 'before insert' trigger on tab1, so we can update the record - but that design was vetoed by our Architect, since the organization doesn't allow the use of database triggers (we don't know why, but that is a restriction we have been asked to live with).
2) Second, we thought we would create a stored procedure which performs the updates to records in 'tab1'. This stored procedure would be called within a long-running loop from a shell script. After every iteration there would be a pause of, let's say, 3 seconds, and then the next iteration would kick off and call the stored proc again. So this job would run from 12 AM to 11:59 PM and then be restarted every night.
My question is - is there a database-only solution to this? Any other solutions are also welcome, but simplicity of design will be a huge plus. One colleague was wondering if there is a 'trigger-like' solution which will perform the job within the database itself - so we don't have to write a shell script.
Any pointers will be appreciated!
Triggers The obvious solution.
DBMS_SCHEDULER Another obvious solution (a minimal sketch follows this list of options).
Continuous Query Notification This would be a "trigger-like" solution. It's meant to notify an application when the results of a specific query change. But you can call PL/SQL instead of an application, and the query could be a simple select * from tab1; which would fire on any change to the table. Normally I'd hope an architect would look at this solution and say, "a trigger would be a lot simpler".
DBMS_JOB This is the old version of DBMS_SCHEDULER and is not as good. But it's different, and maybe it won't be caught as an unauthorized feature.
Ignore the Architect The problem isn't that he disapproved of using triggers or jobs; there may be legitimate reasons to ban those technologies. The problem is that he rejected a sound idea without clearly articulating why it wasn't allowed. If he understood databases, or cared about your project, or acted like a professional, he would have said something like, "Oh, I'm sorry, I know that's the typical way to do this, but we don't allow it because of X, Y, Z."
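If DBMS_SCHEDULER were allowed, a minimal sketch could be as simple as the following (job and procedure names are placeholders; it assumes a stored procedure already holds the update logic):
-- Run the update procedure every few seconds inside the database,
-- no shell script required.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'UPDATE_TAB1_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'UPDATE_TAB1_PROC',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=5',
    enabled         => TRUE);
END;
/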
To answer your questions:
Q: Is there a database only solution to this?
Unlikely, given all the limitations on your architecture.
Q: Are any other solutions also welcome?
It seems your likely solution is to have your application handle what would normally be handled by a trigger or stored procedure. Just do it all in one transaction.
I have an Oracle bind query that is extremely slow (about 2 minutes) when it executes in my C# program but runs very quickly in SQL Developer. It has two parameters that hit the table's index:
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
Also, if I remove the bind variables and create dynamic sql, it runs just like it does in SQL Developer.
Any suggestions?
BTW, I'm using ODP.
If you are replacing the bind variables with static values in SQL Developer, then you're not really running the same test. Make sure you use the bind variables, and if it's also slow then you're just getting bitten by a bad cached execution plan. Updating the stats on that table should resolve it.
However, if you are actually using bind variables in SQL Developer, then keep reading. The TL;DR version is that the settings ODP.NET runs under sometimes cause the optimizer to take a slightly more pessimistic approach. Start with updating the stats, but have your DBA capture the execution plan under both scenarios and compare to confirm.
I'm reposting my answer from here: https://stackoverflow.com/a/14712992/852208
I considered flagging yours as a duplicate but your title is a little more concise since it identifies the query does run fast in sql developer. I'll welcome advice on handling in another manner.
Adding the ODP.NET trace settings to your application config will send ODP.NET tracing info to a log file:
This will probably only be helpful if you can find a large gap in time. Chances are rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string. I don't consider this a solution since it effecitively disables distributed transactions but it should help you isolate the issue. You can get a little bit more information from an oracle forumns post:
From an ODP perspective, all we can really point out is that the
behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME
are set on the underlying OCI connection (which is what happens when
distrib tx support is enabled).
I'd guess what you're not seeing is that the execution plan is actually different between the ODP.NET call and the SQL Developer call (meaning the performance hit is actually occurring on the server). Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables. In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect unless you did some post-install configuring) then the tnsnames.ora could be different. Host names in tnsnames.ora might not be fully qualified, creating more delays resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think it's the issue, but I thought it should be mentioned.
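As a rough sketch of how the DBA (or you, with enough privileges) might compare the two plans - the statement-text filter below is just an example, and the sql_id is a placeholder you would take from the first query:
-- Find both cached cursors for the statement, then display each child's
-- plan including the peeked bind values.
select sql_id, child_number, plan_hash_value, executions, elapsed_time
from   v$sql
where  sql_text like 'select t.Field1, t.Field2%';

select *
from   table(dbms_xplan.display_cursor('<sql_id from above>', null, 'TYPICAL +PEEKED_BINDS'));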
Are the parameters bound to the correct data type in C#? Are the columns key1 and key2 numbers, but the parameters :key1 and :key2 are strings? If so, the query may return the correct results but will require implicit conversion. That implicit conversion is like using a function to_char(key1), which prevents an index from being used.
Please also check the number of rows returned by the query. If the number is big, then possibly C# is fetching all rows while the other tool fetches only the first batch. Fetching all rows may require many more disk reads in that case, which is slower. To check this, try running the following in SQL Developer:
SELECT COUNT(*) FROM (
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
)
The above query forces all rows to be fetched, so it reads the maximum number of database blocks.
A useful tool in such cases is the tkprof utility, which shows the SQL execution plan; the plans may differ between the two cases (though they should not).
It is also possible that you have accidentally connected to different databases. In such cases, comparing query results helps rule that out.
Since you are saying "bind is slow", I assume you have checked the SQL without binds and it was fast. In 99% of cases, using binds makes things better. Please check whether the query with constants runs fast. If yes, then the problem may be an implicit conversion on the key1 or key2 column (e.g. t.key1 is a number and :key1 is a string).
My task is to make a trigger which will fire when our programmers create, alter, replace or drop triggers in the database. It must log their changes to 2 data tables which I made similar to the SYS.trigger$ table, with some extra info about the user who made the changes. I copied the principles of logging from the audit capability that already exists in the ERP system named Galaktika (or Galaxy), to keep things simple. However, I ran into the well-known problem ORA-04089: no one can create triggers on system tables, and got stuck.
Now I'm looking for a way to gently modify my trigger according to database rules. Here is the original code:
CREATE OR REPLACE TRIGGER MRK_AlTrigger$
BEFORE DELETE OR INSERT OR UPDATE
ON SYS.TRIGGER$
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
DECLARE
Log_Rec MRK_TRIGGERS_LOG_HEADER.NREC%TYPE;
BEGIN
INSERT INTO MRK_TRIGGERS_LOG_HEADER (DATEOFCHANGE,
USERCODE,
OPERATION,
OBJ#)
VALUES (
SYSDATE,
UID,
CASE
WHEN INSERTING THEN 0
WHEN UPDATING THEN 1
WHEN DELETING THEN 2
END,
CASE
WHEN INSERTING OR UPDATING THEN :new.OBJ#
ELSE :old.OBJ#
END)
RETURNING NRec
INTO Log_Rec;
IF INSERTING OR UPDATING
THEN
INSERT INTO MRK_TRIGGERS_LOG_SPECIF (LOGLINK,
OBJ#,
TYPE#,
UPDATE$,
INSERT$,
DELETE$,
BASEOBJECT,
REFOLDNAME,
REFNEWNAME,
DEFINITION,
WHENCLAUSE,
ACTION#,
ACTIONSIZE,
ENABLED,
PROPERTY,
SYS_EVTS,
NTTRIGCOL,
NTTRIGATT,
REFPRTNAME,
ACTIONLINENO)
VALUES (Log_Rec,
:new.OBJ#,
:new.TYPE#,
:new.UPDATE$,
:new.INSERT$,
:new.DELETE$,
:new.BASEOBJECT,
:new.REFOLDNAME,
:new.REFNEWNAME,
:new.DEFINITION,
:new.WHENCLAUSE,
:new.ACTION#,
:new.ACTIONSIZE,
:new.ENABLED,
:new.PROPERTY,
:new.SYS_EVTS,
:new.NTTRIGCOL,
:new.NTTRIGATT,
:new.REFPRTNAME,
:new.ACTIONLINENO);
END IF;
EXCEPTION
WHEN OTHERS
THEN
-- Consider logging the error and then re-raise
RAISE;
END MRK_AlTrigger$;
/
I can also provide the MRK_TRIGGERS_LOG_HEADER and MRK_TRIGGERS_LOG_SPECIF DDL, but I think it is not necessary. So, to summarize, here are my questions:
How do I modify the above source to the syntax CREATE OR REPLACE TRIGGER ON DATABASE?
Am I reinventing the wheel by doing this? Is there any common way to do such things? (I noticed that some tables have a logging option, but I believe it is for debugging purposes.)
Any help will be appreciated!
UPD: I came to the decision (thanks to APC) that it is better to keep different versions of the code in source control and record only the revision number in the DB, but I still dream about doing this automatically.
"We despaired to appeal to our programmers' neatness so my boss
requires that there must be strong and automatic way to log changes.
And to revert them quickly if we need."
In other words, you want a technical fix for what is a political problem. This does not work. However, if you have your boss's support you can sort it out. But it will get messy.
I have been on both sides of this fence, having worked as a developer and a development DBA. I know from bitter experience how bad it can be if the development database - schemas, configuration parameters, reference data, etc - is not kept under control. Your developers will feel like they are flying right now, but I guarantee you they are not tracking all the changes they make in script form. So their changes are not reversible or repeatable, and when the project reaches UAT the deployment will most likely be a fiasco (buy me a beer and I'll tell you some stories).
So what to do?
Privileged access
Revoke access to SYSDBA accounts and application schema accounts from developers. Apart from anything else you may find parts of the application start to rely on privileged accesses and/or hard-coded passwords, and those are Bad Things; you don't want to include those breaches in Production.
As your developers have got accustomed to having such access this will be highly unpopular. Which is why you need your boss's support. You also must have a replacement approach in place, so leave this action until last. But make no mistake, this is the endgame.
Source control
Database schemas are software too. They are built out of programs, just like the rest of the application, only the source code is DDL and DML scripts not C# or Java. These scripts can be controlled in SVN as with any other source code.
How to organise it in source control? That can be tricky. So recognise that you have three categories of scripts:
Schema scripts which deploy objects
Configuration scripts which insert reference data, manage system parameters, etc
Build scripts which call the other scripts in the right order
Managing the schema scripts is the hardest thing to get right. I suggest you use separate scripts for each object. Also, have separate scripts for tables, indexes and constraints. This means you can build all the tables without needing to arrange them in dependency order.
Handling change
The temptation will be to just control a CREATE TABLE statement (or whatever). This is a mistake. In actuality changes to the schema are just as likely to add, drop or modify columns as to introduce totally new objects. Store a CREATE TABLE statement as a baseline, then manage subsequent changes as ALTER TABLE statements.
One file for CREATE TABLE and subsequent ALTER TABLE commands, or separate ones? I'm comfortable having one script: I don't mind if a CREATE TABLE statement fails when I'm expecting the table to already be there. But this can be confusing if others will be running the scripts in say Production. So have a baseline script then separate scripts for applying changes. One alter script per object per time-box is a good compromise.
Changes from developers consist of
alter table script(s) to apply the change
a mirrored alter table script(s) to reverse the change
other scripts, e.g. DML
change reference number (which they will use in SVN)
Because you're introducing this late in the day, you'll need to be diplomatic. So make the change process light and easy to use. Also make sure you check and run the scripts as soon as possible. If you're responsive and do things quickly enough the developers won't chafe under the restricted access.
Getting there
First of all you need to establish a baseline. Something like DBMS_METADATA will give you CREATE statements for all current objects. You need to organise them in SVN and write the build scripts. Create a toy database and get this right.
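For example, a rough sketch of pulling the baseline table DDL (the schema name is a placeholder):
-- Baseline DDL for every table owned by the application schema.
set long 1000000
set pagesize 0
select dbms_metadata.get_ddl('TABLE', table_name, 'APP_OWNER')
from   all_tables
where  owner = 'APP_OWNER';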
This may take some time, so remember to refresh the DDL scripts so they reflect the latest state of the schema. If you have access to a schema comparison tool, that would be very handy right now.
Next, sort out the configuration. Hopefully you already know which tables contain reference data; otherwise, ask the developers.
In your toy database practice zapping the database and building it from scratch. You can use something like Ant or Hudson to automate this if you're feeling adventurous, but at the very least you need some shell scripts to get a build out of SVN.
Making the transition
This is the big one. Announce the new regime to the developers. Get your boss to attend the meeting. Remind the developers to inform you of any changes they make to the database.
That night:
Take a full export with Data Pump
Drop all the application schemas.
Build the application from SVN
Reload the data - but not the data structures - with Data Pump
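A rough sketch of those Data Pump steps - schema, directory and file names are placeholders, and credentials are omitted. First the export:
expdp system schemas=APP_OWNER directory=DATA_PUMP_DIR dumpfile=app_owner.dmp logfile=exp_app_owner.log
Then, once the schemas have been dropped and rebuilt from SVN, reload the data only:
impdp system schemas=APP_OWNER directory=DATA_PUMP_DIR dumpfile=app_owner.dmp content=DATA_ONLY table_exists_action=APPEND logfile=imp_app_owner.log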
Hopefully you won't have any structural issues; but if the developers have made changes without telling you, you'll know - and they won't have any data in the table.
Make sure you revoke the SYSDBA access as soon as possible.
The developers will need access to a set of schemas so they can write the ALTER scripts. If the developers don't have local personal databases or private schemas to test things in, I suggest you let them have access to that toy database to test change scripts. Alternatively, you can let them keep the application owner access, because you'll be repeating the Trash'n'Rebuild exercise on a regular basis. Once they get used to the idea that they will lose any changes they don't tell you about, they will knuckle down and start Doing The Right Thing.
Last word
Obviously this is a lot of vague windbaggery, lacking in solid detail. But that's politics for you.
Postscript
I was at a UKOUG event yesterday, and attended a session by a couple of smart chaps from Red Gate. They have a product, Source Control for Oracle, which provides an interface between (say) SVN and the database. It takes a rather different approach from what I outlined above, but their approach is a sound one. Their tool automates a lot of things, and I think it might help you a lot in your current situation. I must stress that I haven't actually used this product, but I think you should check it out - there's a 28-day free trial. Of course, if you don't have any money to spend then this won't help you.
You can find the desired info in the following trigger event attributes:
dictionary_obj_name
dictionary_obj_owner
ora_sysevent
Here is a simple ON DATABASE trigger covering CREATE, ALTER and DROP:
CREATE OR REPLACE TRIGGER trigger_name
   AFTER CREATE OR ALTER OR DROP ON DATABASE
BEGIN
   -- Only log DDL that touches triggers
   IF dictionary_obj_type = 'TRIGGER'
   THEN
      INSERT INTO log_table (trg_name, trg_owner, trg_action)
      VALUES (dictionary_obj_name, dictionary_obj_owner, ora_sysevent);
   END IF;
END;
/
In Java projects, JUnit tests do a setup, test, teardown. Even when mocking out a real db using an in-memory db, you usually roll back the transaction or drop the db from memory and recreate it between each test. This gives you test isolation, since one test does not leave artifacts in an environment that could affect the next test. Each test starts out in a known state and cannot bleed over into another one.
Now I've got an Oracle db build that creates 1100 tables and 400K of code - a lot of PL/SQL packages. I'd like to not only test the db install (full - create from scratch; partial - upgrade from a previous db; etc.) and make sure all the tables and other objects are in the state I expect after the install, but ALSO run tests on the PL/SQL (I'm not sure how I'd do the former exactly - suggestions?).
I'd like this all to run from Jenkins for CI so that development errors are caught via regression testing.
Firstly, I have to use an enterprise version instead of XE because XE doesn't support Java stored procedures, and because of a dependency on Oracle Web Flow. Even if I eliminate those dependencies, a full build typically takes 1.5 hours just to load.
So how do you achieve test isolation in this environment? Use transactions for each test and roll them back? OK, what about those PL/SQL procedures that have commits in them?
I thought about using backup and recovery to reset the db after each test, or recreating the entire db between tests (too drastic). Both are impractical since it takes over an hour to install. Doing so for each test is overkill and insane.
Is there a way to draw a line in the sand in the db schema(s) and then roll it back to that point in time? Sorta like a big 'undo' feature. Something besides expdp/impdp or rman. Perhaps the whole approach is off. Suggestions? How have others done this?
For CI or a small production upgrade window, the whole test suite has to run within a reasonable time (30 mins would be ideal).
Are there products that might help achieve this 'undo' ability?
Kevin McCormack published an article on The Server Labs Blog about continuous integration testing for PL/SQL using Maven and Hudson. Check it out. The key ingredient for the testing component is Steven Feuerstein's utPlsql framework, which is an implementation of JUnit's concepts in PL/SQL.
The need to reset our test fixtures is one of the big issues with PL/SQL testing. One thing which helps is to observe good practice and avoid commits in stored procedures: transactional control should be restricted to only the outermost parts of the call stack. For those programs which simply must issue commits (perhaps implicitly because they execute DDL) there is always a test fixture which issues DELETE statements. Handling relational integrity makes those quite tricky to code.
An alternative approach is to use Data Pump. You appear to have ruled out impdp, but Oracle also provides a PL/SQL API for it, DBMS_DATAPUMP. I suggest it here because it provides the ability to trash any existing data prior to running an import. So we can have an exported data set as our test fixture; executing a SetUp is then a matter of running a Data Pump job. You don't need to do anything in the TearDown, because that tidying up happens at the start of the SetUp.
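A rough sketch of such a SetUp, assuming the fixture has already been exported to a dump file (the file, directory and statistic names below are placeholders):
-- Reload the test schema's data from an exported dump.
DECLARE
   h         NUMBER;
   job_state VARCHAR2(30);
BEGIN
   h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');
   DBMS_DATAPUMP.ADD_FILE(h, 'test_fixture.dmp', 'DATA_PUMP_DIR');
   -- Trash existing rows rather than failing because the tables already exist.
   DBMS_DATAPUMP.SET_PARAMETER(h, 'TABLE_EXISTS_ACTION', 'TRUNCATE');
   DBMS_DATAPUMP.START_JOB(h);
   DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/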
In Oracle you can use Flashback Technology to restore the database to a point back in time.
http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmflash.htm
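A rough sketch of that approach with a guaranteed restore point (the restore point name is a placeholder; rewinding requires the right privileges and a bounce to MOUNT state):
-- Before the test run:
CREATE RESTORE POINT before_tests GUARANTEE FLASHBACK DATABASE;

-- After the test run, rewind the whole database to that point:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_tests;
ALTER DATABASE OPEN RESETLOGS;
DROP RESTORE POINT before_tests;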
1.5 hours seems like a very long time for 1100 tables and 400K of code. I obviously don't know the details of your environment, but based on my experience I bet you can shrink that to 5 to 10 minutes. Here are the two main installation script problems I've seen with Oracle:
1. Operations are broken into tiny pieces
The more steps you have the more overhead there will be. For example, you want to consolidate code like this as much as possible:
Replace:
create table x(a number, b number, c number);
alter table x modify a not null;
alter table x modify b not null;
alter table x modify c not null;
With:
create table x(a number not null, b number not null, c number not null);
Replace:
insert into x values (1,2,3);
insert into x values (4,5,6);
insert into x values (7,8,9);
With:
insert into x
select 1,2,3 from dual union all
select 4,5,6 from dual union all
select 7,8,9 from dual;
This is especially true if you run your script and your database in different locations. That tiny network lag starts to matter when you multiply it by 10,000. Every Oracle SQL tool I know of will send one command at a time.
2. Developers have to share a database
This is more of a long-term process solution than a technical fix, but you have to start sometime. Most places that use Oracle only have it installed on a few servers. Then it becomes a scarce resource that must be carefully managed. People fight over it, roles are unclear, and things don't get fixed.
If that's your environment, stop the madness and install Oracle on every laptop right now. Spend a few hundred dollars and give everyone Personal Edition (which has the same features as Enterprise Edition). Give everyone the tools they need and continuous improvement will eventually fix your problems.
Also, for a schema "undo", you may want to look into transportable tablespaces. I've never used it, but supposedly it's a much faster way of installing a system - just copy and paste files instead of importing. Similarly, perhaps some type of virtualization can help - create a snapshot of the OS and database.
Although Oracle Flashback is an Enterprise Edition feature, the technology it is based on is available in all editions, namely Oracle LogMiner:
http://docs.oracle.com/cd/B28359_01/server.111/b28319/logminer.htm#i1016535
I would be interested to know whether anybody has used this to provide test isolation for functional tests, i.e. querying V$LOGMNR_CONTENTS to get a list of UNDO statements from a point in time corresponding to the beginning of the test.
The database needs to be in ARCHIVELOG mode, and in the JUnit test case a method annotated with
@Startup
would call DBMS_LOGMNR.START_LOGMNR. The test would run, and then a method annotated with
@Teardown
would query V$LOGMNR_CONTENTS to find the list of UNDO statements. These would then be executed via JDBC. In fact, the querying and execution of the UNDO statements could be extracted into a PL/SQL stored procedure. The order in which the statements are executed would have to be considered.
I think this has the benefit of allowing transactions to commit, which is where an awful lot of bugs can creep in, e.g. referential integrity and primary key violations.
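A rough sketch of what the database side of that might look like (the schema name and time window are placeholders):
-- On older releases the CONTINUOUS_MINE option lets LogMiner pick the redo
-- logs for the time range itself; it is desupported in recent versions, where
-- the log files must be added explicitly with DBMS_LOGMNR.ADD_LOGFILE.
BEGIN
   DBMS_LOGMNR.START_LOGMNR(
      startTime => SYSDATE - 10/1440,          -- e.g. since the test started
      endTime   => SYSDATE,
      options   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                 + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/

-- The undo statements to replay, most recent change first:
SELECT sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'APP_OWNER'
AND    operation IN ('INSERT', 'UPDATE', 'DELETE')
ORDER  BY scn DESC;

BEGIN
   DBMS_LOGMNR.END_LOGMNR;
END;
/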