Unit Testing DDL with SQL Developer 3.1 (Oracle)

SQL Developer supports unit testing of DML, but I've not found a way to create unit tests for DDL. What would be a good approach to this problem? The schema I'm starting with is small, less than a dozen tables, with larger projects on the horizon. Google isn't returning much on applying unit tests to DDL. Any ideas on an approach to testing DDL, or other tools that exist for unit testing DDL?

What do you want to test about DDL? Either the table is created as defined or it is not.
What you could do is write a series of tests that query the data dictionary to ensure the tables are present and have the columns with the sizes and datatypes you want, etc. This would be more of a schema verification script than unit tests, however, and I am not sure how valuable it would be.
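For example, a single data dictionary check for one column might look something like this (the table and column names are only illustrative):
-- Verify that EMPLOYEES.LAST_NAME exists, is VARCHAR2 with character semantics, and is NOT NULL.
SELECT column_name, data_type, char_length, char_used, nullable
  FROM user_tab_columns
 WHERE table_name  = 'EMPLOYEES'
   AND column_name = 'LAST_NAME';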
If you maintain a schema build script (or a series of migrations that add new objects to your schema), then if it applies without errors you know the schema has been created as it was defined.
Then if you have stored procedures, some of them will fail to compile if the schema is not 100% correct. Getting the procedures in cleanly would be another verification step for the schema.
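A quick way to verify that step is to check for invalid objects after the compile, e.g.:
-- Any row returned here means something did not compile cleanly against the schema.
SELECT object_name, object_type
  FROM user_objects
 WHERE status = 'INVALID';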
Finally, the unit tests that you write to test the DML and stored procedures will verify that the correct data goes into the correct tables.
You might want some tests to ensure that a table only accepts certain values, or that columns are unique, etc. (i.e. test that the constraints are correct), but that would come down to standard unit tests too.
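For instance, a constraint test can simply attempt an insert that should be rejected and trap the expected error (the table and unique constraint here are hypothetical):
BEGIN
  INSERT INTO employees (employee_id, email) VALUES (1, 'a@example.com');
  INSERT INTO employees (employee_id, email) VALUES (2, 'a@example.com');
  RAISE_APPLICATION_ERROR(-20001, 'Duplicate email accepted - is the unique constraint missing?');
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    ROLLBACK;  -- expected: the unique constraint rejected the duplicate
END;
/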
I am a big believer in writing unit tests for DB code, but I don't like SQL Developer's GUI approach to doing it. Right now I am writing tests for an application, but I am coding the tests in Ruby and it seems to be working well. It will also be easy to build into our build and automated test process.
Another alternative is utPLSQL, which I have used before; however, simply due to the nature of PL/SQL, it makes the tests very verbose, which is why I decided to use Ruby for my current project.

I know this is an older question, but I've recently been working to solve the same problem. I think it's useful to define tests for the DDL prior to creating the objects, and then create those objects so that the tests pass.
I've done some of this using an assert "pattern" -- i.e., tdd.ddlunit.assert_tableexists(p_schema_name, p_table_name) which raises an exception if the table doesn't exist, and silently runs when it does.
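A rough sketch of what an assertion like that might look like underneath (illustrative only, not the actual package):
CREATE OR REPLACE PROCEDURE assert_tableexists (
  p_schema_name IN VARCHAR2,
  p_table_name  IN VARCHAR2
) AS
  l_count PLS_INTEGER;
BEGIN
  SELECT COUNT(*)
    INTO l_count
    FROM all_tables
   WHERE owner      = UPPER(p_schema_name)
     AND table_name = UPPER(p_table_name);

  IF l_count = 0 THEN
    RAISE_APPLICATION_ERROR(-20000,
      'Assertion failed: table ' || p_schema_name || '.' || p_table_name || ' does not exist');
  END IF;
END assert_tableexists;
/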
Other assertions I've created are for things like making sure all varchar2 columns use character semantics instead of byte length semantics, and making sure all tables and columns are commented.
These get checked in to the code repository and can be run via continuous integration frameworks to make sure we have a valid database per what we expect.

Related

Difference between truncation, transaction and deletion database strategies

What is the difference between the truncation, transaction and deletion database strategies when using RSpec? I can't find any resources explaining this. I read the Database Cleaner readme, but it doesn't explain what each of these does.
Why do we have to use the truncation strategy for Capybara? Do I have to clean up my database when testing, or can I disable it? I don't understand why I should clean up my database after each test case; wouldn't it just slow down testing?
The database cleaning strategies refer to standard database terminology, i.e. these terms come from the (SQL) database world, so people generally familiar with databases will know what they mean.
The examples below use the SQL definitions. DatabaseCleaner, however, supports other non-SQL types of databases too, but generally the definitions will be the same or similar.
Deletion
This means the database tables are cleaned using the SQL DELETE FROM statement. This is usually slower than truncation, but it may have other advantages instead (for example, it fires triggers and can run inside a transaction).
Truncation
This means the database tables are cleaned using the TRUNCATE TABLE statement. This will simply empty the table immediately, without deleting the table structure itself or deleting records individually.
Transaction
This means using BEGIN TRANSACTION statements coupled with ROLLBACK to roll back a sequence of previous database operations. Think of it as an "undo button" for databases. I would think this is the most frequently used cleaning method, and probably the fastest since changes need not be directly committed to the DB.
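In plain SQL terms, the three strategies boil down to something like this (the table name is just an example, and the exact transaction syntax varies by database):
-- Deletion: remove rows one by one; slower, but it fires triggers and can run inside a transaction.
DELETE FROM users;

-- Truncation: empty the table in one shot, leaving the table structure in place.
TRUNCATE TABLE users;

-- Transaction: do the work inside a transaction and throw it away at the end.
BEGIN;                                     -- or START TRANSACTION, depending on the database
INSERT INTO users (id, name) VALUES (1, 'test');
ROLLBACK;                                  -- everything since BEGIN is discarded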
Example discussion: Rspec, Cucumber: best speed database clean strategy
Reason for truncation strategy with Capybara
The best explanation was found in the Capybara docs themselves:
# Transactional fixtures do not work with Selenium tests, because Capybara
# uses a separate server thread, which the transactions would be hidden
# from. We hence use DatabaseCleaner to truncate our test database.
Cleaning requirements
You do not necessarily have to clean your database after each test case. However, you need to be aware of the side effects this could have: if you create, modify, or delete some records in one step, will the other steps be affected by this?
Normally RSpec runs with transactional fixtures turned on, so you will never notice this when running RSpec - it will simply keep the database automatically clean for you:
https://www.relishapp.com/rspec/rspec-rails/v/2-10/docs/transactions

oracle rollback changes through several transactions

Is it possible to track changes made in session transactions? I need to somehow track all changes that are made in my session. This is necessary for testing purposes - after a test is finished I need to remove all changes made during that test, so I will be able to run the test again without the leftover changes.
You have several options to deal with this situation - since you don't provide much detail I can only give some general pointers:
temporary tables (session-specific versus global; you can decide whether rows are preserved or automatically thrown away) - see http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tables003.htm
Flashback - this one can roll back the whole DB to a specific point in time and thus reverse all changes across several transactions - see http://www.oracle.com/technetwork/database/features/availability/flashback-overview-082751.html
create "prepare" scripts for your test scenarios which reset the DB to a known state before every test
There are many things you can do with Oracle as an administrator, especially if your test database is on a filesystem that supports snapshots.
However, if you're looking at this from a unit test perspective purely as a developer, the safest/cleanest way to handle something like this is to:
truncate the tables involved in the test
load the fixture/test/known state data
run your tests
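A reset script for that approach can be as small as this (table names and fixture rows are made up; foreign keys may force you to truncate child tables first or disable constraints):
-- Return the tables touched by the test to a known state.
TRUNCATE TABLE order_lines;
TRUNCATE TABLE orders;

-- Load the fixture data the test expects to find.
INSERT INTO orders (order_id, customer_id) VALUES (1, 100);
INSERT INTO order_lines (order_id, line_no, amount) VALUES (1, 1, 9.99);
COMMIT;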

How to achieve test isolation when testing Oracle PL/SQL?

In Java projects, JUnit tests do a setup, test, and teardown. Even when mocking out a real db using an in-memory db, you usually roll back the transaction or drop the db from memory and recreate it between each test. This gives you test isolation, since one test does not leave artifacts in the environment that could affect the next test. Each test starts out in a known state and cannot bleed over into another one.
Now I've got an Oracle db build that creates 1100 tables and 400K of code - a lot of pl/sql packages. I'd like to not only test the db install (full - create from scratch, partial - upgrade from a previous db, etc) and make sure all the tables, and other objects are in the state I expect after the install, but ALSO run tests on the pl/sql (I'm not sure how I'd do the former exactly - suggestions?).
I'd like this all to run from Jenkins for CI so that development errors are caught via regression testing.
Firstly, I have to use an enterprise version instead of XE because XE doesn't support Java stored procedures, and because of a dependency on Oracle Web Flow. Even if I eliminate those dependencies, the build typically takes 1.5 hours just to load (full build).
So how do you achieve test isolation in this environment? Use transactions for each test and roll them back? OK, but what about those PL/SQL procedures that have commits in them?
I thought about backup and recovery to reset the db after each test, or recreating the entire db between each test (too drastic). Both are impractical since it takes over an hour to install it. Doing so for each test is overkill and insane.
Is there a way to draw a line in the sand in the db schema(s) and then roll it back to that point in time? Sorta like a big 'undo' feature. Something besides expdp/impdp or rman. Perhaps the whole approach is off. Suggestions? How have others done this?
For CI or a small production upgrade window, the whole test suite has to run within a reasonable time (30 mins would be ideal).
Are there products that might help achieve this 'undo' ability?
Kevin McCormack published an article on The Server Labs Blog about continuous integration testing for PL/SQL using Maven and Hudson. Check it out. The key ingredient for the testing component is Steven Feuerstein's utPlsql framework, which is an implementation of JUnit's concepts in PL/SQL.
The need to reset our test fixtures is one of the big issues with PL/SQL testing. One thing which helps is to observe good practice and avoid commits in stored procedures: transactional control should be restricted to only the outermost parts of the call stack. For those programs which simply must issue commits (perhaps implicitly because they execute DDL) there is always a test fixture which issues DELETE statements. Handling relational integrity makes those quite tricky to code.
An alternative approach is to use Data Pump. You appear to have discarded impdp, but Oracle also provides a PL/SQL API for it, DBMS_DATAPUMP. I suggest it here because it provides the ability to trash any existing data prior to running an import. So we can have an exported data set as our test fixture; executing a SetUp is then a matter of running a Data Pump job. You don't need to do anything in the TearDown, because that tidying up happens at the start of the SetUp.
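A minimal sketch of such a SetUp, assuming a schema-level export already exists as FIXTURE.DMP in a directory object called DP_DIR (both names are made up here):
DECLARE
  l_job   NUMBER;
  l_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode import job.
  l_job := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');

  -- Point the job at the exported fixture.
  DBMS_DATAPUMP.ADD_FILE(handle => l_job, filename => 'FIXTURE.DMP', directory => 'DP_DIR');

  -- Trash whatever the previous test left behind before loading the fixture rows.
  DBMS_DATAPUMP.SET_PARAMETER(handle => l_job, name => 'TABLE_EXISTS_ACTION', value => 'REPLACE');

  DBMS_DATAPUMP.START_JOB(l_job);
  DBMS_DATAPUMP.WAIT_FOR_JOB(l_job, l_state);  -- block until the import has finished
END;
/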
In Oracle you can use Flashback Technology to restore the server to a point back in time.
http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmflash.htm
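If you have the privileges for it, a guaranteed restore point is the simplest way to use this for tests; a rough sketch (the restore point name is arbitrary, and FLASHBACK DATABASE needs the instance in MOUNT mode):
-- Before the test run: mark the point to return to.
CREATE RESTORE POINT before_tests GUARANTEE FLASHBACK DATABASE;

-- ... run the tests, commits and all ...

-- After the test run:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_tests;
ALTER DATABASE OPEN RESETLOGS;
DROP RESTORE POINT before_tests;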
1.5 hours seems like a very long time for 1100 tables and 400K of code. I obviously don't know the details of your environment, but based on my experience I bet you can shrink that to 5 to 10 minutes. Here are the two main installation script problems I've seen with Oracle:
1. Operations are broken into tiny pieces
The more steps you have the more overhead there will be. For example, you want to consolidate code like this as much as possible:
Replace:
create table x(a number, b number, c number);
alter table x modify a not null;
alter table x modify b not null;
alter table x modify c not null;
With:
create table x(a number not null, b number not null, c number not null);
Replace:
insert into x values (1,2,3);
insert into x values (4,5,6);
insert into x values (7,8,9);
With:
insert into x
select 1,2,3 from dual union all
select 4,5,6 from dual union all
select 7,8,9 from dual;
This is especially true if you run your script and your database in different locations. That tiny network lag starts to matter when you multiply it by 10,000. Every Oracle SQL tool I know of will send one command at a time.
2. Developers have to share a database
This is more of a long-term process solution than a technical fix, but you have to start sometime. Most places that use Oracle only have it installed on a few servers. Then it becomes a scarce resource that must be carefully managed. People fight over it, roles are unclear, and things don't get fixed.
If that's your environment, stop the madness and install Oracle on every laptop right now. Spend a few hundred dollars and give everyone Personal Edition (which has the same features as Enterprise Edition). Give everyone the tools they need and continuous improvement will eventually fix your problems.
Also, for a schema "undo", you may want to look into transportable tablespaces. I've never used it, but supposedly it's a much faster way of installing a system - just copy and paste files instead of importing. Similarly, perhaps some type of virtualization can help - create a snapshot of the OS and database.
Although Oracle Flashback is an Enterprise Edition feature, the technology it is based on, Oracle LogMiner, is available in all editions:
http://docs.oracle.com/cd/B28359_01/server.111/b28319/logminer.htm#i1016535
I would be interested to know whether anybody has used this to provide test isolation for functional tests, i.e. querying V$LOGMNR_CONTENTS to get a list of UNDO statements from a point in time corresponding to the beginning of the test.
The database needs to be in ARCHIVELOG mode. In the JUnit test case, a method annotated with @Startup would call DBMS_LOGMNR.START_LOGMNR. The test would run, and then a method annotated with @Teardown would query V$LOGMNR_CONTENTS to find the list of UNDO statements. These would then be executed via JDBC. In fact, the querying and execution of the UNDO statements could be extracted into a PL/SQL stored procedure. The order in which the statements are executed would have to be considered.
I think this has the benefit of allowing the transaction to commit, which is where an awful lot of bugs can creep in, i.e. referential integrity, primary key violations etc.
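A rough sketch of what the mining step might look like, assuming supplemental logging is enabled and the relevant log files have already been registered with DBMS_LOGMNR.ADD_LOGFILE (the schema name and the bind variable are purely illustrative):
-- Record the SCN at the start of the test:
SELECT current_scn FROM v$database;

-- ... run the test, commits and all ...

-- Mine from that SCN and pull out the undo statements, newest change first:
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    startscn => :start_scn,                    -- the SCN captured above
    options  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

SELECT sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner = 'TEST_SCHEMA'               -- hypothetical schema under test
   AND operation IN ('INSERT', 'UPDATE', 'DELETE')
 ORDER BY scn DESC;                            -- reverse order, so the undo replays cleanly

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/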

Converting a J2EE app from SQL to Oracle - suggestions for an efficient approach

We have a J2EE app built on Struts2 + Spring + iBatis; not all DAOs use iBatis... some code still uses the old JDBC approach of interacting with the database. All our DAOs call stored procedures; we do not have any inline SQL. Since Oracle stored procedures return cursors, we have to change our code drastically.
It is fairly easy for us to convert the current iBatis mappings to Oracle (we used a Groovy script to do this); it is also easy to convert the Java code that called the old mappings.
Our problem is converting the old DAOs that still use the JDBC approach. Since we will have to modify them anyway (because we are now using Oracle), we are thinking about converting them to iBatis mappings. Is this a good approach? This will be a huge effort on our side...
What do you think would be the best approach to tackle this huge effort?
Should we just get to work and start converting each method in every DAO?
Should we try to write a small script that looks at each method, parses out the relevant information, and generates iBatis mappings from that?
For maintenance and separation purposes, should we have one iBatis mapping per DAO?
I apologize if the question is vague, but I am just looking for someone who has gone through this type of thing before and has some pointers or 'lessons learned'.
The first thing you should do is cover your DAO layer in tests. This way you'll know if you broke something during the conversion. If you are moving a stored procedure from one DBMS to Oracle, you should also write tests for that using a framework like DbUnit.
You should have a TEST DB instance populated with sample data that doesn't change. You should be able to refresh this DB with the same set of sample data after you are done running your tests. This will ensure your TEST DB is in a known state. You will then have your input parameters paired with some expected (correct) result. Your test will read in these pairs, execute them against the test DB instance, and confirm the expected result is returned. Assuming your tests mutate the DB, you'll want to refresh the DB between runs of your test suite.
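One simple way to express the "expected (correct) result" in the database itself is to keep the expected rows in a table and diff them against the actual output; a sketch with hypothetical table names:
-- Any row returned means the actual output differs from the expected fixture.
(SELECT * FROM expected_order_totals
 MINUS
 SELECT * FROM actual_order_totals)
UNION ALL
(SELECT * FROM actual_order_totals
 MINUS
 SELECT * FROM expected_order_totals);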
Second, if you're already going in and changing some data access implementations for Oracle, why not use this as an opportunity to move some of that business logic out of the DB and into Java? There are many well-documented problems with maintaining large codebases in a DBMS.
Should we try to write a small script that looks at each method, parses out the relevant information, and generates iBatis mappings from that?
I don't recommend this. The time you'd spend tweaking the script for each special case, plus hunting down all the bugs it would introduce, would be better spent doing the conversion by a thinking human.
For maintenance and separation purposes, should we have one iBatis mapping per DAO?
That's a fine idea. You can then combine them in your sqlMapConfig with
<sqlMap resource="sqlMaps/XXX.xml" />
This will keep your mappings more manageable. Just make sure to specify the namespace attribute in each sqlMap like:
<sqlMap namespace="User">
So that you can reuse mappings between the sqlMaps for instantiating object graphs (example: when loading a User and his Permissions, the User.xml sqlMap calls the Permission.xml mapping).
All our DAOs call stored procedures
I don't see what iBatis is buying you here.
It's also not clear what the migration is. Are you saying that you've decided to move all the code into stored procedures, so there's no more in-line SQL? If that's the case, I'd say don't use iBatis. If you're already using Spring, let it call into Oracle using its StoredProcedure object and map the cursors into objects.
The recommendation to create JUnit or, better yet, TestNG tests is spot on. Do that before changing anything.

Write stored procedures in LINQ/lambda (unit-testable but performant stored procs). Possible?

I am creating a data source for a reporting model (SQL Server Reporting Services).
The reports require a lot of joins and calculations (let's say, calculating financial parameters like money spent on this or that, amount A vs. amount B)... all of this involves subobjects.
It makes a lot of sense to me to write unit tests for this code (i.e. walking through the order collection, aggregating info based on business rules and subobjects, etc.).
To do this properly, I would expect my code to look approximately like this:
foreach (IOrder order in Orders)
    foreach (IOrderLine line in order.Orderlines)
        ...
return ...
and then test the return value.
But this code is not the SQL which is going to be used in the reporting view...of course...
So I am thinking I could plug a .NET assembly into the database.
The issue here is, of course, performance... I don't want to loop over all these objects in C#... too slow.
So, naturally, Linq/Lambda/Expression trees seem to be the answer to me.
As we know, when you are doing Linq to SQL, expression trees are built, and then proper SQL is generated based on them.
So, I could write my code in Linq to Objects, using lambda expressions, unit test this code on sample collections (having expressions compiled to .net), and reuse the same code as Linq to SQL in the DB stored procedure, so that inside SQL Server it would generate proper SQL for me (as Linq to SQL already does)...
Then I could get benefits of both unit-tests and writing domain logic code in C# and high-performing stored procedures for reports.
Possible? Can I use LINQ/lambda in SQL Server CLR stored procedures? Has anyone done it, or does anyone know how to make it work?
Am I crazy? Do you know a better way of doing it?
Thanks
P.S. I think I have now figured out how this should be done properly. According to Udi Dahan (if I understand him correctly), the database should be denormalized, and all the calculated fields should be on the objects in the table.
When something happens on a subobject (an OrderLine is added), my Customer object should receive an event and recalculate the smart value (cache it and persist it).
Then the reports become straightforward, contain no logic, and run fast...
No, you cannot use LINQ/lambda in SQL CLR procs - it is based on a different version of .NET and does not support those namespaces.
So, I could write my code in Linq to Objects, using lambda expressions, unit test this code on sample collections (having expressions compiled to .net), and reuse the same code as Linq to SQL in the DB stored procedure, so that inside SQL Server it would generate proper SQL for me (as Linq to SQL already does)...
This plan was fine until you suggested the CLR code be called from your stored procs. Running CLR code from the database process itself creates a lot of problems with regards to versioning, configuration and database stability... Too many problems if you do that.
Your motivation was to have the benefit of using stored procs, which are faster in general. If those stored procs are in turn running CLR code, they're not going to be faster than the CLR code running in the local process.
Using the LINQ-generated expressions technically consumes more CPU cycles than stored procs. This is because the database engine has to regenerate the execution plan each time a query is run. Typically, though, your database server is on a separate machine that is not CPU-bound (it will be limited instead by disk or network capacity), so this is not a real performance issue. It could be if you run the database server on the same machine as everything else, but don't try to fix this with something so convoluted until it's a real issue.
Udi's suggestion may be appropriate if you want to decrease the overhead of generating the reports. There are two important side effects to consider first, though. First, can you afford to increase the performance overhead of the operations that pregenerate the reported fields? The second, and bigger, problem is that it couples your reporting logic with the code that runs the target system. This prevents you from updating the reporting code without also updating the business code, and presumes the reporting code will be running as soon as the reported code is put into production.
