Encrypt/decrypt PL/SQL packages in Oracle

Is there any way to encrypt PL/SQL packages (stored procedures, functions, etc.) in Oracle using a specific key and decrypt them with the same key, for security purposes?
I am using Oracle 12c.
Thanks,

Yes, you can use the wrap tool for this; see PL/SQL Source Text Wrapping.
You can wrap the PL/SQL source text, thereby preventing anyone from displaying that text with the static data dictionary views *_SOURCE.
In principle, decryption is not supported; however, you can use tools like Unwrap It!.
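For illustration, here is a minimal sketch of wrapping a package body with the command-line wrap utility and confirming that the dictionary shows only the obfuscated text (the file and package names are placeholders):

-- 1. Wrap the plain-text source on the command line (file names are placeholders):
--      wrap iname=my_pkg_body.sql oname=my_pkg_body.plb
-- 2. Load the wrapped file into the database as usual:
--      sqlplus app_owner @my_pkg_body.plb
-- 3. The dictionary now shows only the obfuscated text:
SELECT text
  FROM user_source
 WHERE name = 'MY_PKG'
   AND type = 'PACKAGE BODY'
 ORDER BY line;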

I like Wernfried's answer. I would like to expand a little if I may.
Assuming you will try the WRAP method (some say it is easily cracked), here are some things to try in order to check its effectiveness:
Check with tkprof if executing the WRAPd objects reveals any logic or SQL
Check the V$SQL and related views to see if executing the WRAPd objects reveals any logic or SQL
Check with OEM to see if executing the WRAPd objects reveals any logic or SQL
I've never tested the above, but if I were considering using WRAP I would do so, so that I knew the limits of whatever protection it gives. Given the importance of DBAs being able to monitor and tune queries, I would be surprised if the SQL, at least, was not ascertainable via standard performance views/queries.
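For example, a quick check along those lines (assuming the wrapped code runs in a hypothetical APP_OWNER schema) could be as simple as:

-- Run something from the wrapped package, then look for its SQL in the
-- shared pool; wrapping hides the PL/SQL source, not the SQL it executes:
SELECT sql_id, sql_text
  FROM v$sql
 WHERE parsing_schema_name = 'APP_OWNER'
 ORDER BY last_active_time DESC;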
Also note, if your code has any dynamic SQL or PLSQL, this could show up in the symbol table of the wrapped file (as hex codes).
I think you cannot wrap TRIGGER code (though, you could of course have triggers call PLSQL procedures that are wrapped).
If you are a software vendor concerned about a customer DBA poking around your code, then given the above point about SQL monitoring, and assuming the DBAs have access to a suitable cracker, I don't think there is much you can do to stop them. You might be able to prevent non-DBA users from doing this by restricting access to DBA_SOURCE, V$SQL and related views.
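A rough sketch of that kind of lock-down (REPORT_USER is a hypothetical non-DBA account) could be:

-- Ordinary accounts cannot see other schemas' source or the shared pool
-- unless they have been granted dictionary access, so avoid (or revoke)
-- grants like these for application/reporting accounts:
REVOKE SELECT ANY DICTIONARY FROM report_user;
REVOKE SELECT_CATALOG_ROLE FROM report_user;
-- DBAs keep these privileges, so this only protects against non-DBA users.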
Out of interest I had a look at Native Compilation of PLSQL, to see if the compiled M-code could be shipped instead of the PLSQL source, but it seems that you have to ship the PLSQL code too.
What other options are there? Can the PLSQL be hived off to a remote protected database and consumed via REST API calls? That depends heavily on application logic and data access, and if having such a service on a protected remote server is feasible.
The one last thing I'd consider if it was absolutely necessary to hide the logic would be to implement as ProC/C/Java modules to create EXTERNAL LIBRARIES. But note here as well, SQL calls within the ProC/C/Java will still be exposed in V$SQL. Plus, there aren't many around who write Pro*C anymore.

Using wrap functionality is only obfuscation, not encryption. It is easily undone with a variety of available web sites, python scripts, and other PL/SQL procedures. There are additional obfuscations that you can do to make interpretation of your code more difficult (Oracle's SQL Developer has some built-in functions for this) so that even if your code is unwrapped it is still difficult to read. Here's an example of one other custom obfuscation toolkit, too: https://pmdba.wordpress.com/2020/02/24/code-obfuscation-toolkit/. For a commercial-grade toolkit, check this out: http://www.petefinnigan.com/products/pfclobfuscate.htm

Related

POSTing XML from Oracle to WebAPI's

Our company is in the process of creating an ASP.NET service to accept XML data sent from ERP systems such as Oracle. We have no experience (at all) with Oracle, so please excuse the simplicity of this question.
I see online that Oracle has a tool called JDeveloper that can hook up to WCF Services that use a DataContract/WSDL to send/receive data with relative ease.
Can anyone advise about the situation regarding Web APIs, where no WSDL or DataContracts exist? Is it simple to craft a POST in Oracle to send to a Web API, or is the former option better/easier to work with?
Thanks in advance.
It's simple enough to call a web service directly from Oracle:
There is good support for XML/XSLT/XQuery to construct requests and parse responses (XML DB).
Oracle has an API for working with HTTP/HTTPS requests (the UTL_HTTP package).
So if you decide to call a web service from Oracle, it's possible and relatively simple for both SOAP and REST web services.
You can find example code in this answer on StackOverflow.
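For a rough idea of what it looks like, here is a minimal sketch of POSTing an XML payload with UTL_HTTP (the endpoint URL is hypothetical, and the required network ACL/wallet setup is assumed to be in place):

DECLARE
  -- hypothetical endpoint; the real URL comes from your Web API
  l_url    VARCHAR2(200)  := 'http://example.com/api/orders';
  l_body   VARCHAR2(4000) := '<order><id>42</id></order>';
  l_req    UTL_HTTP.req;
  l_resp   UTL_HTTP.resp;
  l_buffer VARCHAR2(32767);
BEGIN
  l_req := UTL_HTTP.begin_request(l_url, 'POST', 'HTTP/1.1');
  UTL_HTTP.set_header(l_req, 'Content-Type', 'application/xml');
  UTL_HTTP.set_header(l_req, 'Content-Length', TO_CHAR(LENGTHB(l_body)));
  UTL_HTTP.write_text(l_req, l_body);
  l_resp := UTL_HTTP.get_response(l_req);
  BEGIN
    LOOP
      UTL_HTTP.read_text(l_resp, l_buffer, 32767);
      DBMS_OUTPUT.put_line(l_buffer);  -- in real code, parse the XML response here
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.end_of_body THEN
      UTL_HTTP.end_response(l_resp);
  END;
END;
/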
Update - answer on a comment
To make it clear, the example above doesn't work at a "database query level", because it's implemented in PL/SQL. The Oracle Database engine natively incorporates support for two different languages:
traditional SQL, which is an implementation of the ANSI SQL standard (traditionally with some incompatibilities and extensions);
PL/SQL, which is a procedural programming language tightly integrated with traditional SQL.
These two things are really different. There are even common questions about performance being hurt by context switching between the SQL and PL/SQL engines, mostly caused by improper procedure design.
PL/SQL, as a procedural language, can access a rich set of APIs provided by Oracle as built-in packages. Among others, there are a number of packages directly related to network communication protocols and standards: UTL_TCP, UTL_URL, UTL_SMTP, UTL_MAIL, UTL_INADDR, UTL_HTTP, HTP, HTF, DBMS_LDAP.
It should also be said that there is a set of APIs provided to support publishing PL/SQL code on the web: the OWA_xxxx packages support access through mod_plsql, and Oracle XML DB supports publishing SOAP web services.
If you need to push data from Oracle to a web service on a schedule, look at the DBMS_SCHEDULER and DBMS_JOB packages to start the unload procedures periodically.
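A minimal DBMS_SCHEDULER sketch (the job and procedure names are made up) might look like this:

BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'PUSH_ORDERS_TO_WEBAPI',       -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'ORDER_PKG.PUSH_TO_WEBAPI',    -- hypothetical unload procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=HOURLY; INTERVAL=1',     -- once an hour
    enabled         => TRUE);
END;
/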
Most of these system packages are implemented in Java, and it's possible to write your own Java extensions callable from PL/SQL.
P.S. There is a UTL_DBWS package dedicated to implementing calls to SOAP services from the Oracle Database, but it seems to produce more problems than it solves, and I can't find any reference to it in the 11g documentation (10g only).
P.P.S. Some statements may be slightly inaccurate or contain exaggerations, but that should be enough to understand the overall picture.

Does using the Oracle OCI API require C programming?

I was looking here:
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28395/toc.htm
but everything looks like C or C++. Can I use any other language to use the OCI?
Thanks.
EDIT: I need to use direct path loading for LOB objects (BLOB, CLOB, etc.). I believe I have to use the OCI to do that.
EDIT: I base my OCI assumption on this: Can a direct path insert into a LOB column?
According to Oracle
"Oracle Call Interface (OCI) is the most comprehensive, high
performance, native 'C' language based interface to the Oracle
Database that exposes the full power of the Oracle Database."
However, there are different ways to work with an Oracle database. What sort of language do you want to use, and what do you actually want to achieve?
If you want to use Java, you can use JDBC OCI. I believe that there are also ways to access OCI through Perl, Python, and Ruby, if you want (though I've never used them).
Theoretically, every language that can call standard C functions should be able to use OCI. This includes languages such as C++ and Delphi, but also includes managed languages such as C# (that can access these functions through P/Invoke) or Java (with Java Native Interface).
However, if your goal is simply to access Oracle and you don't care about doing it specifically through OCI, it is much better to use whatever library is specifically geared towards your language of choice. For example, use ADO.NET under C# or JDBC under Java.
Most of these libraries use OCI internally anyway (with notable exception of some direct-to-wire ADO.NET and JDBC drivers).
You will find that most Oracle APIs in other languages are really bindings to the OCI, using whatever mechanism that language normally uses to interoperate with C libraries. Examples include cx_Oracle for Python, OCI*ML for OCaml and Oratcl. These typically abstract the OCI, which is very low level, into something easier to use from a high-level language (e.g. connecting to a database is one line in those languages, but a page of code in OCI, as everything must be set up explicitly).

Reporting vs. Coding - thoughts?

Recently I had a project in which I had to get some data from particular software system to a portlet. The software used a database, and I spent a fair bit of time modeling the data I wanted and then creating a web service so that my portlet could grab the information.
Then it suddenly struck me that I was wasting my time. I grabbed BIRT, tossed it into a portlet, and then just wrote some reports that directly grabbed the necessary data from the database. I was done in an afternoon.
I understand that reporting is a one way street, but this got me thinking. Reporting tools can be very effective for creating reports (duh) from your actual data, but when you're doing this you're bypassing your model which except in simple cases is not a direct representation of your data as it exists in your database.
If you're writing a data-intensive application and require the ability to perform non-trivial reporting, do you bypass your application and use something like BIRT or Crystal Reports? How do you manage these tools as part of your overall process? Do you consider the reports you write as being part of your application and treat them as such? A report is a view and a model and a controller (if you will) all in one big mess; how do you deal with that, interpret it, and plan for it?
Revised question: it's possible and even common that a report will perform some business calculations that in a perfect world you would like to have contained in your application. This can lead to a mismatch of information given back to the user. On the other hand, reporting tools make it so easy to gather and display information that it's hard to take a purist's approach and do everything from within the application. Are there any good techniques for ensuring that the data in your reports matches the data that you might be showing in the regular GUI?
I see reporting as simply another view on the data, not a view/model/controller in one (well, maybe a view and controller in one).
We have our reports (built in SQL 2008 Reporting Services) consume a service in our application layer to get data (keeping with our standard that data access is in a repository). These functions could do a simple query or handle very complex processing that would be a nightmare in your reporting environment or a stored procedure. In practice, we find this takes no longer than coding up some one-off stored procedure that will, as your system grows and grows, become a nightmare to maintain.
Treating reporting as simply a one-off or not integrating into your application design is a huge mistake.
Reporting is crucial. Reporting is mostly about sharing values collected in one system with external users, i.e. users not directly using the system (e.g. management for sales figures). So reporting is a lot more than just displaying facts and figures; it is central to almost every system that drives a commercial operation.
At least the more advanced systems allow you to enhance them with your own reusable "controls". Even a way back into the system can be implemented, if you just use the correct plugins. Once I wrote a system to send emails out of a report, because the system did not allow for change. It worked, though it was not meant to be used that way ;)
Reports make up a good part of the application, and you gain a lot of freedom if you make reports changeable by your customers. Sometimes you come up with more possibilities than you thought of when you built the system in the first place.
So yes, for me reporting is part of the system.
Reports are part of your app, but because they are generally something a user will have stronger ideas about than, say, your data capture UI, I'd sacrifice purity for convenience/speed of delivery and get back to "real" coding... :-)
As soon as you've done a report, users want another one or change the colour or optional grouping or more filtering or... something that takes you away from whizzier stuff... so I don't bust a gut maintaining purity.
This is a fine line indeed. You don't want to spend too much time building reports (that users want you to change all the time anyway), but you don't want to duplicate logic by putting business logic into your reports! With our reporting products at Data Dynamics I think we have reached a happy medium between these two tradeoffs.
By using the ObjectDataProvider (see links below for more info) you can bind the report directly to business objects (plain old objects) so you don't have to bypass your business layer for getting data. At the same time we provide a way to reference and use functions from other libraries in your report. This way if you have some code configured already to do some business logic calculations you can reuse those functions directly within your report. You can see an example of this in the links below too.
Binding to Objects for your Data (see "Object Provider" section): http://www.datadynamics.com/Help/ddReports/ddrconDataSetAndObjectDataSource.html
Adding Custom Code to your reports Walkthrough: http://www.datadynamics.com/Help/ddReports/ddrwlkCustomCode.html
Using Custom Assemblies (referencing shared libraries/dlls from your report): http://www.datadynamics.com/Help/ddReports/ddrconCustomCode.html, and http://www.datadynamics.com/Help/ddReports/ddrtskCreatingAnInstanceMethod.html
Scott Willeke
Data Dynamics / GrapeCity
The way I've always worked with reports is to consider reports as part of the code base, stored in source control along with the application. In some contexts, reports are more important than the application, in that management makes business decisions off of report data; having the wrong information can cause them to cancel a product line, cancel a campaign, or fire a sales person. Obviously, this depends highly on your management and your application.
Regarding keeping your model consistent, this is a trickier question. One way to ensure a consistent model between reports and your application is to use stored procedures (or views) to retrieve data, depending on your application's architecture.
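As a small illustration of that idea (table and column names are invented), a business calculation can live in a single view that both the application's data layer and the reports query:

-- One definition of "net revenue", queried by both the app's data layer
-- and the reports, so the numbers cannot drift apart:
CREATE OR REPLACE VIEW order_revenue_v AS
SELECT o.order_id,
       o.customer_id,
       SUM(l.quantity * l.unit_price) - NVL(o.discount_total, 0) AS net_revenue
  FROM orders o
  JOIN order_lines l ON l.order_id = o.order_id
 GROUP BY o.order_id, o.customer_id, o.discount_total;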

How do I deploy an Oracle database?

I have an ASP .NET application that connects to an Oracle or a SQL Server database. An installer has been developed to install a fresh database to an existing SQL Server using sql commands such as "restore database..." which simply restores a ".bak" file which we keep under source control.
I'm very new to Oracle and our application has only recently been ported to be compatible with 10g.
We are currently using the "exp.exe" tool to generate a ".dmp" file and then using the "imp.exe" to import it into a developers box.
How would you go about creating an "Oracle Database Installer"?
Would you create the database using script files and then populate the database with required default data?
Would you run the "imp.exe" tool behind the scenes?
Do we need to provide a clean interface for system administrators so that they can just select the destination server and be done, or should we just provide them with the ".dmp" file? What are the best practices?
Thanks.
The question is -- what do your customers know about Oracle?
Nothing? You should probably rethink this position. Oracle is very large and complex. If you assume your customers know nothing, you'll then start providing tutorials and help that's inappropriate.
Minimally Competent? If they're competent, they know enough to run imp by themselves. Also, they know enough to run a script that executes SQL.
Actual DBA's? Most organizations that can afford Oracle can afford real DBA's. Real DBA's can cope with a lot of things -- they do not need much hand-holding. Some of them like to assign storage parameters according to their shop standards.
You should provide a script with reasonable defaults. You should define your script in a way that someone can easily find all of your storage parameters and tweak them if necessary.
Your initial data can be via export/import or via a script. I prefer a script.
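For example, a sketch of a SQL*Plus creation script that keeps all the storage parameters in one obvious place for the DBA to tweak (the names, paths and sizes are only placeholders):

-- All site-tunable values up front, so a DBA can find and adjust them:
DEFINE app_tablespace = APP_DATA
DEFINE app_datafile   = /u01/oradata/APPDB/app_data01.dbf
DEFINE app_ts_size    = 500M

CREATE TABLESPACE &app_tablespace
  DATAFILE '&app_datafile' SIZE &app_ts_size AUTOEXTEND ON;

CREATE TABLE app_owner.customers (
  customer_id NUMBER        PRIMARY KEY,
  name        VARCHAR2(200) NOT NULL
) TABLESPACE &app_tablespace;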
I have done this repeatedly from both sides (consumer and provider) as a DBA, developer, and architect.
As a provider, one of my grand accomplishments (in 1996) was the creation of an installation CD for a commercial insurance claims management software product targeted to the largest insurance carriers (a multi-million dollar item). That installation CD installed the Oracle 7.2 RDBMS engine, the FileNet optical storage system (scans paper documents and creates cataloged binary versions), and our custom claim-processing application (built in VB 4.0), all integrated and ready to run. As part of the installation process, the user could skip the Oracle software installation or customize it, and the user could customize/override the database configuration in all of its major details (database, schemas, tablespaces, sizes, disks, etc.).
I also provided the field service for this product, which included traveling to the client site as necessary. I tested the installation CD literally hundreds of times under every imaginable scenario that I could replicate, and we NEVER had a field failure that required even a phone call, let alone a trip (I did travel on four occasions, but for pre-sales stuff instead).
More recently (2007), I scripted the creation of an Oracle 10g database for an internal system at a megacorp. In production, the database was sized at 8 TB, mostly for a single transaction table with high data volume. In test, the database was sized around 1 TB for a modest server. In development, the database was sized around 100 MB to run on my laptop. The EXACT SAME SCRIPTS created all three environments, and I could extend them to handle a new environment/machine in about five minutes. This database involved extreme performance tuning, so customization of all pertinent characteristics was absolutely crucial.
Back to the insurance claims processing product--let me please add that I was originally hired to lead its conversion from a SQL Server database to an Oracle database. That conversion was identified as a business necessity because most potential clients did not view a SQL-Server-based product as a professional, serious solution. That is not quite as common today, but it still applies in general: a software product has a better chance of market penetration if it can accommodate multiple database options as preferred by the target customers (especially enterprise-class customers).
Likewise, the installation CD was also viewed as an essential element. However, that situation and many more have revealed to me that most "real" DBAs will not accept an import-based database installation. As a DBA and architect, I know that I definitely will not for the same reasons.
Simply put, an import-based database installation gives the customer almost no control over the resulting database. It is opaque to the customer, leaving them questioning what it did. It forces the customer to expend massive efforts to attempt to exercise what little control they can. It is notoriously fragile and error-prone (Oracle imports are well known for ownership and permission problems, constraint problems, etc.). Weighing all those impacts, an import-based database installation is unprofessional--it does not put the customers' needs first.
Scripting the database installation provides the right kind of transparency, configurability, selective repeatability, and overall customer control that professionalism demands. It also encourages you to properly understand the impacts of your database design decisions in a way that an import does not.
Best wishes.
Personally I favour SQL scripts to database creation and data loads where possible. I tend to use PL/SQL Developer. It has some good options to generate scripts from an existing database. Once you have these you can run the scripts using sqlplus or any application code that can execute arbitrary SQL (eg JDBC with Java). Toad is the more common (and more expensive) tool for Oracle development.
The only limitation of a SQL export is that it can't export CLOB/BLOB fields. If you have those, you either need to do them separately (as a PL/SQL export) or do the whole thing as a PL/SQL export. There's no drama with this, except that the file is effectively a binary export (extension .pde) and is more limited in how you can execute it.
The other big advantage of SQL source files is they can be version controlled easily. It's really handy to be able to create a database environment by running one or two scripts.
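In practice that can be as small as a master script kept in source control next to the others (the file names below are only an example):

-- create_environment.sql -- run as: sqlplus app_owner@APPDB @create_environment.sql
@schema/tables.sql         -- DDL generated from the reference database
@schema/indexes.sql
@schema/packages.sql
@data/reference_data.sql   -- required default/lookup data
@data/sample_data.sql      -- optional, for dev/test environments only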
The import and export tools for Oracle I think are more applicable for backup and restore operations.
Now, as for delivering that to a customer, from your comments it seems that you'll be giving this to DBAs. Pretty much any Oracle installation will have DBAs involved. They will be fine with SQL scripts to create the schema and do the data load. They will be doing a lot of site-specific configuration (eg tuning the SGA, temp tablespaces, # of concurrent connections, etc based on expected load).
You, as the vendor, can give guidance on any relevant configuration and you may get involved in support and possibly installation, but ultimately it's up to them to figure out what works for them. Oracle runs on a large number of operating systems and hardware variants, with infinite variations in network topology and firewall configuration. You can't factor all of these into an installer, or even a set of instructions (other than the guidelines mentioned previously).
The last time I was involved in the creation of an (Oracle) db (for a reasonably large company with in-house DBAs), the DBAs wanted to know things like:
what we wanted to call the db,
what tablespaces we would need, and an estimate of how much data would be in each one
how many users would be connecting.
(From memory) they set up the db and tablespaces, then we provided a combination of simple scripts that they could run (or clear instructions if a task wasn't easy to automate).
As I say, this was for an in-house app, so your mileage may vary, but in my case they wanted all instructions clearly spelt out so that (a) there was no possibility of a misunderstanding leading to the wrong thing being done, and (b) there was no culpability on their part if something didn't work ("we were just following the instructions").

How do you manage schema upgrades to a production database?

This seems to be an overlooked area that could really use some insight. What are your best practices for:
making an upgrade procedure
backing out in case of errors
syncing code and database changes
testing prior to deployment
mechanics of modifying the table
etc...
Liquibase
liquibase.org:
it understands hibernate definitions.
it generates better schema update sql than hibernate
it logs which upgrades have been made to a database
it handles two-step changes (i.e. delete a column "foo" and then rename a different column to "foo")
it handles the concept of conditional upgrades
the developer actually listens to the community (with hibernate if you are not in the "in" crowd or a newbie -- you are basically ignored.)
http://www.liquibase.org
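If you would rather stay in plain SQL, Liquibase also accepts SQL-formatted changelogs; a minimal sketch (the author, ids and table are made up) looks like this:

--liquibase formatted sql

--changeset jane:001-create-customer
CREATE TABLE customer (
  id   NUMBER        PRIMARY KEY,
  name VARCHAR2(200) NOT NULL
);
--rollback DROP TABLE customer;

--changeset jane:002-add-email
ALTER TABLE customer ADD (email VARCHAR2(320));
--rollback ALTER TABLE customer DROP COLUMN email;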
opinion
The application should never handle a schema update. This is a disaster waiting to happen. Data outlasts the applications, and as soon as multiple applications try to work with the same data (the production app plus a reporting app, for example), chances are they will both use the same underlying company libraries... and then both programs decide to do their own db upgrade... have fun with that mess.
I am a big fan of Red Gate products that help creating SQL packages to update database schemas. The database scripts can be added to source control to help with versioning and rollback.
In general my rule is: "The application should manage its own schema."
This means schema upgrade scripts are part of any upgrade package for the application and run automatically when the application starts. In case of errors the application fails to start and the upgrade script transaction is not committed. The downside to this is that the application has to have full modification access to the schema (this annoys DBAs).
I've had great success using Hibernate's SchemaUpdate feature to manage the table structures, leaving the upgrade scripts to handle only actual data initialization and the occasional removal of columns (SchemaUpdate doesn't do that).
Regarding testing, since the upgrades are part of the application, testing them becomes part of the test cycle for the application.
Afterthought: Taking on board some of the criticism in other posts here, note the rule says "its own". It only really applies where the application owns the schema, as is generally the case with software sold as a product. If your software is sharing a database with other software, use other methods.
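The mechanics behind that rule can be as simple as a version table plus numbered scripts that the application applies at startup; a rough SQL sketch (names invented):

-- Records which upgrade scripts have already been applied:
CREATE TABLE schema_version (
  version     NUMBER        PRIMARY KEY,
  applied_at  TIMESTAMP     DEFAULT SYSTIMESTAMP,
  description VARCHAR2(200)
);

-- Each numbered upgrade script ends by recording itself, so the application
-- can skip anything already present and refuse to start on a failed upgrade:
INSERT INTO schema_version (version, description)
VALUES (42, 'add loyalty_points column to customer');
COMMIT;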
That's a great question. (There is a high chance this is going to end up a normalised-versus-denormalised database debate, which I am not going to start... okay, now for some input.)
Some off-the-top-of-my-head things I have done (I will add more when I have some more time or need a break):
Client design - this is where the VB method of inline SQL (even with prepared statements) gets you into trouble. You can spend AGES just finding those statements. If you use something like Hibernate and put as much SQL as possible into named queries, you have a single place for most of the SQL (nothing is worse than trying to test SQL that is inside some IF statement when your testing just doesn't hit the "trigger" criteria for that IF statement). Prior to using Hibernate (or other ORMs), when I would do SQL directly in JDBC or ODBC, I would put all the SQL statements either as public fields of an object (with a naming convention) or in a property file (also with a naming convention for the values, say PREP_STMT_xxxx), and use either reflection or iteration over the values at startup in a) test cases and b) startup of the application. Some RDBMSs allow you to pre-compile prepared statements before execution, so on startup, post login, I would pre-compile the prepared statements to make the application self-testing. Even for hundreds of statements on a good RDBMS that's only a few seconds, and only once. And it has saved my butt a lot. On one project the DBAs wouldn't communicate (a different team, in a different country) and the schema seemed to change NIGHTLY, for no reason. And each morning, on startup, we got a list of exactly where it broke the application.
If you need ad hoc functionality, put it in a well-named class (again, a naming convention helps with automated testing) that acts as some sort of factory for your query (i.e. it builds the query). You are going to have to write the equivalent code anyway, right? Just put it in a place where you can test it. You can even write some basic test methods on the same object or in a separate class.
If you can, also try to use stored procedures. They are a bit harder to test, as above. Some databases also don't pre-validate the SQL in stored procs against the schema at compile time, only at run time. That usually means taking a copy of the schema structure (no data) and then creating all stored procs against this copy (in case the db team making the changes didn't validate correctly); thus the structure can be checked. But as a point of change management, stored procs are great: on change, everyone gets it, especially when the db changes are a result of business process changes, and all languages (Java, VB, etc.) get the change.
I usually also set up a table called something like system_setting. In this table we keep a VERSION identifier. This is so that client libraries can connect and validate whether they are valid for this version of the schema. Depending on the changes to your schema, you don't want to allow clients to connect if they can corrupt your schema (i.e. you don't have a lot of referential rules in the db, but on the client). It also depends on whether you will have multiple client versions (which does happen in non-web apps, i.e. clients running the wrong binary). You could also have batch tools, etc. Another approach, which I have also used, is to define a set of schema-to-operation versions in some sort of property file, or again in a system_info table. This table is loaded on login and then used by each "manager" (I usually have some sort of client-side API to do most db stuff) to validate, for that operation, whether it is the right version. Thus most operations can succeed, but you can also fail (throw some exception) on out-of-date methods, and it tells you WHY.
Managing the change to the schema: do you update the table or add 1-to-1 relationships to new tables? I have seen a lot of shops which always access data via a view for this reason. This allows table names, columns, etc. to change. I have played with the idea of actually treating views like interfaces in COM, i.e. you add a new VIEW for new functionality/versions. Often, what gets you here is that you can have a lot of reports (especially end-user custom reports) that assume table formats. The views allow you to deploy a new table format while still supporting existing client apps (remember all those pesky ad hoc reports).
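A tiny sketch of that "views as interfaces" idea (tables and columns are hypothetical): version 1 clients keep their old view while the table and a version 2 view move on:

-- The underlying table has been restructured; old reports still see the v1 shape:
CREATE OR REPLACE VIEW customer_v1 AS
SELECT customer_id,
       first_name || ' ' || last_name AS customer_name  -- the old single-column format
  FROM customer;

-- New clients and reports use the v2 interface with the new columns:
CREATE OR REPLACE VIEW customer_v2 AS
SELECT customer_id, first_name, last_name, loyalty_tier
  FROM customer;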
Also, you need to write update and rollback scripts. And again: TEST, TEST, TEST...
------------ OKAY - THIS IS A BIT RANDOM DISCUSSION TIME --------------
I actually had a large commercial project (i.e. a software shop) where we had the same problem. The architecture was two-tier and they were using a product a bit like PHP, but pre-PHP. Same thing, different name. Anyway, I came in at version 2...
It was costing A LOT OF MONEY to do upgrades. A lot: giving away weeks of free consulting time on site.
And it was getting to the point of wanting to either add new features or optimize the code. Some of the existing code used stored procedures, so we had common points where we could manage code, but other areas were this embedded SQL markup in HTML. That was great for getting to market quickly, but with each iteration of new features the cost to test and maintain at least doubled. So when we were looking at pulling out the PHP-type code, putting in data layers (this was 2001-2002, pre any ORMs) and adding a lot of new features (customer feedback), we looked at this issue of how to engineer UPGRADES into the system. Which is a big deal, as upgrades cost a lot of money to do correctly. Now, most patterns and all the other stuff people discuss with a degree of energy deal with OO code that is running, but what about the fact that a) your data has to integrate with this logic, and b) the meaning and also the structure of the data can change over time? And often, because of the way data works, you end up with a lot of sub-processes/applications in your client's organisation that need that data: ad hoc reporting or complex custom reporting, as well as batch jobs that have been built for custom data feeds, etc.
With this in mind I started playing with something a bit left of field. It has a few assumptions: a) data is read far more heavily than it is written; b) updates do happen, but not at bank levels, i.e. say one or two a second.
The idea was to apply a COM/interface view to how data was accessed by clients, over a set of CONCRETE tables (which varied with schema changes). You could create a separate view for each type of operation: update, delete, insert and read. This is important. The views would either map directly to a table, or allow you to put a trigger on a dummy table that does the real updates or inserts, etc. What I actually wanted was some sort of trappable level of indirection that could still be used by Crystal Reports, etc. NOTE: for inserts, updates and deletes you could also use stored procs. And you had a version for each version of the product. That way your version 1.0 had its version of the schema, and if the tables changed, you would still have the version 1.0 VIEWS, but with NEW backend logic to map to the new tables as needed; you also had version 2.0 views that would support new fields, etc. This was really just to support ad hoc reporting, which, if you're a BUSINESS person and not a coder, is probably the whole point of why you have the product. (Your product can be crap, but if you have the best reporting in the world you can still win; the reverse is also true: your product can be the best feature-wise, but if it's the worst at reporting you can very easily lose.)
Okay, hope some of those ideas help.
These are all weighty topics, but here is my recommendation for updating.
You did not specify your platform, but for NANT build environments I use Tarantino. For every database update you are ready to commit, you make a change script (using RedGate or another tool). When you build to production, Tarantino checks if the script has been run on the database (it adds a table to your database to keep track). If not, the script is run. It takes all the manual work (read: human error) out of managing database versions.
I've heard good things about iBATIS 3 Schema Migrations System:
User Guide: http://svn.apache.org/repos/asf/ibatis/java/ibatis-3/trunk/doc/en/iBATIS-3-Migrations.pdf
As Pat said, use Liquibase, especially when you have several developers with their own dev databases making changes that will become part of the production database.
If there's only one dev, as on one project I'm on now (ha), I just commit the schema changes as SQL text files into a CVS repo, which I check out in batches on the production server when the code changes go in.
But Liquibase is better organized than that!
