Versioning of code and data in Oracle

Let's imagine a standard situation.
Having the current DB schema in a working state, I would like to create a snapshot of this state of the schema objects and name it SNAP_1.
Then, if I update the schema and run into problems (bugs or unstable behaviour of the new code), it would be good to switch the whole schema's code back to SNAP_1 quickly, in one command.
I'm wondering whether there is any built-in feature of Oracle DBMS for versioning:
PL/SQL code (schema objects)
Data (for example, within configuration tables)
Does Oracle DBMS provide native tools for versioning at least one of these two?

The answer is no. But Oracle 11.2+ has something called "Editions".
This method has many restrictions. For example, data and table structure cannot be versioned.
The cool thing is that separate sessions can use different versions of the DB objects simultaneously (for example, a package before and after a fix).
Here is Oracle's documentation on CREATE EDITION, and some examples of editions.
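To give a feel for how this works, here is a minimal sketch of edition-based redefinition; the user, edition, and package names are illustrative:

-- One-time step (as a DBA): allow the schema owner to use editions
ALTER USER app_owner ENABLE EDITIONS;

-- Create a new edition as a child of the default edition
CREATE EDITION snap_2 AS CHILD OF ora$base;

-- A session chooses which edition of the code it sees
ALTER SESSION SET EDITION = snap_2;

-- Creating the package now only affects SNAP_2;
-- sessions still on ORA$BASE keep running the old version
CREATE OR REPLACE PACKAGE app_owner.my_pkg AS
  PROCEDURE do_work;
END my_pkg;
/

To fall back, a session simply switches its edition again (ALTER SESSION SET EDITION = ora$base).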

Related

How do I integrate Liquibase within an existing CI/CD pipeline in a large organization?

We are working in a very big organization with many databases (of many types), many schemas, and many users.
Does Liquibase have to work with some source control (for locking the files when many users in the organization are working against the same DB, same schema, etc.)?
What is the best practice for working with Liquibase in a very big organization with many concurrent users?
Can SQLcl generate SQL format changeLogs, or just the XML format?
Is there some integration with SQL Developer? I mean, suppose a user changes an object via SQL Developer, what happens then?
We get this type of question all the time; after folks get a handle on how to automate DB changes, the next step is typically to add it into an existing CI/CD workflow.
Yes, Liquibase works with any source control. Most users are using
Git. But you can use Git, TFS, SVN, CVS... Once you are up and
running with Liquibase, you just need to make sure that your scripts
are in source control and you are good to go.
Besides third-party source control tools, Liquibase has a tracking table called "DATABASECHANGELOG" that keeps track of the changes applied to your database by Liquibase deployments.
Here is some more information about getting started and How Liquibase Works. https://www.liquibase.org/get_started/how-lb-works.html
Liquibase has one more table that it uses internally, called "DATABASECHANGELOGLOCK".
This table was designed to prevent multiple Liquibase users from running deployments concurrently, which could potentially leave the database in a bad state. Once the Liquibase deployment (the liquibase update command) is done, the "DATABASECHANGELOGLOCK" table allows the next Liquibase user to deploy.
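As a quick illustration, both tracking tables can be inspected with plain SQL; this is just a sketch against the default table and column names:

-- ChangeSets Liquibase has already applied to this database
SELECT id, author, filename, dateexecuted, exectype
FROM   databasechangelog
ORDER  BY dateexecuted;

-- Lock row used to serialize deployments (LOCKED = 1 while an update runs)
SELECT id, locked, lockgranted, lockedby
FROM   databasechangeloglock;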
You can use both SQL and XML formats (or even JSON and YAML formats).
When using SQL, you have a few options:
The best option is to use formatted SQL changeLogs (see the sketch after this list): https://www.liquibase.org/documentation/sql_format.html
https://www.liquibase.org/get_started/quickstart_sql.html
You can also use plain raw SQL files referenced from an XML changeLog:
https://www.liquibase.org/documentation/changes/sql_file.html
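For example, a minimal formatted SQL changeLog might look like this; the author, changeSet id, and table are illustrative:

--liquibase formatted sql

--changeset alice:create-app-config
CREATE TABLE app_config (
  cfg_key   VARCHAR2(100) PRIMARY KEY,
  cfg_value VARCHAR2(4000)
);
--rollback DROP TABLE app_config;

The --rollback comment is how a plain SQL changeSet gets a rollback statement, since Liquibase cannot infer one from raw SQL.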
When using XML, you can find all the available change types (also called changeSets) on the following page (listed on the left of the page):
https://www.liquibase.org/documentation/changes/
XML changeLogs are more database-agnostic and can sometimes be reused across different database platforms when doing migrations. Also, many of the XML change types can be rolled back automatically. The reason this is possible with XML is that Liquibase uses its own built-in functions to figure out inverse statements, for example turning "create table" into "drop table".
For each of those changeSets you can find out whether they are eligible for auto rollback (at the bottom of the page). For example, the create table changeSet shows Auto Rollback = yes:
https://www.liquibase.org/documentation/changes/create_table.html

Using TOAD data modeler with existing TOAD Oracle

I need to start a long-term project in mapping out data tables so that we can get a high-level view of what information we store in our Oracle database and how the tables are linked to each other. This is largely for GDPR preparation.
Since our organization has been around for a number of decades, its database is massive. With TOAD for Oracle, I'm able to see all columns in our tables easily, so I started looking at different database mapping tools (ER/ONE, DDM, Astah) but they all look like I need to manually create all the tables and columns and draw their relationships out by hand.
I'm hoping to minimize as much manual labor as possible and am wondering if using TOAD data modeler would help since I'm using TOAD for Oracle anyways. Could I somehow automate the table, column, and relationship creation process?
Our organization only has Oracle's base version unfortunately (I think the premium bundle has data mapper included in it maybe... not sure.) Any thoughts on the options I have?
Bundle: Toad for Oracle Base (64-bit), Add-Ons: <-none->
Our organization only has Oracle's base version
Note: TOAD is not an Oracle product, it is owned and developed by Quest.
they all look like I need to manually create all the tables and columns and draw their relationships out by hand
Any decent data modelling tool supports reverse engineering a physical data model from an existing schema. How good the derived model is will depend on how good your schema is (my bet: decades of development without a data modelling tool? not good). For instance, if your schema has foreign keys, the reverse engineering process will use them to draw the relationships between tables (even if they are disabled). But if there are no foreign keys then you're on your own.
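As a quick sanity check before reverse engineering, you can see how many relationships a tool would have to work with by querying the data dictionary; a sketch, with a hypothetical schema name:

-- Foreign keys (enabled or disabled) that a modelling tool can use to draw relationships
SELECT c.table_name      AS child_table,
       c.constraint_name,
       p.table_name      AS parent_table,
       c.status
FROM   all_constraints c
JOIN   all_constraints p
       ON  p.owner = c.r_owner
       AND p.constraint_name = c.r_constraint_name
WHERE  c.constraint_type = 'R'
AND    c.owner = 'MY_SCHEMA';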
As you're already using TOAD, you are right to want the TOAD modelling extension. You can buy it as a standalone purchase. But if your company won't spring for the extra licences, you should check out Oracle SQL Developer Data Modeler. It's free, and it has the most comprehensive support for idiomatic Oracle. (I'm not saying it's the best data modelling tool of them all, but it's very good for something which is free.) Find out more.

Developer sandboxes for Oracle database

We are developing a large data migration from Oracle DB (12c) to another system with SSIS. The developers are using a production copy database but the problem is that, due to the complexity of the data transformation, we have to do things in stages by preprocessing data into intermediate helper tables which are then used further downstream. The problem is that all developers are using the same database and screw each other up by running things simultaneously. Does Oracle DB offer anything in terms of developer sandboxing? We could build a mechanism to handle this (e.g. have dev ID in the helper tables, then query views that map to the dev), but I'd much rather use built-in functionality. Could I use Oracle Multitenant for this?
We ended up producing a master subset database of selected schemas/tables through some fairly elaborate PL/SQL, then made several copies of this master schema so each dev has his/her own sandbox (as Alex suggested). We could have used Oracle Data Masking and Subsetting, but it's too expensive. Another option for creating the subset database would have been to use Jailer. I should note that we didn't have a need to mask any sensitive data.
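If you do have the Multitenant option licensed, the per-developer copy can also be done by cloning a pluggable database; a minimal sketch, with illustrative PDB names and file paths:

-- Run as a common user in the CDB: clone the master subset PDB for one developer
CREATE PLUGGABLE DATABASE dev_alex FROM master_subset
  FILE_NAME_CONVERT = ('/oradata/master_subset/', '/oradata/dev_alex/');

ALTER PLUGGABLE DATABASE dev_alex OPEN;

Each developer then connects to their own PDB service, so the intermediate helper tables never collide.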
Note: I would think this is a fairly common problem, so if new tools and solutions arise, please post them here as answers.

Using Liquibase to version table definitions, not change sets

I'd like my repository to version only the latest table definitions (no change sets), and have Liquibase figure out which changes are needed when patching my databases. Please take note that I have a very big database schema (1000+ tables) installed at hundreds of customer sites, each on a different version, and I really don't know which objects each version has.
How can I make a Liquibase-based installer for my application, given my set of table definitions and hundreds of databases with about 12 different versions of objects on each one?
To be more specific, I'd like Liquibase to compare my table definitions with the production database and emit the ALTER TABLE statements required to bring the database up to my latest version.
I could contribute code if necessary in order to get this done.
Liquibase and tools like it (for example flyway) are primarily designed to support database migrations. A migration is where every change to the DB is tracked so that it can be replayed on target environments thereby keeping them in sync with development (although time-shifted). It's all about keeping your schema under revision control.
Your use case is a little different. If I understand correctly you're trying to retrofit Liquibase onto a series of environments that you are not 100% certain match your application's current schema?
I would only recommend migration tools like liquibase if you intend to use them going forwards. If all you want is a DB diff tool, I would suggest you look elsewhere.
To perform an initial sync, I would suggest you investigate the diffChangeLog command, coupled with the changeLogSync command, to initialize Liquibase on the target DB.
Comparing databases and generating SQL scripts using Liquibase

Chance for re-using Oracle database after system migration?

I have an ERP application running with Oracle Forms and an Oracle database. Now I am planning to migrate this application to a Java-based enterprise application. Would it be a good idea to keep the existing Oracle database as the back end and develop a web application, with a certain level of changes/additions to the DB design?
There are two facts to know before answering your questions:
Does your database schema contain some Oracle Forms-specific structures, or is it in 3rd normal form, simply storing data using keys and enforced referential integrity?
How much stored code does your database contain?
Ad 1. Oracle Forms doesn't have specific schema requirements. It works best if your schema is based on 3rd normal form. If your schema is like this, use it for the new Java application. We have both Forms and Java EE applications on the same database schemas and it works fine.
It is an advantage if you have keys (primary, unique, foreign) in your schema. Use them when generating the Java app.
You will probably have to add @Version columns for optimistic locking (see https://docs.oracle.com/javaee/6/api/javax/persistence/Version.html). But that is no reason to build a new schema for it.
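A minimal sketch of such a change; the table and column names are illustrative:

-- Add a counter column that a JPA @Version field can map to for optimistic locking
ALTER TABLE customer ADD (row_version NUMBER(10) DEFAULT 0 NOT NULL);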
Ad 2. You will have to rewrite the bigger part of the database stored code (triggers, procedures, functions) in Java. In most cases this does not have a dramatic impact on the schema structure, but you will have to deal with it.
So: if your database schema is not tailored to the needs of some UI client AND you only want to use a new client, use your schema. If not, create a new one.

Resources