Oracle SQL Developer Database Diff functionality doesn't consider dependencies

I'm working on creating a deploy script to migrate new development from our dev server to our UAT server. Unfortunately, the devs that made the changes didn't script them out as they coded. The easiest way for me to approach this is to use SQL Developer's database diff functionality. It does a good job of highlighting the differences and creating a script that I can run on UAT.
However, I've noticed that it doesn't take any dependencies into account. For example, it will put the command to create a table below a table that references it in a foreign key constraint. Because the referenced table doesn't exist yet, the first table's create command fails. I've seen this with views referencing packages, packages referencing packages, etc.
Is there any easy way to either 1) force SQL Developer to export in a "smarter" order or 2) manually calculate the dependencies (e.g. by querying USER_DEPENDENCIES, etc.) so that I can sort the file of create commands without resorting to trial and error? I guess we could consider purchasing a commercial product as long as it matched exactly what we are looking for.
Note: we will probably have to deploy to UAT multiple times in order to support testing by end-users. I am trying to automate this as much as possible so I don't have to manually recreate this script every single time!
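For option 2, something along these lines might be a starting point (untested, single schema assumed; note that foreign key dependencies between tables don't appear in USER_DEPENDENCIES, so USER_CONSTRAINTS would be needed for those):

-- PL/SQL, view and package dependencies: one row per "object X references object Y"
select name, type, referenced_name, referenced_type
  from user_dependencies
 where referenced_owner = user;

-- Table-to-table FK dependencies: the child table must be created after its parent
select c.table_name as child_table,
       p.table_name as parent_table
  from user_constraints c
  join user_constraints p on p.constraint_name = c.r_constraint_name
 where c.constraint_type = 'R';

Feeding those parent/child pairs into a topological sort should give an order in which referenced objects are created before the objects that reference them.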
Thanks!

Related

Update database on different environments in a Joomla 3.9 project

We are working on a Joomla 3.9 project, have different environments, and are using Git as our VCS, so every developer works on his own branch. It would be nice to have a database compare function like in TYPO3 or Contao (see the database differences after updating the project and apply the database changes with one click), or something like the Laravel migration system.
Any developer should be able to easily update his own local database after database changes were made, whether through an extension update via the backend or by another developer. And of course the staging or live system must be easy to update too. We don't want to execute SQL scripts with the changes in phpMyAdmin.
We have tried https://dbv.vizuina.com/, but it is not a 100% solution: for example, there is no CLI support to start the migration process from an update script on the server.
Does anyone have a solution or know of an extension that can solve this problem? Or can this be handled with core Joomla functions (maybe with a little adjustment)?
So far, I've seen three possibilities for executing modifications to one or more extension tables:
1: Use the extension's revision control in the schema table: add a new SQL file with a version number higher than the one recorded in the schema table for this extension, also increase the version in the manifest.xml, and zip the extension again.
Reinstall the extension via Extensions -> Manage -> Install, and the new SQL file with the higher version number will be executed.
2: Like the point above, but install the extension via the Joomla update mechanism (update server).
3: Create a new SQL file in the sql/ folder of the extension. No version number is needed in the file name, just update.sql or any other name. Execute this script in the update() method of script.php after the extension is installed again (in this case it's an update).
The third possibility might be interesting. It should be possible to trigger the update() method with a CLI command/function, so that it can be run from a script on the server.
But how can I find out which update scripts have already been executed? Let's say I have 3 update files in the sql folder: update-1.sql, update-2.sql and update-3.sql.
update-1.sql has already been executed, so I don't want to execute this file again - only the other two.
The schema table is only used with the first two options. Is this information stored somewhere, or do I have to track which update scripts have been executed myself?
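As far as I can tell, with the first two options the last applied version per extension ends up in the #__schemas core table, so it could be checked with something like (untested):

-- version_id holds the last applied schema version for each extension_id
select extension_id, version_id from #__schemas;

For the third option there seems to be no such bookkeeping out of the box.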
The answer regarding database versioning for extensions depends on whether these extensions are tightly coupled to the application or need to be reusable in other applications as well.
The latter case normally means that each extension accesses its own custom tables, in which case you should keep the database versioning for the extensions separate from that of the application.
The app's version history can be kept in a db_version table. An insert statement is then added at the end of each update script (recording an incremental version number), e.g.
insert into db_version (version, author, description) values ('003', 'Verna.Collins', 'removing obsolete column');
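A minimal sketch of such a table (column names and types are just an example):

create table db_version (
  version     varchar(10)  not null,
  author      varchar(100),
  description varchar(255),
  applied_at  timestamp default current_timestamp,
  primary key (version)
);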
If you need to apply database migrations to the extensions as well, maintain a db_version_extensions table which keeps a version history for each extension separately, e.g.
'001' 'extension1','Mandy.Aguilar','initial version'
'002' 'extension1','Mandy.Aguilar','adding extra column'
'001' 'extension2','Edna.Potter','initial version'
'002' 'extension2','Elvira.Townsend','dropping unused table'
...etc.
Each extension zip should contain the initial creation script and all SQL update files (which should normally not touch the rest of the app's tables).
After a pull, it is then relatively easy to execute all the scripts whose filename version is greater than the last version number recorded in the database. This should be done for the app and for each extension separately.
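The check before running anything is then just a lookup of the highest version already applied, for example (assuming a column layout matching the rows listed above):

-- highest version applied to the app schema
select max(version) from db_version;

-- highest version applied per extension
select extension, max(version)
  from db_version_extensions
 group by extension;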
Now if the extensions are tightly coupled to the app, it means that they might be using or updating tables of the app. For extensions of this type, you can ship the updates as part of the application updates. These extensions could even be developed in the same repo and be kept as directories instead of zip files.
I'm not sure whether Joomla offers any tools for automating incremental DB updates, but a nice tool is Flyway, with ports for the command line, Maven and Gradle. See: how does Flyway work
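For reference, Flyway works from plain versioned SQL files whose names follow the V<version>__<description>.sql convention and records what has already run in its own history table, so the bookkeeping above is handled for you. A trivial example migration (file name and table are made up):

-- V2__add_ordering_column.sql
alter table my_extension_items add column ordering int not null default 0;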

I started out with PL/SQL development in SQL Developer. How will my work in the DEV environment be pushed to QA and then eventually PROD?

My understanding is that it might be the DBA's job to push the changes, much like a refresh, but without the DML (data manipulation).
Any comments/suggestions would be great!
There is no simple answer for this, but essentially the task of deploying code is similar for any computer language, with the main differences for the database component being:
We can't drop and rebuild tables because we need to keep their contents.
Our code compiles in the database, so there are no binaries to deploy.
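In practice that means changes ship as incremental scripts: ALTER statements for tables, and CREATE OR REPLACE for code and views. A made-up example (all object and column names are invented):

-- tables are altered in place so existing data is preserved
alter table customers add (loyalty_tier varchar2(20) default 'STANDARD');

-- code and views are simply recompiled in the target database
create or replace view gold_customers as
  select customer_id, customer_name
    from customers
   where loyalty_tier = 'GOLD';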
The first rule for PL/SQL development is that you should always, always work from source-controlled files, for example the code for mypackage would be in a source file named mypackage.pck (or whatever file extension works best with your chosen IDE, as long as it's not .sql). Don't edit database source code directly except for trivial testing when you don't care about keeping the changes.
Some sites only ever do incremental deployments, while others use a full teardown and rebuild from a release branch for major releases, which takes a bit more thought but is ultimately cleaner. Then deployment consists of running the scripts, recompiling the schema and perhaps running some tests and checks to ensure it has worked. You'll need a branching strategy, perhaps some kind of 'run everything in this folder' script, and ideally some tools such as TeamCity or Jenkins to automate as much as possible, though I don't think there is as much out there ready-made for PL/SQL as there is for more mainstream languages such as Java.
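The 'recompile the schema and check' step can be as small as this (run as the schema owner or with suitable privileges):

-- recompile whatever the deployment invalidated
exec dbms_utility.compile_schema(schema => user, compile_all => false)

-- anything still invalid points at a broken deployment
select object_name, object_type
  from user_objects
 where status = 'INVALID';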
The deployment process is usually performed by an application support team as it does not require any DBA privileges unless you are creating schemas or roles etc, although some sites may organise support roles differently.
Yes, it will partially involve the DBA. But as the developer, you will probably need to provide the complete DDL script, exported to a file, for the DBA to deploy.
Check the 'Importing and exporting SQL scripts' part of this link for exporting DB scripts: https://docs.oracle.com/cd/B25329_01/doc/appdev.102/b25309/sql_rep.htm#BABBHEHA
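If the changes were never scripted in the first place, DBMS_METADATA can pull the DDL of individual objects to build that file; for example (the package name is a placeholder):

select dbms_metadata.get_ddl('PACKAGE', 'MY_PACKAGE') from dual;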

Creating an on-demand workflow directly in a Production instance

I need a temporary workflow to update records in a Production instance. I was thinking of just creating one directly in production, running it on the records, and then deleting it. What's the best way to do this: should I just create it in the customization area (i.e. the Default solution) and delete it later, or create a new solution, add this workflow to that solution, and then delete that solution?
Or should I create it in Dev and then move the solution to Test/Production like we normally do?
As a best practice, create it in Dev in a new hotfix solution and test it against some Dev records. Then export the solution (possibly as a managed one) and deploy it to Test/UAT/Prod.
Once the data fix is completed in Prod using your workflow, delete the managed solution (which will delete the workflow too).
If you want, you can instead just deactivate the workflow in Prod (Settings -> Processes) for future use and to keep the environments in sync.
Never customize in the default solution - unless you're working on a managed solution (over which you have no control in Dev/Test), or you're looking for a full list of components in an entity without adding them to your solution (for example, when writing a plugin you need to know the field names of OOTB fields - I go to the Default Solution to get these).
You didn't tell us in your message what the workflow should do, or whether it needs to be done in DEV or TEST.
What I would do is build the workflow in DEV or TEST, re-create the situation that needs to be fixed in PROD, and test it in DEV/TEST. Once you know it will work, you can either push it over as a solution (or as part of your existing solution), or simply add it to your solution in PROD and run it as required. If you need to delete it afterwards, delete it; if not, just deactivate it so people don't accidentally run it.

How to create incremental scripts to update database schemas using Visual Studio 2010?

I'm trying to use a VS 2010 SQL Server Database Project to keep track of changes made to my database and to generate the appropriate scripts when a change needs to be deployed from the dev to the production environment.
I have created a schema comparison between my dev database and the project schema, which does a great job. However, I cannot find a way to create incremental scripts; the only thing I get is scripts with CREATE statements (the Export to Editor option).
Am I doing something wrong?
Thanks in advance.
As part of our automated build process, we store .dbschema files for each environment in source control. During the build, we create the .dbschema file from the database project and then use a vsdbcmd command-line call to generate the change script between the project schema and each destination DB schema. If you need the specific command-line call, let me know.
If you're using "Data Dude" correctly, these are done for you and run when you choose Deploy. Just keep your schema (tables, stored procs, populate scripts, etc.) as a project item and change it as you need to. The build-and-deploy process will generate the scripts. http://msdn.microsoft.com/en-us/library/ff678491.aspx is a not-bad starting point if you want to get these scripts and run them yourself against various staging, production, etc. databases.
In the .deploymentmanifest file there are two settings:
<DeployToDatabase>False</DeployToDatabase>
and
<DeployToScript>True</DeployToScript>
Running vsdbcmd will then generate the change scripts without affecting the target database. All you'd need is a version of the database which is the same as the production version, or access to point vsdbcmd at production to generate the script.

Managing database scripts in your solutions

I usually create a solution folder in Visual Studio and put my DB scripts in it. I always use at least this set of scripts:
Drop model
Create model script
User functions
Stored procedures
Static data (lookup tables)
Test data (not deployed)
Then I simply combine them into a single script and run it against SQL Server, so I'm able to recreate the whole DB in a single step.
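One way to do the combining is a single master file executed in SQLCMD mode, where :r pulls in each of the scripts listed above (file names here are only illustrative):

:r .\01_drop_model.sql
:r .\02_create_model.sql
:r .\03_user_functions.sql
:r .\04_stored_procedures.sql
:r .\05_static_data.sql
-- test data is deliberately left out of the deployed build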
Anyway. I've never used projects in either:
Visual Studio or
SQL Management Studio
I've tried creating a SQL Server 2008 Database Project in Visual Studio 2010, but I'm somewhat overwhelmed by all the possible server settings (which I prefer to leave at the server defaults anyway). So I'm a bit confused: should I use this project template, or should I just keep doing what I've always done?
What do you use and why? What are advantages I may benefit from by using either?
If I were you I would continue to do it the way you are doing it. In fact I do! The advantages of having the actual .sql files right there in a folder for you to use/edit/look at are, in my opinion, far greater than the advantages you get by using a DB project. A DB project would be used if you were doing something like storage reports, where you have to communicate with like 8 databases, compare them to 8 different databases, save result sets, etc. Now don't get me wrong, there are advantages to Database Projects, I just don't think they help much when you have such a simple setup that already works.
Advantages of the SQL Server 2008 Database Project in VS10:
Not having to switch back and forth from the client you currently use to communicate with your SQL Server.
Decent data and schema compare tools.
Gives you a one-click way to reverse engineer a database into source control, and keep it up to date.
You can compare projects to physical databases and vice-versa. (This makes it pretty easy to keep your database up to date, no matter where you make the change: in the file-system database project or in the physical database itself.)
If the tool you currently use is not specifically tailored to SQL Server, this one is.
Extremely helpful if you need to do unit tests directly on the database without using abstractions.
If you're looking for something a little less complicated, you might want to try SQL Source Control. This won't even require you to maintain scripts, as it does this for you behind the scenes. It will, however, only work as a solution for you if you use either TFS or SVN. And it costs $295...
It has a 28-day trial period, so if you're happy to try it out, I'd be interested in your feedback.
