At work we use Oracle (12c client) to store most of our data and I use SQL Developer to connect to the database environments.
Issue:
We have issues where tables are being modified for one reason or another (someone is too lazy to create a new table, so they add new columns or change data types and lengths). This in turn breaks the table for others who actually use it for its real purpose.
Update:
We have DEV, TST, UAT, and PRD environments. We test and have scripts approved before we promote to PRD. The problem resides in DEV when we want to go back to an existing table to make a change, but that table has already been modified for different reasons.
Question 1:
Is the versioning just for stored procedures or is it possible to track changes to table structures, functions, triggers, sequences, synonyms, etc.?
As Bob Jarvis indicates, you need far more than an answer to your question: you need policies and practices enforced for all developers. Some ideas from places I have worked:
- Every developer gets a VM with a copy of the database installed. They can do whatever they like on it, but must supply scripts to move their changes to production. These scripts are applied on a test instance and again on a QA instance before going to production.
- Subversion works on all operating systems, and TortoiseSVN works well on Windows. Committing scripts to a repository works well; this is integrated with SQL Developer and can also be done with Toad.
- You have a permissions issue: too many people have the privileges to alter tables. Remove these permissions and centralize them on one or two people. Changes are funnelled through them as scripts, and oversight can be applied there. Developers can have their own schema to test in, or a VM with a copy of the database for development.
Run this query to see who can alter tables:

SELECT *
FROM dba_tab_privs
WHERE privilege = 'ALTER';
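Note that this only shows object-level ALTER grants. Users holding the ALTER ANY TABLE system privilege can also modify tables; a companion query against the standard DBA_SYS_PRIVS dictionary view (a small sketch, run as a privileged user) covers that case:

-- who holds the system privilege that permits altering any table?
SELECT grantee
FROM dba_sys_privs
WHERE privilege = 'ALTER ANY TABLE';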
The key is a separation of concerns. Developers should have access to a schema where they can do what they need. The company needs to know who did what, when and where.
If you have more than one developer working on multiple changes to a dev environment, then you need coordination and communication as well as source control. A weekly meeting to discuss overlap areas or a heads-up chat message are just some ways to work together.
The approach I think works best is to have a DEV database where all the developers manage their own set of schemas.
Scripted builds are provided, along with test data loads, to allow any developer to create his own working schema. He then works there, tests his changes, and commits them via scripts to source control. DEV databases do not need to be large; they just need enough test cases to allow for unit tests.
Script all the changes so that they can be checked into a version control system and merged with other changes. The goal is a system where devA checks in changeA, and when it is merged with the main trunk, devB picks up changeA as he builds his own schema.
This approach requires care if the main project schema employs PUBLIC synonyms. You will need to consider this as you go forward.
I would also advise that with each change checked in, an accompanying back-out script should be checked in as well.
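For instance, a change script and its back-out script might be checked in as a pair like this (table, column, and file names are hypothetical):

-- change script: add_status_to_orders.sql
ALTER TABLE orders ADD (status VARCHAR2(20) DEFAULT 'NEW');

-- back-out script: add_status_to_orders_backout.sql
ALTER TABLE orders DROP COLUMN status;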
The advantage of this approach is that devs can manage their own schemas. With a scripted approach they don't all need DBA knowledge, and they don't need to manage the database either. Having all of these schemas on one database makes it easier to manage and control resources.
I've used this approach in teams with 50+ developers and it has worked very well.
This approach also paves the way for devs checking in scripts and having a deployment package created automatically.
There is so much that can be done to make the development-test-deploy-backout cycle easier to manage.
When we create pull requests in GitHub, it auto-triggers a dbt Cloud job that runs a test build of our models. The database in Snowflake for this build is called "Continuous Integration". In this database we have hundreds of schemas going back almost two years. Is there any reason to keep these schemas and tables? I sure would like to do some cleanup.
You should be able to delete these old schemas with no consequence.
Each of these schemas is built based on the change introduced in an earlier version of the code and (depending on how you set up your GitHub Action) uses either pre-defined test data or the raw data available at the time the test run began.
These CI jobs can serve two use cases:
- [primary] test that the code works and that data validation tests pass
- act as a way to do time travel, which I'll describe below
The first use case does not need the artifact to be preserved after the job runs.
The second use case may be important to you when trying to debug reports that were generated many months ago.
Example: let's say the finance department wants to know why a historical value of active users has changed in the latest report. This may have been an error that was fixed in your dbt logic, or perhaps active users was pulled with an incorrect filter in your BI layer. If you had dbt artifacts built from that era, you would be able to use them to look for any dbt-level changes.
How far back do you think you'd need the artifacts for time travel? Check with your stakeholders and come up with a time frame that works for your business, and you can delete all the CI artifacts built prior to that date.
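Once you have that cutoff, the cleanup itself can be scripted. A hedged sketch in Snowflake SQL, assuming the CI database is literally named CONTINUOUS_INTEGRATION and a six-month retention window (adjust both to your environment):

-- list CI schemas created before the agreed cutoff
SELECT schema_name, created
FROM CONTINUOUS_INTEGRATION.INFORMATION_SCHEMA.SCHEMATA
WHERE created < DATEADD(month, -6, CURRENT_TIMESTAMP());

-- then drop each one, e.g. (hypothetical schema name):
DROP SCHEMA IF EXISTS CONTINUOUS_INTEGRATION."DBT_CLOUD_PR_123_45";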
I use Azure DevOps for Azure databases, deploying with dacpacs.
I can easily deploy the schema from Dev to Test and Prod.
However, I have an issue: the Dev databases have several Dev-only tables that I don't want to deploy to Test and Prod.
Excluding certain tables manually with Visual Studio has resulted in human error, and unwanted tables have been deployed to prod.
Is there a solution for making sure that Dev-only tables are automatically excluded from the dacpac?
Is it possible to automatically filter out tables whose names start with "Temp"?
No, you can't ask it to include only certain object types. But you can ask it to exclude certain object types (/p:ExcludeObjectTypes), which lets you filter down to exactly what you want by eliminating everything else. Using the DacFx API's programmatic model you can accomplish more targeted/convenient things, but that requires writing code.
You can use SqlPackage.exe to limit the changes deployed by passing the /p:ExcludeObjectTypes argument to indicate the types you don't want to deploy.
For example: /p:ExcludeObjectTypes="StoredProcedures;ScalarValuedFunctions;TableValuedFunctions"
The full list of possible ExcludeObjectTypes values is here: https://learn.microsoft.com/en-us/dotnet/api/microsoft.sqlserver.dac.objecttype?view=sql-dacfx-150
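A fuller invocation might look like the following sketch (server, database, and file names are hypothetical; the property value is the one from the example above):

sqlpackage.exe /Action:Publish ^
    /SourceFile:MyDb.dacpac ^
    /TargetServerName:myserver.database.windows.net ^
    /TargetDatabaseName:MyDb ^
    /p:ExcludeObjectTypes="StoredProcedures;ScalarValuedFunctions;TableValuedFunctions"

Bear in mind that ExcludeObjectTypes filters by object type, not by name, so by itself it cannot exclude only the tables whose names start with "Temp"; name-based filtering is where the programmatic DacFx route mentioned above comes in.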
I am new to the OBIEE tool, so kindly bear with me if my question is basic.
I have 2 RPD files, a.rpd and b.rpd. I need to switch between these 2 RPDs on the same server and through the same OBIEE tool.
Do I need to deploy both RPDs on the server to switch between them through the same OBIEE tool?
From my own attempts, I can open both RPD files through the Administration tool (File --> Open --> Offline) without any deployment.
Is it mandatory to deploy both RPDs on the server to open them online?
I guess I need to define 2 different ODBC system data sources for my repositories after deployment.
Thanks,
I found the answers to my questions through my own research, so I am sharing them below so that others can benefit:
1) OBIEE is designed to work with a single repository.
OBIEE has a single repository at any point in time. You can deploy A.RPD, use it, and after a bit deploy B.RPD and use it. But it's either A or B; you will not have both on the server.
2) You can merge A and B together (the Admin tool allows you to do that, and you obviously need unique names inside both or they will override each other) if you want to have A+B deployed.
It's possible to safely merge 2 RPDs that have different business models, different subject areas, and different physical sources. In case of conflicts you must resolve them: keep A or replace it with B. It's like managing conflicts in version control systems.
3) However, you can open both files locally in "offline" mode; for that, all you need is the file itself.
4) It's also safer to work offline, as you can do the whole piece of work, verify the RPD, and upload it only once everything is done. If you work online and start making changes but don't finish your work, people will be using an OBIEE system with a half-done RPD, which could lead to errors. Working online also has some constraints because of how check-in and check-out work.
Thanks,
Situation
Oracle APEX (version not specified)
Single Application
Administration Issue: Deployment of New App version.
Detail
The latest version is on Server1
End Users are actively working on an older version hosted on Server2.
How do I import the changes made on Server1 without impacting users who may still be working on Server2?
Some Basics on Deploying APEX App Upgrades
It's always good etiquette to warn users that an upgrade will be in progress. Give a few days' advance notice and a window of time you will need to accomplish the task. In this case, as I will explain, you can install your new upgrade and switch over to the new version quickly.
Use an Application Alias
Use an application ALIAS to identify your application, to get away from the arbitrary, sequence-controlled ID.
The ALIAS is set in the application's definition attributes.
In this example, both the Alias AND the ID can be used. I recommend publishing the ALIAS to the users and to the support staff who make the little shortcut icons on everyone's desktop:
http://sakura.apex-server-01.guru:8080/apex/f?p=ALIAS
Where "ALIAS" is whatever you've assigned to the app (such as 'F_40788'). Aliases must be unique across an entire INSTANCE, or you can set up some clever redirects using Oracle's RESTful Web Service builder.
How to Switch Your Live Application to Maintenance Mode
The best way to avoid any unwanted DML or user activity from end users is to lock the front-end application right before you switch over to the new version.
This will prevent anything from changing the state of the data during the upgrade. In answer to the question: if a DML (insert, update, delete) operation is in flight when the app is overwritten, either the transaction fails and rolls back because it never reached the COMMIT step, or worse. You're better off just locking up for a few minutes.
How to Set an Application to Maintenance Mode
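One way to do this is to set the application status to UNAVAILABLE. A hedged sketch in PL/SQL (the application ID 100 and the message are hypothetical, and depending on your APEX version you may first need to establish the workspace context, e.g. via APEX_UTIL.SET_WORKSPACE):

BEGIN
  -- put the app into maintenance mode before switching versions
  APEX_UTIL.SET_APPLICATION_STATUS(
    p_application_id     => 100,
    p_application_status => 'UNAVAILABLE',
    p_unavailable_value  => 'Down for an upgrade; back in a few minutes.');
END;
/

The same setting can also be changed in the Builder under the application's availability attributes.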
Rename your current version to the permanent ALIAS, and archive the one it replaced. It's better not to overwrite or immediately delete your older versions.
Multiple Versions Co-existing in the same Workspace:
It is equally useful to check in the exported application definition scripts, as they are encoded as UTF-8 plain-text SQL. The benefit is that source-code diffs can identify the differences between versions.
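For example, the export can be produced from SQLcl (a sketch; the apex export command ships with SQLcl, and the application ID 100 is hypothetical):

-- from a SQLcl session connected to the workspace's parsing schema
apex export -applicationid 100 -split

The -split option breaks the export into one file per component, which makes version-control diffs far more readable.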
As long as their access is restricted and their alias is changed to an unlisted value, the older versions serve as a good fallback for any unanticipated issues with the new, current release.
I've just started learning .NET MVC so this may be a silly question, but I've yet to find a good answer.
I'm following the Code First approach using the Entity Framework to build my database for me. I've included the following in my Application_Start() method in order to allow me to edit my database by making changes to my Model objects.
Database.SetInitializer<ContactManagerDB>(new DropCreateDatabaseIfModelChanges<ContactManagerDB>());
I was just wondering: what would happen if I pushed this application to a production environment, later made a few changes to my models, and then updated the application? Would this really drop and recreate the database in the production environment?
What's the best practice for pushing changes to production env. using the Code First approach?
DropCreateDatabaseIfModelChanges should only be used early on in development, never on a production machine. If you pushed to a production machine and made schema changes, you'd lose all your data.
You could delete the EdmMetadata table in your production environment. In that case, EF would not know the current schema to compare to the new one, so it would just assume you know what you are doing and would not touch the database schema.
Code First does not have the ability to upgrade your database while keeping your data intact.
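If you do take Code First to production, a minimal safeguard (a sketch, assuming the ContactManagerDB context from the question) is to disable initialization entirely so EF never touches the schema:

// Passing null disables database initialization for this context,
// so EF will never drop or recreate the production database.
Database.SetInitializer<ContactManagerDB>(null);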