I used the migration utility to migrate schema and data from my dev environment to my test environment. In doing so, the default dev business units were migrated to test. How do I wipe that business unit that got migrated?
I've wiped out every dependency I can think of. The only problem is that the dev default team got migrated as well. I can't change it to the correct business unit and I can't delete it.
You can rename that BU and team to match your expectations. The BU and team record GUIDs need to stay in sync across environments; that's why the data is migrated with the utility rather than creating the records manually, which causes confusion and issues such as workflows not activating after a solution import during deployment.
For example, if your Dev BU and team follow a naming convention different from the Test instance, you can simply rename them.
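If you prefer to script the rename rather than do it through the UI, a minimal sketch against the Dataverse Web API could look like the block below. The org URL, API version, business unit GUID, and token handling are all placeholders/assumptions; also note that a BU's default team normally takes its name from the BU itself, so renaming the business unit record is usually the part worth automating (verify that behaviour in your environment).

import json
import urllib.request

# Assumptions: Dataverse Web API v9.2, a pre-acquired OAuth bearer token,
# and a placeholder org URL / business unit GUID.
ORG_URL = "https://yourorg.crm.dynamics.com/api/data/v9.2"
TOKEN = "<access token>"

def rename_business_unit(bu_id: str, new_name: str) -> None:
    """PATCH the BU's name in place so its GUID stays identical across environments."""
    req = urllib.request.Request(
        f"{ORG_URL}/businessunits({bu_id})",
        data=json.dumps({"name": new_name}).encode(),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
            "OData-MaxVersion": "4.0",
            "OData-Version": "4.0",
        },
        method="PATCH",
    )
    urllib.request.urlopen(req)

# Rename the migrated dev default BU to match the Test naming convention.
rename_business_unit("00000000-0000-0000-0000-000000000000", "Contoso Test")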
I use Azure DevOps to deploy Azure databases using DACPACs.
I can easily deploy schema from Dev to Test and Prod.
However, I have an issue: the Dev databases contain several Dev-only tables that I don't want to deploy to Test and Prod.
Excluding certain tables manually in Visual Studio has resulted in human errors, and some unwanted tables have been deployed to Prod.
Is there a solution for making sure that Dev-only tables are automatically excluded from the DACPAC?
Is it possible to automatically filter tables whose names start with "Temp"?
No, you can't ask it to include only certain object types. You can, however, ask it to exclude certain object types (/p:ExcludeObjectTypes), which lets you filter down to what you want by ruling out everything else. With the DacFx API you can do more targeted/convenient filtering programmatically, but that requires writing code.
You can use SqlPackage.exe to limit the changes by passing the /p:ExcludeObjectTypes argument to specify the object types you don't want to deploy.
Use the following as an example: /p:ExcludeObjectTypes="StoredProcedures;ScalarValuedFunctions;TableValuedFunctions"
The list of possible ExcludeObjectTypes values is here: https://learn.microsoft.com/en-us/dotnet/api/microsoft.sqlserver.dac.objecttype?view=sql-dacfx-150
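To make that concrete, here is a minimal sketch of a pipeline step that drives SqlPackage.exe from Python. Keep in mind that /p:ExcludeObjectTypes excludes whole object types, not individual names, so it won't single out only the "Temp" tables; name-based filtering needs the DacFx route mentioned above. The SqlPackage path, server, database, and credentials are placeholders.

import subprocess

# Placeholder paths and connection details; adjust for your pipeline.
SQLPACKAGE = r"C:\Program Files\Microsoft SQL Server\160\DAC\bin\SqlPackage.exe"
DACPAC = r"artifacts\MyDatabase.dacpac"

# Exclude whole object types from the deployment (not individual tables).
EXCLUDE_TYPES = "StoredProcedures;ScalarValuedFunctions;TableValuedFunctions"

subprocess.run(
    [
        SQLPACKAGE,
        "/Action:Publish",
        f"/SourceFile:{DACPAC}",
        "/TargetServerName:myserver.database.windows.net",
        "/TargetDatabaseName:MyDatabase_Test",
        "/TargetUser:deploy_user",
        "/TargetPassword:<secret from the pipeline>",
        f"/p:ExcludeObjectTypes={EXCLUDE_TYPES}",
    ],
    check=True,  # fail the pipeline step if SqlPackage reports an error
)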
There are duplicate Business Rules in the Dynamics 365 unmanaged solution in the Development (organization) environment.
We are required to export Development as a managed solution (customizations are overwritten, SDK messaging is enabled) and import the solution into another Dynamics 365 environment known as the Test (organization) environment. Both environments require us to log in with our credentials (for example, arommie).
Development has many duplicate business rules that were created and left behind by another team before the project transitioned to us.
Each time I import the solution from Dev into Test, the import rolls back and the error log shows the following:
Error while importing workflow {7cfb65d9-966b-e911-b80c-00505683fbf4} type PortableBusinessLogic name "BUSINESS RULE ROMIE 1".
The import log screen showing the error on the first row
A record with these values already exists. A duplicate record cannot be created. Select one or more unique values and try again..."
I have even tried restoring the Test environment and then importing Development into Test (managed). It deploys the first time without rolling back; the second time we tried deploying, it rolled back.
I even tried deactivating the duplicate business rules. One of the duplicates deactivates, but the others error out and will not let me deactivate them. These duplicate rules also exist in the Production (organization) environment (managed).
Note: BUSINESS RULE ROMIE 1 is a sample name and implies a Business Rule.
Is there a quick fix you can suggest so that I can import the managed solution into Test without it failing and rolling back? The fix should also work when importing into the Production environment.
At work we use Oracle (12c client) to store most of our data and I use SQL Developer to connect to the database environments.
Issue:
We have issues where tables are being modified for one reason or another (someone is too lazy to create a new table, so they add new columns or change data types or lengths). This in turn breaks the table for others who actually use it for its real purpose.
Update:
We have DEV, TST, UAT, and PRD environments. We test and have scripts approved before we promote to PRD. The problem arises in DEV when we want to go back to an existing table to make a change, but that table has already been modified for other reasons.
Question 1:
Is the versioning just for stored procedures or is it possible to track changes to table structures, functions, triggers, sequences, synonyms, etc.?
As Bob Jarvis indicates, you need far more than an answer to this question: you need policies and practices enforced for all developers. Some ideas from places I have worked:
Every developer has a VM with a copy of the database installed. They can do whatever they like on it but must supply scripts to move their changes to production. These scripts are applied on a test instance and again on a QA instance before going to production.
Subversion works on every OS and TortoiseSVN works well on Windows. Committing scripts to a repository works well; this is integrated with SQL Developer and can also be done with Toad.
You have a permissions issue: too many people have the privilege to alter tables. Remove these permissions and centralize them with one or two people. Changes are funnelled through them as scripts, and oversight can be applied there. Developers can have their own schema to test in, or a VM with a copy of the database for development.
Run this query to see who can alter tables:
-- list every grantee that holds ALTER on a table
select grantee, owner, table_name
  from dba_tab_privs
 where privilege = 'ALTER';
The key is a separation of concerns. Developers should have access to a schema where they can do what they need. The company needs to know who did what, when and where.
If you have more than one developer working on multiple changes to a dev environment then you need coordination and communication as well as source control. A weekly meeting to discuss overlap areas or a heads up chat message are just some ways to work together.
The approach I think works best is to have a DEV database where all the developers manage their own set of schemas.
Scripted builds are provided with test data loads to allow any developer to create his own working schema. He then works there, tests his changes, and commits them as scripts to source control. DEV databases do not need to be large; they just need enough test cases to allow for unit tests.
Script all the changes so that they can be checked into a version control system and merged with other changes. The goal is a system where devA checks in changeA and, once it is merged into the main trunk, devB picks up changeA as he builds his schema.
This approach requires care if the main project schema employs PUBLIC synonyms. You will need to consider this as you go forward.
I would also advise that an accompanying back-out script be checked in with each change.
The advantage of this approach is that devs can manage their own schemas. With a scripted approach they don't all need DBA knowledge, and they don't need to manage the database either. Having all of these schemas on one database makes it easier to manage and control resources.
I've used this approach in teams with 50+ developers and it has worked very well.
This approach also paves the way for devs checking scripts in and a deployment package being created automatically (a sketch of that idea follows below).
There is so much that can be done to make the development-test-deploy-backout cycle easier to manage.
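Picking up the deployment-package idea, here is a minimal Python sketch that bundles the checked-in scripts into a package and enforces the back-out rule. The changes/ and backout/ directory layout and the numeric file-name prefixes are assumptions for illustration; adapt them to your own conventions.

import zipfile
from pathlib import Path

CHANGES = Path("changes")   # forward change scripts, e.g. 001_add_customer_index.sql
BACKOUT = Path("backout")   # matching back-out scripts with the same file names

def build_package(target: str = "deploy_package.zip") -> None:
    """Bundle change scripts (in apply order) plus their back-out scripts into one zip."""
    scripts = sorted(CHANGES.glob("*.sql"))  # numeric prefixes give the apply order
    with zipfile.ZipFile(target, "w") as pkg:
        for script in scripts:
            rollback = BACKOUT / script.name
            if not rollback.exists():
                # enforce the "every change ships with a back-out script" rule
                raise FileNotFoundError(f"missing back-out script for {script.name}")
            pkg.write(script, arcname=f"changes/{script.name}")
            pkg.write(rollback, arcname=f"backout/{script.name}")

if __name__ == "__main__":
    build_package()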
I want to understand how people are handling an update to a production app on the Parse.com platform. Here is the scenario that I am not sure about.
Create an app called myApp_DEV. The app contains a database as well as associated cloud code.
Once testing is complete and the app is ready for go-live, I will clone it into myApp_PRD (the production version). Cloning copies the database as well as the cloud code.
So far so good.
Now, three months down the line, I have added some functionality, which includes new cloud code functions as well as new columns in the database tables.
How do I update myApp_PRD with this new database structure? If I try to clone it from my DEV app, it tells me the app already exists.
If I clone a new app (say myApp_PRD2) from DEV, then all the data will be lost since the customer is already live.
Any ideas on how to handle this scenario?
Cloud code supports deploying to production and development environments.
You'll first need to link your production app to your existing cloud code. This can be done from the command line:
parse add production
When you're ready to release, it's a simple matter of:
parse deploy production
See the Parse Documentation for all the details.
As for the schema changes, I guess we just have to manually add all the new columns.
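If adding the columns by hand gets tedious, one option worth checking is the Schema REST API; Parse Server exposes it, and later versions of the hosted REST API did as well, but whether it is available to you is an assumption here, as are the endpoint, application ID, and master key in the sketch below.

import json
import urllib.request

# Assumptions: the Schema REST API is reachable at this endpoint and the
# master key is allowed to modify schemas. All keys below are placeholders.
PARSE_API = "https://api.parse.com/1"
APP_ID = "<myApp_PRD application id>"
MASTER_KEY = "<myApp_PRD master key>"

def add_columns(class_name: str, fields: dict) -> None:
    """Add new columns to an existing class, given a dict of field definitions."""
    req = urllib.request.Request(
        f"{PARSE_API}/schemas/{class_name}",
        data=json.dumps({"className": class_name, "fields": fields}).encode(),
        headers={
            "X-Parse-Application-Id": APP_ID,
            "X-Parse-Master-Key": MASTER_KEY,
            "Content-Type": "application/json",
        },
        method="PUT",  # PUT adds fields to an existing class
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())

# Example: push the new columns added in DEV into the PRD schema.
add_columns("GameScore", {"season": {"type": "String"}})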
My office is growing and I've been tasked with building out the IT for our web development.
What's the best tool/setup for doing web development in a group setting? The requirements are a centralized code repository, a place to test development code, and a way to push tagged code out to a staging server. What I'm thinking is SVN/Redmine for the code repo; each user has an account on a central development machine for SSH access (Eclipse over SSH) and their own virtual host on the dev server, which gives everyone a centralized development sandbox. Code is written and tested on this dev box, then checked back into SVN and later tagged and pushed out to the staging server. Yeah? Thoughts, comments, or recommendations?
*Also, in a dev environment, what is the best way to handle databases? Is it wise to pull from the production database? And should each developer have his/her own DB or work off a master DB?
**We are building a Magento application and also have some custom back-office tools that run on CakePHP.
Although this subject is off-topic on Stack Overflow and has been flagged as such, you need to concentrate on the following areas:
VERSION-CONTROL
Git has all the glory, and you don't need your own box for this: https://bitbucket.org/ offers unlimited data and private/public repos, and you can host your codebase there. http://github.com is also powerful and is the de facto most popular version-control tool out there, although it comes at a small price.
So your master branches live in version control, and your devs will check out from there and commit to it as well.
Your deployment tools will deploy to your live and staging environments from that master.
ENVIRONMENTS
Usually three are used: LIVE, STAGE, and DEV.
LIVE is, well, live, and only approved code gets deployed there.
STAGE is the pre-live environment and should be an exact replica of LIVE, so that everything can be tested there by the merchant.
DEV is nice to have as an exact replica too, but it can just as well live on a developer's local environment; it is meant for loose testing and experimenting.
DATABASES AND DEPLOYMENT
MySQL databases are a pain to sync, so you'd better have a script that syncs from LIVE down to the other environments and prevents syncing from the other environments up to LIVE. This limitation also means that all configuration and content should be added on LIVE only and then synced down the line. Every change to the schema or to a permanent setting should be handled by update scripts (we are talking Magento CE here; Magento EE has migrations built in).
For deployment I also suggest building a Fabric or Capistrano script that resets the dev and staging environments, handles the database reset and pull from the LIVE DB, and imports code from the central repository.
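Here is a rough sketch of that reset-and-sync idea in plain Python (subprocess over SSH rather than Fabric itself, to keep it tool-agnostic). Host names, database names, and paths are placeholders; the points that matter are the direction of the sync (LIVE down to STAGE, never the reverse) and pulling code from the central repository's master branch.

import subprocess

# Placeholder hosts, database names and paths; adapt to your infrastructure.
LIVE_HOST = "live.example.com"
STAGE_HOST = "stage.example.com"
STAGE_DOCROOT = "/var/www/stage"

def sh(cmd: str) -> None:
    """Run a local shell command and fail loudly if it errors."""
    subprocess.run(cmd, shell=True, check=True)

def reset_stage() -> None:
    # 1. Pull a fresh dump from LIVE; data never flows the other way.
    sh(f"ssh {LIVE_HOST} 'mysqldump --single-transaction shop' > live_dump.sql")
    # 2. Load the dump into the STAGE database.
    sh(f"ssh {STAGE_HOST} 'mysql stage_shop' < live_dump.sql")
    # 3. Deploy the approved code from the central repository's master branch.
    sh(f"ssh {STAGE_HOST} 'cd {STAGE_DOCROOT} && git fetch origin && git reset --hard origin/master'")

if __name__ == "__main__":
    reset_stage()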
It's also a good idea to target the following everyday tasks:
The client needs to be able to reset STAGE for their tests.
The project manager, developers, or testers need to test, so spawning a test clone should be a one-click action (take the current DB and code and make them live in some subfolder for that specific test only), as should deleting the test.
Third-party devs might need access to a specific test or dev environment (this is relevant with Magento, as on average there are at least 10 external extensions installed in every Magento store).