How to track changes made in the ERD in Oracle Designer - oracle

I want to track all changes made to my ERD in Oracle Designer 10g.
Please suggest how to achieve this.
Use case: an ERD was approved in the design phase. A few changes were then made by the development team, and now, in the implementation phase, I need to track which changes were made between, say, 3-Jan-2012 and 3-Mar-2012.
To add here:
1. Versioning of the ERD is not used in my project, so an ERD version diff is not an option for me.
2. The Reports section has options for listing all entities and attributes created/modified in a given period, but it does not seem to give me correct results.

Oracle Designer 10g does support the SCM repository, which is the only real way of tracking changes to your models. However, documentation is thin on the ground, probably as a matter of policy since Oracle pulled the plug on Designer as a tool with a future.
Oracle SCM is very powerful but not at all intuitive. That should be no surprise, as we're working with rows of data rather than the files used in most other source control scenarios. Anyway, the upshot is that doing what you want to do won't be easy, and you'll need resilience and persistence. You might be able to find some old lags in the ODTUG Designer forum who still have an interest in the tool.
The other thing is, if you haven't already enabled SCM you won't be able to retrieve the changes made to the ERD over the last three months except by mining the repository data directly.
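If you do end up mining the repository data, one starting point is to query the Designer repository's API views for objects whose audit dates fall inside the window you care about. Below is a minimal sketch in C# using ODP.NET (Oracle.DataAccess), which I'm assuming is available; the view name (CI_ENTITIES) and the audit columns (DATE_UPDATED, UPDATED_BY) are assumptions based on the usual Designer repository API naming, so verify them against your repository owner's data dictionary before relying on the output.

    using System;
    using Oracle.DataAccess.Client;   // ODP.NET

    // Sketch only: lists repository objects changed in a date window.
    // CI_ENTITIES / DATE_UPDATED / UPDATED_BY are assumed names - check the
    // repository owner's schema for the exact API view and column names.
    class ChangedEntitiesReport
    {
        static void Main()
        {
            const string sql =
                "SELECT name, updated_by, date_updated " +
                "FROM ci_entities " +
                "WHERE date_updated BETWEEN :from_date AND :to_date " +
                "ORDER BY date_updated";

            using (var con = new OracleConnection("User Id=repos_owner;Password=secret;Data Source=DESREP"))
            using (var cmd = new OracleCommand(sql, con))
            {
                cmd.BindByName = true;
                cmd.Parameters.Add("from_date", new DateTime(2012, 1, 3));
                cmd.Parameters.Add("to_date", new DateTime(2012, 3, 3));

                con.Open();
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                    {
                        Console.WriteLine("{0,-40} {1,-20} {2:dd-MMM-yyyy}",
                            rdr.GetString(0), rdr.GetString(1), rdr.GetDateTime(2));
                    }
                }
            }
        }
    }

The same pattern should work for attributes, relationships and so on, provided the corresponding API views exist in your repository version.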

Related

StarTeam to TFS 2010 Migration with history

I want to migrate from StarTeam 2005 to TFS 2010 with history. Is there a tool or any way I can do this cost-effectively? I know about the Timely Migration tool, but it is too expensive.
There is no tool out there to do this. You are stuck with paying for Timely Migration or writing it yourself. Capturing history from StarTeam is extremely complex because of what a view looks like historically. You can roll back the view to a point in time, and that alone works very well, but rolling the view back to every point in time where a change happened is practically impossible to do through the API, because:
1) Not everything has an audit record, so you can't rely on the audits.
2) The audit records are purged.
3) There is a special feature to "play back" the history of the view to generate the listener events (it requires MPX), but it misses many events.
4) When items are shared, configured, branched, etc., no audits are generated in the project.
5) Even if they were, getting every single change would require iterating through the view history down to the second and analyzing the differences at each step. That means if your project has been active for a month and each diff of two view configurations takes 5 seconds, actually migrating your project would take about 5 months, and in the meantime it would be locked down.
So the next way to do this would be to establish "baselines" to compare. Using build labels is a good starting point if you have nightly or continuous builds in your project, or even just certain builds that were QA-certified. You can use these baselines as points for diff/compare and bring the history in that way. While this isn't as granular as a full history, it does capture the most important differences to migrate over.
However, keep in mind that even doing it this way does not maintain the links between branch/merge points across the different branches/views. The only way to do this would be to go directly into the StarTeam database to get this information.
I went through all of these steps when trying to write my own suite of tools to migrate from StarTeam to Subversion. It was fun, interesting and imperfect, and it showed some promise, but I ultimately never finished it, partly because the time involved would have been far more than the value I got out of it.
Which brings you inevitably to the most important question: what is the business value of maintaining full history? Having gone through this many times with project teams as a StarTeam administrator, more than 90% of the time it was readily apparent that the better approach is a cut-over. Pick a point where you can begin working on new work in the new system and freeze work in the old system. It can usually be done with very little downtime for the project team. You can even start by bringing over a history of production releases to create a rough timeline in the new system. Use your existing comparison tools, whether in TFS or BeyondCompare or elsewhere, to reproduce each state of your project source code, docs, etc., reconcile it with your TFS project by checking in or deleting files as needed, and label your TFS project for each build you bring over. Line up all your TFS builds, work items, users, roles, etc., and make sure everything is ready. Then, at cut-over time, take the latest development snapshot from StarTeam and do one more update to your TFS project. Lock your StarTeam users out of the project (for check-ins, anyway) and begin working in TFS. Your TFS project will have a rough history of the most significant baselines, and you can keep your StarTeam repository open to users in case more history is needed.
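If you go that route with TFS as the target, one way to script the "reproduce each state, check it in, label it" loop is a small console wrapper around tf.exe. This is a rough sketch under several assumptions: each StarTeam baseline has already been exported to its own folder on disk, the target is a mapped TFS 2010 server workspace, and the TFS Power Tools "tfpt online" command is available to reconcile adds/edits/deletes (check the exact flags against your Power Tools version). Folder names, comments and label names are illustrative.

    using System;
    using System.Diagnostics;
    using System.IO;

    // Rough sketch: one TFS changeset + label per exported StarTeam baseline.
    class BaselineImporter
    {
        const string Workspace = @"C:\work\MyProject";      // a mapped TFS workspace
        const string Baselines = @"C:\starteam-exports";    // one sub-folder per baseline

        static void Main()
        {
            foreach (var baseline in Directory.GetDirectories(Baselines))
            {
                string name = Path.GetFileName(baseline);

                // 1) Make the working copy match the baseline exactly.
                MirrorBaselineIntoWorkspace(baseline);

                // 2) Reconcile adds/edits/deletes against the server (TFS Power Tools),
                //    then check in and label so TFS history gets one changeset per baseline.
                Run("tfpt", "online /adds /deletes /recursive /noprompt \"" + Workspace + "\"");
                Run("tf", "checkin \"" + Workspace + "\" /recursive /noprompt /comment:\"StarTeam baseline " + name + "\"");
                Run("tf", "label \"" + name + "\" \"" + Workspace + "\" /recursive");
            }
        }

        static void MirrorBaselineIntoWorkspace(string baseline)
        {
            // Simplistic mirror: copy every baseline file over the workspace, clearing
            // read-only flags so server-workspace files can be overwritten.
            foreach (var src in Directory.GetFiles(baseline, "*", SearchOption.AllDirectories))
            {
                var dest = Path.Combine(Workspace, src.Substring(baseline.Length + 1));
                Directory.CreateDirectory(Path.GetDirectoryName(dest));
                if (File.Exists(dest))
                    File.SetAttributes(dest, FileAttributes.Normal);
                File.Copy(src, dest, true);
            }
            // Deleting workspace files that no longer exist in the baseline is left out here;
            // "tfpt online /deletes" only detects files already removed from disk.
        }

        static void Run(string exe, string args)
        {
            using (var p = Process.Start(new ProcessStartInfo(exe, args) { UseShellExecute = false }))
            {
                p.WaitForExit();
                if (p.ExitCode != 0)
                    Console.WriteLine("WARNING: {0} exited with code {1}", exe, p.ExitCode);
            }
        }
    }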
One other thing to consider is how to create a permanent archive of your project. If your repository is small enough it is doable, but it gets more time-intensive the larger your project is. First, copy your entire database and vault to a separate instance and get that copy up and running. Then delete every project EXCEPT the ones you want to archive. Run an online purge and make sure it runs to completion; you may need to restart your server and purge several times. When you are done, the copied repository should contain only the files and database records needed by your project, and at that point you can back up its database and vault and keep them indefinitely. You can then remove the archived project from your live StarTeam repository, which reduces its size.
I haven't used StarTeam in over 3 years, but that was a fun ride back in time. I hope you found it useful.

How to keep Sitecore database consistent?

We have 5 environments - Development, UAT, Staging, Live and DR.
With more than 100 content editors, the content in the Live Sitecore database grows quickly.
As a result, almost every fortnight the content tree is out of sync with the Development and UAT environments. When we develop new things we are working against outdated content, and sometimes new functionality breaks the live environment.
Can anyone suggest an ideal way of keeping all the Sitecore databases in sync, apart from creating packages and updating regularly, so that we can follow a proper CI process?
RAZL is not a solution that you should rely on for Continuous Integration; it's merely a database comparison tool.
Setting up proper CI for Sitecore is exactly what I'm doing for my current project and this is what we came up with:
TDS:
If you are willing to spend money, then take a look at TDS (Team Development for Sitecore).
It integrates with Visual Studio and provides you with tools for serialization of Sitecore items which you can then store in your source control.
A build server would then be able to pick up any changes in those serialized files and deploy them to your Test, Staging and even Production environment.
Alternative:
A free alternative to this is to use a combination of three open source modules:
Unicorn (for automatic serialization of your changes to Sitecore items)
Courier (for package generation based on serialized items)
Sitecore Ship (for automated deployment of Sitecore packages)
I'm working with the free alternative myself at the moment and it works great.
Have you come across Razl? It is a Sitecore database comparison tool.
This is what they say about Razl:
Razl allows developers to have a complete side by side comparison between two Sitecore databases; highlighting features that are missing or not up to date. Razl allows you to find that one missing template, move it to the correct database.
It is quite incorrect to call Razl 'merely a database comparison tool' - from the first release, you could copy subtrees from one Sitecore database to another.
The initial drawback was that it could not be automated, but with Razl 3.0 (I think it started with Razl 2.4), Razl scripting was added, so you can easily automate Sitecore database syncing between environments.
To see how others use it, see Sean Holmesby's comments:
https://community.sitecore.net/developers/f/8/t/1767
and Nikola Gotsev's comments:
https://sitecorecorner.com/2014/10/27/the-amazing-world-of-razl-part-1/
It is very inexpensive, and with v3.0 it is much more powerful than the initial release, which required manual manipulation via the GUI.

MVC3, RavenDb, Web Publishing, and Source Control

I am using the embedded version of RavenDB and have put the physical database in the App_Data folder, based on this article: http://msdn.microsoft.com/en-us/magazine/hh547101.aspx. My first question is: which portions of the database need to be committed to the SCM repo?
My second question: my workflow is such that I'll also use web publishing directly from my laptop; are there any concerns with this approach?
Thank you,
Stephen
There's no need to put your database under source control, since your documents have no particular schema; they are created on the fly when your objects are serialized into JSON. So as long as you check in your C# classes, you're fine.
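To make that concrete, here is a minimal sketch using the embedded client of that era (Raven.Client.Embedded); the class, property and folder names are illustrative. The only artefacts that belong in source control are the C# classes - the JSON documents and index files under App_Data are generated at runtime.

    using Raven.Client.Embedded;

    // A plain document class - this is what gets checked in.
    public class Order
    {
        public string Id { get; set; }
        public string Customer { get; set; }
        public decimal Total { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // The physical database under App_Data is created on first use and can be
            // rebuilt at any time, so it can safely be excluded from source control.
            using (var store = new EmbeddableDocumentStore { DataDirectory = "App_Data/RavenDB" })
            {
                store.Initialize();
                using (var session = store.OpenSession())
                {
                    session.Store(new Order { Customer = "ACME", Total = 42m });
                    session.SaveChanges();   // persisted as a JSON document in the data directory
                }
            }
        }
    }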
First, are you aware that RavenDB uses the AGPL license? This license requires that you publish your project as open source if you are not paying for a commercial license.
They do offer free licensing in some cases but you must contact them and get a license. Check their licensing page for more details.
Second, you probably shouldn't check your database into your SCM. Databases change frequently, and SCM is designed for files that are not constantly changing. You might want to check in your database schema as it changes, but not the database itself.
Regarding your second question, I'm not sure what concerns you're talking about. Can you be clearer about what your concerns are?

DB design strategy in Visual Studio

I'm currently investigating ASP.NET MVC 2 and LINQ to SQL. It all looks pretty cool. But I have a few application and development lifecycle issues.
Currently, I design the DB in SQL Server Management Studio.
Then I update my DBML files by deleting and re-importing modified tables.
Issues:
I can't find how to simply update the whole DBML schema.
My DBML then loses some of the changes I made, such as renamed relation members or the mapping of an int column to an enum.
If I want a SQL script to deploy my DB (or to keep the schema under source control), I need to use the 'Generate Script' wizard in SSMS, which would be cool if a) it could remember my settings and b) it could be automated.
Should I work the other way around (start from my DBML and generate the DB)? Should I go for some other framework (NHibernate? Can I use some LINQ flavor with it?)
Also, I read that LINQ to SQL is already obsolete in favor of LINQ to Entities. Does that mean the ultimate tool that is supposed to make my life so much better will again make me lose time in the long term?
Thanks for shedding some light.
If you are starting your DB schema from scratch, you could consider "Code-First Development with Entity Framework 4" as outlined by ScottGu.
I have been using this on a new project and am finding it extremely beneficial - especially for testing.
I started with simple POCO classes representing my data. Then, as the project progressed, I let EF4 generate the schema for a "real" DB from my "in-memory" example data. Now I use a mixture of both: in-memory POCOs (for development and TDD) and an auto-generated DB schema (auto-loaded with more "realistic" data) for demonstrations etc. So far I am very happy.
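For reference, the code-first style looks roughly like this. It is a minimal sketch assuming the EF 4.1-era DbContext API; the Product/ShopContext names are purely illustrative.

    using System.Data.Entity;   // Entity Framework code-first

    // Plain POCO classes are the single source of truth for the model.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Product> Products { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            // By convention the database is created (or recreated) from the classes,
            // so the schema follows the code rather than the other way around.
            Database.SetInitializer(new DropCreateDatabaseIfModelChanges<ShopContext>());

            using (var db = new ShopContext())
            {
                db.Products.Add(new Product { Name = "Widget", Price = 9.99m });
                db.SaveChanges();
            }
        }
    }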
There is a lot of opinion over LINQ2SQL and whether it's 'obsolete' or 'discontinued', but it is still in the .NET Framework and a good tool, so if it suits your needs you should use it. Frankly, the Entity Framework is still not perfect, and if you don't need the extra flexibility it affords, it is not worth the pain. If I had a small to mid-size project I would definitely use LINQ2SQL again (and over EF).
As for your question: yes, you'll lose any names or custom type mappings when you remove and re-add a table. The options I'm aware of are:
Only remove/re-add the table that has changed (not all tables).
Try altering the DBML tables in place, rather than removing and re-adding them. You can add and remove columns, change column names and data types, and add relationships, all on the DBML.
I like JcMalta's suggestion of creating objects as classes before rendering them into the database, but if you find SQL Server Management Studio quick to develop with, it might simply be quickest to create tables there and drop them into your DBML. It's a touch annoying to have to change something in the database and then push the changes into your code, but the code-gen tools are quite good and take away most of the pain.
You can try CodeSmith/PLINQO to auto-sync DB/code:
http://plinqo.com/
As a follow-up, just wanted to say that I eventually found and fell in love with Huagati DBML/EDMX Tools.
To be totally honest, I must say that the price has significantly increased since I purchased it. I believe it is still worth the money anyway.
And for people who are looking for the same kind of tool for MySQL (or other databases), DevArt is your friend.

How do you work on Oracle packages in a collaborative, version-controlled environment?

I'm working in a multi-developer environment in Oracle with a large package. We have a DEV => TST => PRD promotion pattern. Currently, all package edits are made directly in TOAD and then compiled into the DEV package.
We run into two problems:
Concurrent changes need to be promoted on different schedules. For instance, developer A makes a change that needs to be promoted tomorrow, while developer B is concurrently working on a change that won't be promoted for another two weeks. When it comes time to promote, we find ourselves manually commenting out the stuff that isn't being promoted yet and then uncommenting it afterwards... yuck!!!
If two developers are making changes at the same exact time and one of them compiles, it wipes out the other developer's changes. There isn't a nice merge; instead the latest compile wins.
What strategies would you recommend to get around this? We are using TFS for our source control but haven't yet used it for our Oracle packages.
P.S. I've seen this posting, but it doesn't fully answer my question.
The key is to adopt a practice of only deploying code from the source control system. I'm not familiar with TFS, but it must implement the concepts of branches, tags, etc. The question of what to deploy then falls out of the build and release tagging in your source control system.
Additional tips (for Oracle):
It works best if you split the package spec and body into separate files with a consistent file pattern for each (e.g. ".pks" for the package spec and ".pkb" for the package body). If you use an automated build process that can handle file patterns, you can then build all of the specs followed by all of the bodies (see the sketch after these tips). This also minimizes object invalidations if you are only deploying a package body.
Put the time in to configure an automated build process that is driven from a release or build state of your source control system. If you have even a moderate number of database code objects, it will pay to be able to build the code into a reference system and compare it to your QA or production system.
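As a deliberately simplified sketch of such a build step, the console app below runs every *.pks file before any *.pkb file through SQL*Plus, which gives you the spec-before-body ordering described above. It assumes sqlplus is on the PATH, that the script folder and connect string come from the build, and that each script sets WHENEVER SQLERROR EXIT FAILURE and ends with an EXIT so that compilation errors fail the build; adapt the details to your own layout.

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Linq;

    // Deploys package specs (*.pks) before package bodies (*.pkb) so bodies
    // always compile against the latest specs.
    class DeployPackages
    {
        static void Main(string[] args)
        {
            string scriptDir = args.Length > 0 ? args[0] : @".\db\packages";
            string connect   = args.Length > 1 ? args[1] : "scott/tiger@DEV";

            var scripts = Directory.GetFiles(scriptDir, "*.pks").OrderBy(f => f)
                .Concat(Directory.GetFiles(scriptDir, "*.pkb").OrderBy(f => f));

            foreach (var script in scripts)
            {
                Console.WriteLine("Running " + script);

                // -S = silent; each script is expected to contain
                // WHENEVER SQLERROR EXIT FAILURE and a trailing EXIT.
                var psi = new ProcessStartInfo("sqlplus", "-S " + connect + " @\"" + script + "\"")
                {
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                    if (p.ExitCode != 0)
                        throw new Exception("sqlplus failed for " + script);
                }
            }
        }
    }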
See my answer about Tools to work with stored procedures in Oracle, in a team (which I have just retagged).
Bottom line: don't modify procedures directly in TOAD. Store the source as files, keep those files in source control, and modify and execute them from there.
Plus, I would highly recommend that each developer work on their own copy of the database (use Oracle Express, which is free). You can do that if you keep all the scripts needed to create the database in source control. More insight can be found here.
To avoid two developers working on the same package at the same time:
1) Use your version control system as the source of the package code. To work on a package, the developer must first check out the package from version control; nobody else can check the package out until this developer checks it back in.
2) Don't work directly on the package code in Toad or any other IDE. You have no clue whether the code you are working on there is correct or has been modified by one or more other developers. Work on the code in the script you have checked out from version control, and run that into the database to compile the package. My preference is to use a nice text editor (TextPad) and SQL Plus, but you can do this in Toad too.
3) When you have finished, check the script back into version control. Do not copy and paste code out of the database into your script (see point 2 again).
The downside (if it is one) of this controlled approach is that only one developer at a time can work on a package. This shouldn't be a major problem as long as:
You keep packages down to a reasonable size (in terms of WHAT they do, not how many lines of code or number of procedures in them). Don't have one big package that holds all the code.
Developers are encouraged to check out code only when ready to work on it, and to check it back in as soon as they have finished making and testing their changes.
We use Oracle Developer Tools for Visual Studio.NET... it plugs right into TFS.
We do it with a dev database for every stream, and labels for the different streams.
Our Oracle licensing gives us unlimited dev/test instances, but we are an ISV; you may have a different licensing option.
You can use the Oracle Developer Tools for VS, or you can use SQL Developer. SQL Developer integrates with Subversion and CVS, and you can download it for free. See here: http://www.oracle.com/technology/products/database/sql_developer/files/what_is_sqldev.html
We use Toad for Oracle with the TFS MSSCCI provider against TFS 2008. We use a Custom Tool that pulls database checkins from source control and packages them for release.
To my knowledge Oracle Developer Tools for Visual Studio.Net doesn't have any real source control integration with TFS or otherwise.
You might consider Toad Extensions for Visual Studio, though it's not cheap (around $4k, I think).
Another option is the Oracle Change Management Pack, but I believe it requires the Enterprise Edition of Oracle, which is much pricier.
You may be interested in Gitora (www.gitora.com). It helps manage Oracle database objects with Git.
This article about collaborative development with the Oracle database can also be helpful: http://blog.gitora.com/plsql-how-to-develop-two-features-simultaneously-but-deploy-only-one/
Full disclosure: I am the developer and author of the article.
