How to avoid duplication of code using ClearCase UCM components - clearcase-ucm

I thought I understood the concept of components in ClearCase UCM after reading all the posts on here, however, I'm pretty sure I'm missing something.
I created two projects A and B that each contain a component, Comp_A and Comp_B respectively, and they also each contain a shared component Comp_C as well as a rootless component Comp_D to contain the composite baselines.
Everything works fine. However, how does this prevent duplication of code? When making a change to the shared component in Project A, that change needs to be delivered (using an inter-project delivery of a baseline) for it to be visible in Project B, which means that all the files of the shared component that had changes made to them end up being duplicated in both projects' development and integration streams.
How can I set up a component to be shared by two projects without its code becoming duplicated?
Example: when I make a change to a file in shared component C in project A, deliver it to project A's Integration stream, create a baseline, and then deliver that baseline to project B's Development stream, followed by a delivery to project B's Integration stream, and then look at that file's version tree, I see 4 versions of the same file that are all identical. Granted, we're still talking about the same element, but isn't there a way for two projects to share a component (modifiable in only one of them) without this happening?
I think what I'm looking for is for one project to be the "producer" and other projects to be the "consumers" of this component.

A UCM baseline is a way to reference a precise version of a set of files (the UCM components).
If you make a new baseline in project A / component C, that does not influence project B, which keeps referencing its own initial version of component C (possibly with its own changes).
When you deliver a baseline inter-project:
either you end up referencing the exact same versions of the files labelled by that baseline, if component C in project B has not changed: no duplication there;
or you end up merging your changes from project A into the component C of project B, which has new versions of its own. Again, no duplication, simply a report of changes in order to integrate them into another development stream for a given component: there might be merge conflicts that need to be resolved.
when I make a change to a file in shared component C in project A, deliver it to project A's Integration stream, create a baseline, and then deliver that baseline to project B's Development stream, followed by a delivery to project B's Integration stream, and then look at that file's version tree, I see 4 versions of the same file that are all identical.
One way to avoid that is to:
reference C as a non-modifiable UCM component in projects A and B, which means that, in any stream, you can rebase and change its baseline at will, without ever having to deliver C;
change C in a dedicated project C, in which C is modifiable, and in which you create the C baselines (which you can then rebase onto in project A or B; no inter-project delivery needed).
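The producer/consumer setup above can be sketched with cleartool commands. This is a minimal sketch: the PVOB tag, stream names, and baseline name are illustrative assumptions, not taken from the question.

```shell
# In the dedicated project C's integration stream (where Comp_C is modifiable),
# create a baseline once the changes are in:
cleartool mkbl -incremental -component Comp_C@/vobs/pvob C_BL_1.1

# In a stream of project A or B (where Comp_C is non-modifiable),
# pick up that baseline with a rebase -- no inter-project delivery,
# so no extra merge versions appear on the consuming side:
cleartool rebase -baseline C_BL_1.1@/vobs/pvob -stream Dev_A@/vobs/pvob -complete
```

The rebase simply re-selects the labelled versions in the consuming stream's configuration, which is what gives you the producer/consumer split without duplicated versions.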

Related

Order Automated TFS Builds (Mac)

I am using TFS 2015 to perform automated builds of my libraries in a cross-platform environment. I just added support for building my libraries on macOS but I am unable to figure out how to order the builds. Here is my situation:
I have libraries A, B, and C (in separate build definitions). Library B depends on library A, and library C depends on library B. Libraries A, B, and C have a small bit of overlap in terms of shared files in TFS, so when a user checks in files from these overlapped directories, all of the libraries are built, but in random order. I need to be able to build library A first, then library B, then library C.
Any help in configuring this (in a way other than creating a single build definition)?
Unfortunately, there is no such feature to control the order of vNext builds or to move items up/down in the queue.
Builds are queued in no guaranteed order, and there is no configuration setting to control this.
I have created a UserVoice suggestion for this; you can vote it up and monitor it. The TFS product team will review the suggestion.
Order TFS vNext Builds
https://visualstudio.uservoice.com/forums/330519-team-services/suggestions/19960372-order-tfs-vnext-builds
For now you have to use the workaround of combining them into a single build definition.

Cruise Control .NET two project same working directory

In the CCNet wiki, under the project block's workingDirectory entry, I read: "Make sure this folder is unique per project to prevent problems with the build." I want to create two projects that share the same working directory... what are the "problems with the build" that can occur, and how can I overcome them?
Edit:
My situation: I have two applications in the same trunk that share some common code. If I commit to one of the applications, I don't want the other application to build and increase its version number; but if I change the common code, I want both of them to trigger a build. My source control is SVN, and I use a Filtered block to include only the files I want to trigger a build.
Option 1
Have a single project that builds just the common code. It should emit its built assembly to a known location outside its working directory.
Have two other projects that build the other parts of the solution. Each listens only to changes on its own source-control paths, and each can incorporate/reference the built assembly from the known location.
The two other projects can be forced from the common project using a forceBuildPublisher.
These projects should be in the same queue, to prevent the common project rewriting the built common assembly while it is being referenced by the other two projects.
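Option 1 can be sketched in ccnet.config roughly as below. The project names, the shared queue name, and the publish directory are assumptions for illustration, and the source-control blocks are elided:

```xml
<!-- common project: builds the shared assembly, then forces the two apps -->
<project name="Common" queue="Q1">
  <!-- watch only the common code paths via the filtered source control block -->
  <publishers>
    <!-- copy the built assembly to a known location outside the working directory -->
    <buildpublisher>
      <sourceDir>build\out</sourceDir>
      <publishDir>C:\SharedAssemblies</publishDir>
    </buildpublisher>
    <forcebuild><project>AppA</project></forcebuild>
    <forcebuild><project>AppB</project></forcebuild>
  </publishers>
</project>

<!-- each app shares the queue so it never overlaps the Common build -->
<project name="AppA" queue="Q1">
  <!-- listens only to AppA's paths; references C:\SharedAssemblies -->
</project>
<project name="AppB" queue="Q1">
  <!-- listens only to AppB's paths; references C:\SharedAssemblies -->
</project>
```

Putting all three projects on queue Q1 is what serializes them, so the forced app builds only start after Common has finished publishing.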
Option 2
Have 2 individual projects that both build the common source code and the specific code together. Say by building a solution file which contains both sets of projects.
This is the simpler option, but you do lose the neatness of having a 'common assembly version number'.
Pros and Cons
Option 1
You have a single version number for each version of the common code.
It is more prone to issues due to the additional complexity.
You need to maintain a known location outside of the working directory so that it is not deleted/cleaned by the build process.
Option 2
Simpler solution.
The common code's version is lost in the version of the dependent assembly.
If I had to suggest one, I would opt for option 2, purely because its simplicity reduces the chance of other issues.
Working directory is a place where Cruise Control puts the source code of your project to, and this is also where the build process is happening. If you point two projects to the same working directory, you can end up with any kind of conflicts you can guess. Source files of project A and project B can mix up, the build process might break because of unknown state of the build folder, etc.
It's quite natural to separate unrelated things, and in this case it's a matter of common sense. Besides, I can hardly imagine a situation where you would have to put two projects into the same working directory.

Best practice to manage multiple projects with a (Maven) repository and Maven/Gradle?

This is not an exactly Gradle problem. But a more general build question. We have some projects structured as the following:
-- A (by team 1)
   -- build.gradle
   -- settings.gradle
-- B (by team 1,3)
   -- build.gradle
-- C (by team 2)
-- D (by team 3)
They are separate CVS modules. A, B and D are Java projects managed by two teams (teams 1 and 3). C is a C++ project managed by another team (team 2). B uses C via JNI. (The fact that C is C++ is not that important; the key point is that it is developed and managed by another team.) There are two applications: A is the entry point of application 1 (A->B->C), while D is the entry point of application 2 (D->B->C). Development often involves changes at all three levels. All three teams sit together and communicate constantly. In practice, we (team 1) might need some changes in C for application 1; team 2 works on C and gives us a temporary copy; we might need further changes after integration. We go back and forth for several rounds on one problem. Similarly for application 2.
Currently, A, B and D are managed by various Ant scripts, and C by make. We started to explore new tools, in particular Gradle. In our first cut, A includes B as a subproject (in the Gradle sense), and they are always built and published together. We also always use the head of C (compiled from source ourselves on Windows, or grabbed from the latest Jenkins build), and when we are happy with the build, we tag all three projects. We recently adopted an internal Artifactory repository and are thinking about how to manage dependencies and versioning. Once we have finished, we will introduce it to team 3 for module D as well.
We can try to include C as a subproject of A and then always build all three from scratch; similarly, include C and B as subprojects of D. The application name can be part of the version name, for example.
Alternatively, we can always depend on a fixed version of C in the repository.
With option 2, we cannot depend entirely on the head/snapshot of C, because it might contain ongoing development and be unstable. But we need their changes so frequently that it seems impractical to publish a new version for every change to C.
We are wondering what the best practice is for such a setup. A quick internet search does not yield many results. Can anyone give us some advice, or point us to some books or documents?
Thank you so much.
From the whole description, it seems that during development (and, I suppose, in production as well) all three projects are really tightly coupled. So I would use the first scenario, for convenience of work. I would also build all the projects together, tag them together, and keep the same versioning pattern: in general, even if it is split up, this is a single project.
The first scenario can be carried on as long as no third-party (external) application/library/team uses any of the projects (I suppose only project C could be affected). After that, backward-compatibility issues, pull requests, and the like may happen, and the mentioned projects should be separated. If there is no chance of such a situation taking place, you shouldn't worry too much. But remember about good (semantic) versioning, and tag the repository so you are always sure which version is deployed to which environment.
UPDATE
After question update.
If you have dependency paths like A->B->C and D->B->C, I would reorganize the project structure. B should become a subproject of both A and D, or perhaps not a subproject but an external library that is added to those projects. B and C should form a separate project, with B dependent on C.
All three projects (A, D, and B with C) should be versioned separately (especially B with C), because this is the common part for its clients (A and D).
Now, for convenient development, you can add snapshot libraries to A and D that are built often on the CI server and then uploaded to Artifactory. This allows you to introduce changes to B and have them visible quickly in projects A and D. But stable releases of B (and hence C) should be maintained totally separately.
I hope I understood the problem well and have helped a bit.
P.S. Please also consider using gitflow, a branching model. Then you can use SNAPSHOT versions on the dev branch and a stable version of B in releases.
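The snapshot-versus-release split described above can be sketched in application A's build script (and similarly D's). This is a sketch only: the group/artifact coordinates and the Artifactory URLs are hypothetical, and the era-appropriate `compile` configuration is used.

```groovy
// build.gradle of application A
repositories {
    maven { url 'http://artifactory.example.com/libs-snapshot-local' }
    maven { url 'http://artifactory.example.com/libs-release-local' }
}

dependencies {
    // during development: track team 3's latest CI build of B
    // (which bundles C's native bits), republished often by the CI server
    compile 'com.example:B:1.2-SNAPSHOT'

    // for a stable release of A, pin an exact released version instead:
    // compile 'com.example:B:1.2.3'
}
```

Switching the dependency from the SNAPSHOT to a fixed version at release time is what keeps the stable releases of B (and C) maintained separately from day-to-day development.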

How to manage multiple versions of binary dependencies in TFS 2012?

I'm managing the release process for a couple of projects that target an external API. A typical scenario is that a single solution targets a particular version, say v1, of a 3rd-party runtime in production, and a newer version (v2) in the development phase. I have to maintain the v1 dependencies for production support, but also v2 for a DEV branch. These scenarios may get even more complex depending on the rollout plan.
I tried branching + NuGet, but the problem is that the API I use is huge, and it is hard to define the scope of a NuGet package. Putting everything into one package makes no sense for smaller projects; on the other hand, depending on which features we integrate, the combination of DLLs may vary a lot, and they are not cleanly separated into closed concerns.
On top of that, we usually have multiple solutions that use those APIs.
I was thinking about building an API version repository in TFS, in some form like:
- myAPI
|- v1
|- v2
|- v3
Is there a way to configure a build process to look in a server location for the referenced DLL files, depending on the build setup? I can obviously maintain multiple builds in the system, but I don't know how to provide the referenced-files location for each individual build.
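One way to wire this up, sketched below, is a conditional HintPath in each .csproj keyed on an MSBuild property, so each build definition just passes a different `/p:ApiVersion=...` MSBuild argument. All names and paths here are hypothetical:

```xml
<PropertyGroup>
  <!-- default to the production API version; a build definition can
       override this with the MSBuild argument /p:ApiVersion=v2 -->
  <ApiVersion Condition="'$(ApiVersion)' == ''">v1</ApiVersion>
</PropertyGroup>
<ItemGroup>
  <Reference Include="MyApi">
    <HintPath>$(SolutionDir)myAPI\$(ApiVersion)\MyApi.dll</HintPath>
  </Reference>
</ItemGroup>
```

The v1/v2/v3 folders would be checked into TFS alongside the solutions, so the workspace mapping of each build brings down every version and the property picks which one is referenced.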

Good Directory Layout for .NET Projects with libraries used across applications and using Mercurial

I've been using Mercurial for a bunch of standalone projects. But now I'm looking at converting a Subversion repository to Mercurial that's a lot busier / more complicated.
There are about 40 library projects and about 20 applications (various web / console / WPF, etc.), and various apps make use of various libs. All of this is structured under one trunk in Subversion, so there's a directory where all the libs live and a directory where all the apps live. It is very easy to find and reference the libs when creating a new Visual Studio project.
simplified....
--trunk-|-- libs
        |-- apps
Now, moving to Mercurial, this is less ideal. It seems the way to handle this is with one repository for each app, and subrepositories for each lib you want to use?
--app repository-|-- libs
                 |-- app
Is this right?
If so, when starting a new application in Visual Studio and you want to add various libs, what's the best/most efficient way to go about it?
I get the feeling the initial setup is a bit painful, as opposed to the Subversion layout, where effectively you don't have to do anything other than reference the library in your Visual Studio project.
So, hence this question, wanting to know a good directory structure, and how to quickly setup a new project using this structure.
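For reference, the subrepository mechanism this question alludes to is driven by an .hgsub file in the parent repository. A minimal sketch with hypothetical paths and URLs (note that the answer below argues for keeping related projects in one plain repository instead):

```shell
# inside the application's repository
cd myapp-repo

# map a working-directory path to the library's repository
echo "libs/CoreLib = https://hg.example.com/CoreLib" >> .hgsub

# clone the library into place and record the mapping
hg clone https://hg.example.com/CoreLib libs/CoreLib
hg add .hgsub
hg commit -m "Add CoreLib as a subrepository"
# the commit also writes .hgsubstate, pinning the exact lib revision;
# later clones of myapp-repo pull that pinned revision automatically
```

This is what makes the initial setup heavier than the Subversion layout: each new app repo has to declare and clone every lib it uses.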
Ideally (and this is based on my own opinion and experience working with larger, distinct applications that have dependencies), you want to have a repository per distinct, unrelated project, and keep related, possibly dependent projects within the same repo. I'm not a big fan of subrepositories, but that might just be due to lack of exposure.
The reason is that you want to version related projects together, since changing one may affect the other. In reality, anything that can be pulled into a single solution and use project references you definitely want to keep together.
Now, there are some exceptions, where you may have a library project that can't necessarily be part of a solution but is a reference for a set of projects. This is where I'd keep a lib folder versioned alongside the rest of my applications in the same repo, with the lib folder holding pre-built assemblies. It can hold 3rd-party vendor assemblies as well. It is important that this is versioned along with the project that uses it, as you can then treat a library update for the main project as a minor release.
For other projects that are truly independent, create another repository for it, as it will have its own version life and you do not want changes to it to affect the graph of changes for your other, completely unrelated projects.
Example layout with several related projects and lib folder:
[-] Big Product Repo
--[-] Big Product 1
----[+] Dal
----[+] Services
----[-] Web
------[+] Controllers
------[+] Models
------[+] Views
--[+] Big Product 2
--[-] lib
----[+] iTextSharp
----[+] nHibernate
Example layout with another unrelated project in it (for sake of argument, a Windows services project):
[-] Small Product Repo
--[-] Windows Services
----[+] Emailer
----[+] Task Runner
In reality, though, your folder structure isn't as important as making sure projects that are being treated as one logical unit (a product) are kept together to ensure control over what is built and released. That is my definition of what a repository should contain and what I use to think about how to split things up if there's more than one versionable product.
