Should you build components every time you build a main app - VB6

We have started using FinalBuilder to create builds for our VB6 and .NET projects. We are also using Visual SourceSafe to manage our source. Some of our VB6 EXEs are dependent on certain OCXs, such that a particular VB6 EXE may require a particular version of an OCX.
The question is: should the FinalBuilder script for our EXE project also re-build the OCX project, or is it better to simply pull a particular version of the already-compiled OCX? My concern is that other developers could have broken the build (or introduced a bug) in the OCX, which could then break the EXE we are trying to build. Moreover, re-building the OCX project would produce the same version of the OCX but with a different date, resulting in confusion if DLL-hell (OCX-hell) issues arise.

There is no difference, in terms of building and maintaining your app, between an OCX and an ActiveX DLL. The OCX should use binary compatibility and be part of your compile process.
This is, however, only a general rule. You may have some components that rarely change, if ever. In my own VB6 application I have a handful of components that reside at the bottommost level of my reference hierarchy and rarely get updated. They get updated once or twice a year at best; some haven't been updated for several years now.
However, based on your description it sounds like the controls are still being modified, so I doubt the second case applies.
In the end use your best judgment.

There are two reasons to use OCXs/DLLs: code reusability, or fragmentation of an over-large project.
Those meant for re-use would be absurd to build, build, and rebuild, and almost never should be customized to fit a new application. These are your crown jewels, and most people should have no ability to modify the source. They are the domain of your organization's "library writers" because that's what they are: libraries.
If you simply have large, monolithic, unwieldy applications, you may have to go the other route. Then OCXs and DLLs simply become an awkward extension of the "module" concept. This is why we have Project Groups.
Your library users should not be fiddling with libraries though. I'm sure they all fancy themselves able to "ensure they are up to date and performant" but that's a different debate entirely.

Can I prevent VB6 OCX controls generating OCA registry entries on compilation?

We have a project here written in VB6 (yes, I know...) which consumes some ActiveX objects also written in VB6 (OCX files).
Recently we started using a build server for this project and, with the help of the MSBuild Extension Pack we devised a build process which compiles the libraries, controls and executable all in the correct order, registers them as appropriate and then unregisters everything to leave a clean machine for the next run.
The problem is that each compilation run leaves entries in the registry for the extended type library/object cache files which the VB6 compiler appears to require, and these entries are not removed when the controls are unregistered.
i.e. for every registered component which has a CLSID entry pointing to the OCX, there exists, after we compile the project with VB6, a newly generated CLSID for the OCA.
Because we unregister the OCX controls after each build and the OCA CLSIDs are generated afresh each time, the number of OCA entries continues to grow.
Does anyone know what causes these entries, whether we can remove them through some kind of unregistration process or whether we can prevent them being created in the first place?
A few notes:
The build machine is Windows 7
Binary compatibility is enabled, so the CLSIDs of the OCX controls aren't changing.
Reg-Free COM isn't currently an option
We cannot simply register the controls and leave them as we want to run more than one build of this project.
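For anyone attacking this with a cleanup script: the brute-force route is to scan the registry between builds for entries whose paths point at .oca files and delete them. Here is a minimal C# sketch of that idea. It assumes the stray entries live under HKCR\TypeLib with a win32 path naming the .oca file; on some machines they may instead sit under the WOW6432Node view or under HKCR\CLSID, so treat this as a starting point rather than a verified fix, run it elevated, and back up the registry first.

    using System;
    using Microsoft.Win32;

    class OcaRegistryCleanup
    {
        static void Main()
        {
            // On 64-bit machines the VB6 entries may live in the 32-bit view;
            // RegistryKey.OpenBaseKey(RegistryHive.ClassesRoot, RegistryView.Registry32)
            // would be needed in that case.
            using (RegistryKey typeLib = Registry.ClassesRoot.OpenSubKey("TypeLib", true))
            {
                if (typeLib == null) return;

                foreach (string libId in typeLib.GetSubKeyNames())
                {
                    if (ReferencesOcaFile(typeLib, libId))
                    {
                        Console.WriteLine(@"Deleting HKCR\TypeLib\" + libId);
                        typeLib.DeleteSubKeyTree(libId);
                    }
                }
            }
        }

        // True if any <version>\0\win32 default value under this type library
        // points at a .oca cache file.
        static bool ReferencesOcaFile(RegistryKey typeLib, string libId)
        {
            using (RegistryKey lib = typeLib.OpenSubKey(libId))
            {
                if (lib == null) return false;
                foreach (string version in lib.GetSubKeyNames())
                {
                    using (RegistryKey win32 = lib.OpenSubKey(version + @"\0\win32"))
                    {
                        string path = win32 == null ? null : win32.GetValue(null) as string;
                        if (path != null && path.EndsWith(".oca", StringComparison.OrdinalIgnoreCase))
                            return true;
                    }
                }
            }
            return false;
        }
    }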
The key is to stop trying to treat DLLs (including OCXs) like statically linked libraries. Your build process is inappropriate.
While VB6 supported a crude "project group" capability, this was only meant for small-scale, unstructured development efforts like those of under-the-radar business-unit coders.
Instead, strive to keep application-centric logic within your EXE projects. Factor out longer-term reusable logic into DLL and OCX projects that you treat as separate, unique internal products. Maintain these (including source control) completely separately from consuming applications.
Normally these should be far more stable than application code, where changes are driven more frequently by the business logic it contains.
Even when some of these must contain business logic, you still want to treat them as separate software entities. Try to keep them distinct from libraries that have more stability.
This requires more discipline than monolithic "Tower of Babel" development, but there are many rewards. Build time is quicker, for one thing. And as a bonus, almost all of these source-control-related issues disappear for the applications, which have a much higher rate of tinkering and fiddling.

Using my own class library - C#

I'm developing two projects at once - a class library with classes for things I commonly want in my applications, and an application that uses it. Since I want the library to be easily re-usable by other applications (and virtually stand-alone, even if it wouldn't actually do anything on its own), I have placed the library and application in separate solutions. However, although the dependence is one-way, they grow together.
I usually work on these projects with multiple instances of Visual Studio open: one for the library, one for the application, and sometimes one for a scrap project where I fool around to try new things.
I'd like to have it so that if I first build the library (perhaps requiring a "Release" switch) and then build the application, the latest changes from the library are available in the DLLs imported by the app.
What is a good way to set this up? Can it be done with e.g. NuGet - and if so, how?
(If it matters, I'm currently using the default settings for everything; basically two solutions created with "file->new". I'd like to change as little as possible of that, to keep the barrier to importing the library into my next application low.)
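One low-tech way to get that behaviour, short of a shared NuGet feed, is a small sync step that copies the library's latest Release output into a folder the application references by file path. A sketch follows; all paths are made up, and the same copy could be wired up as a post-build event on the library project instead of a standalone tool.

    using System;
    using System.IO;

    class SyncLibrary
    {
        static void Main()
        {
            // Hypothetical locations: the library's Release output, and a Libs
            // folder that the application's file references point at.
            string source = @"C:\dev\MyLibrary\MyLibrary\bin\Release";
            string target = @"C:\dev\MyApp\Libs";

            Directory.CreateDirectory(target);

            // Copy dll/pdb/xml so the app also gets debug info and IntelliSense docs.
            foreach (string file in Directory.GetFiles(source))
            {
                string destination = Path.Combine(target, Path.GetFileName(file));
                File.Copy(file, destination, true);
                Console.WriteLine("Updated " + destination);
            }
        }
    }

A local NuGet feed (a plain folder added as a package source) achieves the same thing with proper versioning, at the cost of packaging the library on every build.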

Including MS C++ runtime in VS2005 generated MSI

I've got a project that depends on a particular version of MSVCR80.dll (the MS Visual C Runtime) and I'm running into problems where, depending on the particular system configuration, my app doesn't always get the right version of that file. It's been a bit of a crap shoot as to what path it takes to find a file with that name, and it's not always right...
Is there a way, when creating a Deployment Project in VS2005, to ensure that my app will always use the runtime that I provided? When I add the runtime file to the project, it asks about creating a merge module, but I'm not really sure what that does. And regardless of creating one, the issue remains.
Martin Richter wrote an article about that on CodeProject:
Create projects easily with private MFC, ATL and CRT assemblies
This solution does not rely on your MSI packages but on the application that uses the CRT files.
I am not sure if it is your application after installation that doesn't work, or a DLL you use as part of the installation that doesn't work.
To make a very long story very short: new versions of the C/C++ runtimes are installed as Win32 assemblies, or side-by-side installations. This means the files go into folders under C:\Windows\winsxs (the Win32 equivalent of the GAC), and several versions of the same file can co-exist there.
Applications compiled with Visual Studio 2005/2008 embed a manifest file in the binary, and this manifest specifies which side-by-side runtime version to bind to. It doesn't matter if you put MSVCR80.dll next to your EXE or even in System32; the manifest embedded in the EXE will load the file from C:\Windows\winsxs.
This is all "full circle". In the old days runtimes went to System32. This caused the original DLL hell: applications overwriting each other's global runtime files. To remedy this, the idea was to "isolate changes" to each application. Hence the new approach was to isolate a local copy of the runtime file next to the EXE. Now this caused an entirely new problem: how do you make sure security updates for the isolated DLL are deployed? In most cases this never happened, and you had lots of applications running with local, unsafe DLLs. So what to do? The decision was to introduce the second coming of DLL hell: the side-by-side assembly approach. In this approach runtimes are not local, but global, with the critical difference of supporting side-by-side installations. This way, in theory, applications can function without overwriting each other's runtime DLLs.
So that was the quick summary of "how to make runtime deployment complicated". I am not positive it is still possible to do, but did you check whether you can statically link to the runtime? Sometimes old-school really is easier...

Developing in VB 6.0

We have multiple projects in VB 6.0. Most of these projects are ActiveX DLLs. When developing, projects take a '.dll' reference to other projects, but this does not allow us to debug. So for this we have to take a reference to the '.vbp' project instead. However, taking a project reference means we are asked for binary compatibility.
During development, should we use project compatibility and build projects into DLL's for deployment?
It's fine to reference the .vbp during development; just make sure you keep binary compatibility on. If you do not, you'll make a nice mess of the registry and deployment will be a disaster. Keep in mind, however, that even with binary compatibility on, every time you change the public interface of the DLL you're creating a forward reference in the OLE entry in the registry.
We have four levels of DLLs in the CAD/CAM software we use for my company's cutting machines. We handled this by making a compatibility directory that has the PREVIOUS version's DLLs in it. With this we can continue to use binary compatibility.
The process looks like this:
1) Compatibility has Revision 119 DLLs in it.
2) We compile down Revision 120 and release it.
3) Copy the Revision 120 DLLs to the compatibility directory.
4) Develop.
5) Test.
6) We compile down Revision 121 and release it.
7) Copy the Revision 121 DLLs to the compatibility directory.
8) [repeat]
The main problem you need to watch out for is changes to the lowest level of DLLs you use. Visual Basic 6 uses a #include statement in generating its internal type libraries, and changing the lowest-level DLL can get VB6 confused over whether the higher-level DLLs are still binary compatible or not. Note you can see this by using the OLE View tool that comes with Visual Studio 6.
The solution to this problem is to compile the low-level DLL and immediately put it into the compatibility directory. The resulting internal type libraries for the higher-level DLLs will then properly detect whether you are binary compatible or not.
Remember, binary compatible means all you can do is add a method or property. You can't change an existing method's name or argument list (its signature, in COM terms).
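The copy steps in the workflow above are mechanical enough to script. As a rough illustration (the paths and revision naming are placeholders, not the actual setup described):

    using System;
    using System.IO;

    class PromoteToCompatibility
    {
        static void Main(string[] args)
        {
            // e.g. PromoteToCompatibility.exe C:\Releases\Rev121
            string releaseDir = args[0];
            string compatDir = @"C:\Build\Compatibility";

            Directory.CreateDirectory(compatDir);

            foreach (string dll in Directory.GetFiles(releaseDir, "*.dll"))
            {
                // The compatibility directory now holds the just-released DLLs,
                // so the next development cycle stays binary compatible with them.
                string target = Path.Combine(compatDir, Path.GetFileName(dll));
                File.Copy(dll, target, true);
                Console.WriteLine("Compatibility reference is now " + target);
            }
        }
    }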
You should be able to debug while referencing a DLL. Did you start the projects in the right order? Alternatively, you can add all or some of the DLL projects to the same "Project Group" (*.vbg).

Structuring projects & dependencies of large winforms applications in C#

UPDATE:
This is one of my most-visited questions, and yet I still haven't really found a satisfactory solution for my project. One idea I read in an answer to another question is to create a tool which can build solutions 'on the fly' for projects that you pick from a list (see the sketch at the end of this question). I have yet to try that, though.
How do you structure a very large application?
Multiple smallish projects/assemblies in one big solution?
A few big projects?
One solution per project?
And how do you manage dependencies in the case where you don't have one solution?
Note: I'm looking for advice based on experience, not answers you found on Google (I can do that myself).
I'm currently working on an application which has upward of 80 DLLs, each in its own solution. Managing the dependencies is almost a full-time job. There is a custom in-house 'source control' with added functionality for copying dependency DLLs all over the place. It seems like a suboptimal solution to me, but is there a better way? Working on a solution with 80 projects would be pretty rough in practice, I fear.
(Context: winforms, not web)
EDIT: (If you think this is a different question, leave me a comment)
It seems to me that there are interdependencies between:
Project/Solution structure for an application
Folder/File structure
Branch structure for source control (if you use branching)
But I have great difficulty separating these out to consider them individually, if that is even possible.
I have asked another related question here.
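To make the 'solutions on the fly' idea from the update concrete: a .sln file is just text, so a small tool can stitch one together from whatever .csproj files you pick. A sketch follows; the project-type GUID is the well-known one for C# projects, the format-version header targets the VS2008 era, and everything else is illustrative (Visual Studio fills in the Global sections when it saves).

    using System;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    class SolutionGenerator
    {
        // Well-known project-type GUID for C# projects in .sln files.
        const string CSharpProjectType = "{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}";

        static void Main(string[] args)
        {
            // Usage: SolutionGenerator.exe Foo\Foo.csproj Bar\Bar.csproj ...
            using (StreamWriter sln = new StreamWriter("OnTheFly.sln"))
            {
                sln.WriteLine("Microsoft Visual Studio Solution File, Format Version 10.00");
                sln.WriteLine("# Visual Studio 2008");

                foreach (string projectPath in args)
                {
                    string name = Path.GetFileNameWithoutExtension(projectPath);
                    sln.WriteLine("Project(\"" + CSharpProjectType + "\") = \"" + name +
                                  "\", \"" + projectPath + "\", \"" + ReadProjectGuid(projectPath) + "\"");
                    sln.WriteLine("EndProject");
                }
            }
        }

        // Reuse each project's own <ProjectGuid> so project-to-project
        // references inside the generated solution still line up.
        static string ReadProjectGuid(string projectPath)
        {
            XNamespace ns = "http://schemas.microsoft.com/developer/msbuild/2003";
            return (string)XDocument.Load(projectPath).Descendants(ns + "ProjectGuid").First();
        }
    }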
Source Control
We have 20 or 30 projects being built into 4 or 5 discrete solutions. We are using Subversion for SCM.
1) We have one tree in SVN containing all the projects organised logically by namespace and project name. There is a .sln at the root that will build them all, but that is not a requirement.
2) For each actual solution we have a new trunks folder in SVN with svn:externals references to all the required projects, so that they get updated from their locations under the main tree.
3) In each solution is the .sln file plus a few other required files, plus any code that is unique to that solution and not shared across solutions.
Having many smaller projects is a bit of a pain at times (for example, the TortoiseSVN update messages get messy with all those external links) but has the huge advantage that dependencies are not allowed to be circular, so our UI projects depend on the BO projects but the BO projects cannot reference the UI (and nor should they!).
Architecture
We have completely switched over to using MS SCSF and CAB enterprise pattern to manage the way our various projects combine and interact in a Win Forms interface. I am unsure if you have the same problems (multiple modules need to share space in a common forms environment) but if you do then this may well bring some sanity and convention to how you architect and assemble your solutions.
I mention that because SCSF tends to merge BO and UI type functions into the same module, whereas previously we maintained a strict 3 level policy:
FW - Framework code. Code whose function relates to software concerns.
BO - Business Objects. Code whose function relates to problem domain concerns.
UI - Code which relates to the UI.
In that scenario dependencies are strictly UI -> BO -> FW.
We have found that we can maintain that structure even while using SCSF generated modules so all is good in the world :-)
To manage dependencies, whatever the number of assemblies/namespaces/projects you have, you can take a look at the tool NDepend.
Personally, I favor a few large projects, within one or several solutions if needed. I wrote about my motivations for doing so here: Benefit from the C# and VB.NET compilers perf
I think it's quite important that you have a solution that contains all your 80 projects, even if most developers use other solutions most of the time. In my experience, I tend to work with one large solution, but to avoid the pain of rebuilding all the projects each time I hit F5, I go to Solution Explorer, right-click on the projects I'm not interested in right now, and do "Unload Project". That way, the project stays in the solution but it doesn't cost me anything.
Having said that, 80 is a large number. Depending on how well those 80 break down into discrete subsystems, I might also create other solution files that each contain a meaningful subset. That would save me the effort of lots of right-click/Unload operations. Nevertheless, the fact that you'd have one big solution means there's always a definitive view of their inter-dependencies.
In all the source control systems that I've worked with, their VS integration chooses to put the .sln file in source control, and many don't work properly unless that .sln file is in source control. I find that intriguing, since the .sln file used to be considered a personal thing, rather than a project-wide thing. I think the only kind of .sln file that definitely merits source control is the "one-big-solution" that contains all projects. You can use it for automated builds, for example. As I said, individuals might create their own solutions for convenience, and I'm not against those going into source control, but they're more meaningful to individuals than to the project.
I think the best solution is to break it into smaller solutions. At the company I currently work for, we have the same problem: 80+ projects in one solution. What we have done is split it into several smaller solutions with projects that belong together. Dependent DLLs from other projects are built and linked into the project and checked into the source control system together with the project. This uses more disk space, but disk is cheap. Doing it this way, we can stay with version 1 of a project until upgrading to version 1.5 is absolutely necessary. You still have the job of adding DLLs when deciding to upgrade to another version of a DLL, though. There is a project on Google Code called TreeFrog that shows how to structure the solution and development tree. It doesn't contain much documentation yet, but I guess you can get an idea of how to do it by looking at the structure.
A method that I've seen work well is having one big solution which contains all the projects, allowing a project-wide build to be tested (no one really used this to build on, though, as it was too big), and then having smaller solutions for developers to use which had various related projects grouped together.
These did have dependencies on other projects but, unless the interfaces changed or they needed to update the version of the DLL they were using, they could continue to use the smaller solutions without worrying about everything else.
Thus they could check in projects while they were working on them, and then pin them (after changing the version number) when other users should start using them.
Finally, once or twice a week (or even more frequently) the entire solution was rebuilt using pinned code only, thus checking that the integration was working correctly and giving testers a good build to test against.
We often found that huge sections of code didn't change frequently, so it was pointless loading it all the time when working on the smaller projects.
Another advantage of this approach is that in certain cases we had pieces of functionality which took months to complete; using the above approach meant this work could continue without interrupting other streams of work.
I guess one key criterion for this is not having lots of cross-dependencies all over your solutions. If you do, this approach might not be appropriate; if, however, the dependencies are more limited, then this might be the way to go.
For a couple of systems I've worked on we had different solutions for different components. Each solution had a common Output folder (with Debug and Release sub-folders)
We used project references within a solution and file references between them. Each project used Reference Paths to locate the assemblies from other solutions. We had to manually edit the .csproj.user files to add a $(Configuration) msbuild variable to the reference paths as VS insists on validating the path.
For builds outside of VS I've written msbuild scripts that recursively identify project dependencies, fetch them from subversion and build them.
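In spirit, that dependency walk looks something like the following C# sketch: read each project's ProjectReference entries, recurse depth-first so dependencies build first, then shell out to msbuild. The svn fetch step is elided, msbuild is assumed to be on the PATH, and the real scripts are msbuild rather than C#, so take this as an outline only.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Xml.Linq;

    class DependencyBuilder
    {
        static readonly XNamespace Ns = "http://schemas.microsoft.com/developer/msbuild/2003";
        static readonly HashSet<string> Built = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

        static void Main(string[] args)
        {
            BuildWithDependencies(Path.GetFullPath(args[0]));
        }

        static void BuildWithDependencies(string projectPath)
        {
            if (!Built.Add(projectPath)) return; // already built (or in progress)

            XDocument project = XDocument.Load(projectPath);
            foreach (XElement reference in project.Descendants(Ns + "ProjectReference"))
            {
                string relative = (string)reference.Attribute("Include");
                string dependency = Path.GetFullPath(
                    Path.Combine(Path.GetDirectoryName(projectPath), relative));
                // Depth-first: a real script would "svn update" the dependency here.
                BuildWithDependencies(dependency);
            }

            using (Process msbuild = Process.Start("msbuild",
                "\"" + projectPath + "\" /p:Configuration=Release"))
            {
                msbuild.WaitForExit();
                if (msbuild.ExitCode != 0)
                    throw new Exception("Build failed: " + projectPath);
            }
        }
    }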
I gave up on project references (although your macros sound wonderful) for the following reasons:
It wasn't easy to switch between different solutions where sometimes dependency projects existed and sometimes didn't.
Needed to be able to open the project by itself, build it, and deploy it independently from other projects. If built with project references, this sometimes caused issues with deployment, because a project reference caused it to look for a specific version or higher, or something like that. It limited the mix-and-match ability to swap different versions of dependencies in and out.
Also, I had projects targeting different .NET Framework versions, so a true project reference wasn't always happening anyway.
(FYI, everything I have done is for VB.NET, so not sure if any subtle difference in behavior for C#)
So, I:
I build against any project that is open in the solution, and those that aren't, from a global folder, like C:\GlobalAssemblies
My continuous integration server keeps this up to date on a network share, and I have a batch file to sync anything new to my local folder.
I have another local folder, like C:\GlobalAssembliesDebug, where each project has a post-build step that copies its bin folder's contents to this debug folder, only when in DEBUG mode. (See the sketch at the end of this answer.)
Each project has these two global folders added to its reference paths (first C:\GlobalAssembliesDebug, then C:\GlobalAssemblies). I have to manually add these reference paths to the .vbproj files, because Visual Studio's UI adds them to the .vbproj.user file instead.
I have a pre-build step that, if in RELEASE mode, deletes the contents from C:\GlobalAssembliesDebug.
In any project that is the host project, if there are non-DLL files that I need to copy (text files output to other projects' bin folders that I need), then I put a pre-build step on that project to copy them into the host project.
I have to manually specify the project dependencies in the solution properties, to get them to build in the correct order.
So, what this does is:
Allows me to use projects in any solution without messing around with project references.
Visual Studio still lets me step into dependency projects that are open in the solution.
In DEBUG mode, it builds against open loaded projects. So, first it looks to the C:\GlobalAssembliesDebug, then if not there, to C:\GlobalAssemblies
In RELEASE mode, since it deletes everything from C:\GlobalAssembliesDebug, it only looks to C:\GlobalAssemblies. The reason I want this is so that released builds aren't built against anything that was temporarily changed in my solution.
It is easy to load and unload projects without much effort.
Of course, it isn't perfect. The debugging experience is not as nice as a project reference. (Can't do things like "go to definition" and have it work right), and some other little quirky things.
Anyways, that's where I am on my attempt to make things work for the best for us.
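For illustration, the two build-event helpers described above could be folded into one small tool: "copy-debug" is the DEBUG-only post-build step that mirrors a project's output into the override folder, and "clean-debug" is the RELEASE-only pre-build step that empties it. The folder names follow the answer; the tool itself is hypothetical.

    using System;
    using System.IO;

    class GlobalAssemblies
    {
        const string DebugFolder = @"C:\GlobalAssembliesDebug";

        static void Main(string[] args)
        {
            switch (args[0])
            {
                // Post-build, DEBUG only: e.g. GlobalAssemblies.exe copy-debug "$(TargetDir)"
                case "copy-debug":
                    Directory.CreateDirectory(DebugFolder);
                    foreach (string file in Directory.GetFiles(args[1]))
                        File.Copy(file, Path.Combine(DebugFolder, Path.GetFileName(file)), true);
                    break;

                // Pre-build, RELEASE only: ensures release builds see only C:\GlobalAssemblies.
                case "clean-debug":
                    if (Directory.Exists(DebugFolder))
                        foreach (string file in Directory.GetFiles(DebugFolder))
                            File.Delete(file);
                    break;
            }
        }
    }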
We have one gigantic solution in source control, on the main branch.
But every developer/team working on a smaller part of the project has its own branch, which contains one solution with only the few projects that are needed. That way, the solution is small enough to be easily maintained, and does not influence the other projects/DLLs in the larger solution.
However, there is one condition for this: there shouldn't be too many interconnected projects within the solution.
OK, having digested this information, and also answers to this question about project references, I'm currently working with this configuration, which seems to 'work for me':
One big solution, containing the application project and all the dependency assembly projects
I've kept all project references, with some extra tweaking of manual dependencies (right click on project) for some dynamically instantiated assemblies.
I've got three Solution folders (_Working, Synchronised and XTernal); given that my source control isn't integrated with VS (sob), this allows me to quickly drag and drop projects between _Working and Synchronised so I don't lose track of changes. The XTernal folder is for assemblies that 'belong' to colleagues.
I've created myself a 'WorkingSetOnly' configuration (last option in Debug/Release drop-down), which allows me to limit the projects which are rebuilt on F5/F6.
As far as disk is concerned, I have all my projects folders in just one of a few folders (so just one level of categorisation above projects)
All projects build (dll, pdb & xml) to the same output folder, and have the same folder as a reference path. (And all references are set to Don't copy) - this leaves me the choice of dropping a project from my solution and easily switching to file reference (I've got a macro for that).
At the same level as my 'Projects' folder, I have a 'Solutions' folder, where I maintain individual solutions for some assemblies - together with Test code (for example) and documentation/design etc specific to the assembly.
This configuration seems to be working ok for me at the moment, but the big test will be trying to sell it to my colleagues, and seeing if it will fly as a team setup.
Currently unresolved drawbacks:
I still have a problem with the individual assembly solutions, as I don't always want to include all the dependent projects. This creates a conflict with the 'master' solution. I've worked around this with (again) a macro which converts broken project references to file references, and restores file references to project references if the project is added back (a rough sketch of the conversion follows this list).
There's unfortunately no way (that I've found so far) of linking Build Configuration to Solution Folders - it would be useful to be able to say 'build everything in this folder' - as it stands, I have to update this by hand (painful, and easy to forget). (You can right click on a Solution Folder to build, but that doesn't handle the F5 scenario)
There is a (minor) bug in the Solution folder implementation which means that when you re-open a solution, the projects are shown in the order they were added, and not in alphabetical order. (I've opened a bug with MS, apparently now corrected, but I guess for VS2010)
I had to uninstall the CodeRushXPress add-in, because it was choking on all that code, but this was before having modified the build config, so I'm going to give it another try.
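For what it's worth, the project-to-file conversion half of that macro amounts to rewriting the .csproj: any ProjectReference whose project file is missing from disk becomes a plain Reference with a HintPath into the shared output folder. A rough C# equivalent follows (the actual macros are VS macros, not this; the output-folder path is a placeholder, and it assumes the assembly is named after the project file):

    using System;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    class ReferenceFixer
    {
        static readonly XNamespace Ns = "http://schemas.microsoft.com/developer/msbuild/2003";

        static void Main(string[] args)
        {
            string projectPath = Path.GetFullPath(args[0]);
            string outputFolder = @"C:\Projects\Output"; // shared build output (placeholder)

            XDocument project = XDocument.Load(projectPath);
            string baseDir = Path.GetDirectoryName(projectPath);

            foreach (XElement projRef in project.Descendants(Ns + "ProjectReference").ToList())
            {
                string include = (string)projRef.Attribute("Include");
                if (File.Exists(Path.Combine(baseDir, include)))
                    continue; // referenced project still exists: leave it alone

                // Assumes the assembly is named after the project file.
                string assembly = Path.GetFileNameWithoutExtension(include);
                projRef.ReplaceWith(new XElement(Ns + "Reference",
                    new XAttribute("Include", assembly),
                    new XElement(Ns + "HintPath", Path.Combine(outputFolder, assembly + ".dll")),
                    new XElement(Ns + "Private", "False"))); // matches "Don't copy" above

                Console.WriteLine("Converted " + assembly + " to a file reference");
            }

            project.Save(projectPath);
        }
    }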
Summary - things I didn't know before asking this question which have proved useful:
Use of solution folders to organise solutions without messing with disk
Creation of build configurations to exclude some projects
Being able to manually define dependencies between projects, even if they are using file references
This is my most popular question, so I hope this answer helps readers. I'm still very interested in further feedback from other users.
