I have a solution with multiple projects in it; say, for example, that 10 test-related projects have a dependency on NUnit. Currently my solution structure includes folders for Tools and Lib, so the full NUnit download might be in Tools and just the DLL in Lib.
I suppose any package manager (NuGet and OpenWrap being two I'm looking at) needs to create its own 'known' location for packages. With the old-fashioned way of package management, after manually updating my Lib folder, I knew that every project with a dependency on NUnit had just been updated.
But if I update with a package manager, I need to visit each and every project to ensure it is updated and pointing at the same reference, yes? And some DLLs may not be available as packages (I'm thinking of uNhAddIns right now), so you aren't completely liberated from manual package management. Meaning migration to the latest updates isn't done until each and every project has been updated by the package manager.
So I'm wondering if my understanding is correct, and what the best approach is to incorporating package management into a decent-sized solution - for example:
0) add to source control: NuGet 'packages' folder or OpenWrap 'wraps' folder
1) pick a DLL (start with one that you believe has minimal dependencies)
2) pick a project (ideally with minimal dependencies that might break)
3) if OpenWrap, get the package you want into 'wraps'
4) for each project:
a) add a reference to the subject DLL (manually if OpenWrap; NuGet will add it for you - see the console sketch after this list)
b) fix compile error as needed
c) run tests
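If you go the NuGet route, step 4a can be done from the Package Manager Console; a minimal sketch, assuming NUnit is the package being migrated and MyTests is a placeholder project name:

    PM> Install-Package NUnit                      # adds the reference to the current project
    PM> Update-Package NUnit -ProjectName MyTests  # later: update, one project at a time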
Does that sound right?
Cheers,
Berryl
To answer your questions: no, you don't have to do any of that with OpenWrap. All projects import all dependencies within a scope, so an update applies to everything.
I can't answer for the other package managers out there, but in OpenWrap you'd add the /wraps folder to source control with the packages that got pulled when you added or updated them. The process would be to first add the package from a remote repository (or create one from your existing assemblies if there isn't one available), and then manually remove the references from /lib. In OpenWrap we don't add the references to your csproj; we add them at build time, so if there's already a dependency in /lib, we won't add it. That means you can add all the packages and then remove the /lib references one after the other, running your tests every time.
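From the OpenWrap shell the flow looks roughly like this (a sketch; the exact verbs may vary between OpenWrap versions):

    o add-wrap nunit    (pulls the package into /wraps and records the dependency)
    o update-wrap       (later: refreshes every package in the scope at once)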
Hopefully this is a temporary problem until all DLLs are available as packages, which should happen rather quickly.
There have been many posts on this topic, but I have yet to find the "real" solution.
How does one manage their dependency tree (both compile time and runtime) using MSBuild project files (i.e. Visual Studio project files via project and file references)?
It is well known that project references from child projects will not be copied to an application bin directory if there is no compile time reference, even if there is a runtime dependency, and even if copy-local=true. Hence, any loosely coupled component will not be copied over.
The hack to solve this problem is to include the dependency in the parent project with copy-local=true. However, this basically destroys your dependency tree, as you no longer know where the dependency really belongs, and ultimately, as your app grows and morphs, you end up with a version of DLL hell. Your parent project ends up with tens to hundreds of DLLs, most of which are runtime dependencies of DLLs in child projects.
Another hack is to write a custom targets file and call it from every project file: http://blog.alexyakunin.com/2009/09/making-msbuild-visual-studio-to.html. But surely there is a better option. This is such a bread and butter thing. Java devs never have to deal with such trivial issues.
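For illustration, that kind of targets file boils down to something like the following sketch (paths and item names are made up), imported by each project file:

    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <ItemGroup>
        <!-- runtime-only dependencies that MSBuild's reference resolution won't copy -->
        <RuntimeDependency Include="$(SolutionDir)Lib\*.dll" />
      </ItemGroup>
      <Target Name="CopyRuntimeDependencies" AfterTargets="Build">
        <Copy SourceFiles="@(RuntimeDependency)"
              DestinationFolder="$(OutDir)"
              SkipUnchangedFiles="true" />
      </Target>
    </Project>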
From what I can gather, the Microsoft way to solve this problem is to register every dependency in the GAC on every dev, test and production machine. But this is stupid and annoying. I won't bother giving this option an educated rebuttal.
Avoiding the GAC option, how could one use MSBuild to manage a dependency tree that includes runtime only dependencies? How does Microsoft do it? Surely they don't run custom targets files like the one in the link above.
I hope someone from an enterprise .NET background can step up and offer some real advice on this. Otherwise I'm just going to have to rewrite all my build scripts in NAnt (shudder).
Thanks All.
UPDATE
In response to some comments, the following is a practical example of the issue from my current project.
The app is a Web Application project that exposes a suite of WCF services. It has an external domain DLL containing the external service classes and an internal domain DLL containing internal service POCOs, domain objects and DAOs. There is a separate integration DLL containing interfaces (DTOs) for all the internal domain classes that allows us to completely decouple the external and internal domains. The whole thing is wired up with Spring.net. I hope this is clear, let me know if you need more clarification.
My current build process is to use MSBuild to generate a deployment package for the web application (in TFS Build). So while the whole solution is built initially, only the output from the web application gets packaged. Therefore, the Web Application is treated as the dependency root and I expect that any loosely coupled child references should get copied over on build if they are set to 'copy-always=true'.
So the Web Application contains a reference to the external domain DLL which contains a reference to the internal domain DLL which contains many references to 3rd party libraries and various indirect and loosely coupled dependencies required by the 3rd party libraries.
The problem occurs when there is a 3rd party dependency in the internal domain DLL, e.g. Oracle.DataAccess, which is required by NHibernate at runtime. Even when I set 'copy-always=true' on these DLLs, they do not get copied to the Web App package. The only way I can include them in the package is to add these DLLs to the Web App's references, which I don't want to do because I then no longer have a meaningful dependency tree.
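For the record, that workaround amounts to adding something like this to the Web App's project file (the HintPath is illustrative):

    <Reference Include="Oracle.DataAccess">
      <HintPath>..\Lib\Oracle.DataAccess.dll</HintPath>
      <Private>True</Private> <!-- i.e. Copy Local -->
    </Reference>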
I hope this makes the issue clearer. Please let me know if anything is unclear. It's hard to describe this sort of stuff.
If anyone is also having a similar issue, please speak up and share your experience.
I really want to give you a better answer, but unfortunately you didn't include enough information about your solution/projects and your dependencies, so I'll offer several ideas in the hope that one of them works.
The easiest thing to do, as you said, is to set up a separate folder with all of your dependencies and create a targets file that will copy them to your bin folder. If your dependencies don't change frequently, that can work. If another team in your company is building them and they change frequently, this approach is not good.
Another simple approach: if you're referencing your dependencies from your solution only, you can change the build path so that they build directly into the bin folder of your main project. That way you don't have to reference them directly.
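A sketch of that, assuming the dependency projects sit next to the main project (the path is illustrative, and OutputPath is normally set per build configuration):

    <!-- in each dependency's csproj -->
    <PropertyGroup>
      <OutputPath>..\MainProject\bin\</OutputPath>
    </PropertyGroup>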
Use NuGet. If you have a separate team producing loosely coupled dependencies, it may make sense to set up a local NuGet repository and use it for distribution: http://juristr.com/blog/2012/04/using-nuget-to-distribute-our-company/
I hope that helps.
Suppose that a large project is split into multiple projects, each housed in an individual Mercurial repository (as per What's a good way to organize projects with shared dependencies in Mercurial?).
Suppose also, that a dependency manager is being used internally (we're using NuGet, but the same could apply to Maven) so that:
ProjectA depends on Ninject and MongoDB
ProjectB depends on ProjectA, and log4net
Projects A and B can be built independently; NuGet automatically downloads both OSS and internal dependencies from a NuGet server (ProGet in this case).
Suppose finally that ProjectB depends on v1.2.3.4-SNAPSHOT of ProjectA, and that a CI server continually updates the ProjectA.1.2.3.4-SNAPSHOT package on the NuGet server. This way, ProjectB is always developed against the latest checked-in changes of ProjectA.
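So ProjectB's packages.config would look roughly like this (versions are illustrative):

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <package id="ProjectA" version="1.2.3.4-SNAPSHOT" />
      <package id="log4net" version="2.0.0" />
    </packages>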
What if related changes are required in both Project A and B? What neat and clever ways are there to do this properly? Some ideas:
Developer checks out Project A and B. Changes are made to A, built, and checked in. Developer waits for CI server to build and update the NuGet server. Changes are made to B, built, and checked in. (I dislike this as code is being checked in as part of development process.)
Developer checks out Project A and B, and rewires B to use A's source as a dependency (instead of the NuGet package ProjectA). Changes are made to both A and B. Check-in is performed for both A and B together after proper testing, but the developer must take care not to check in the rewired dependency changes.
I'm not particularly good at this, so I think that someone will blow my ideas out of the water with something quite clever.
I don't know how NuGet does it, but with Maven your second idea works fine, apart from 'rewires B to use A source as a dependency' being unnecessary. You would just build A locally (using install) and it would be installed to your local Maven repo. Then when building B, it will pick up the newly built A, rather than the one from the central repo.
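Concretely, the flow is just this (directory names are illustrative):

    cd projectA && mvn install     # builds A and installs it into ~/.m2/repository
    cd ../projectB && mvn package  # B now resolves the freshly installed A from the local repo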
I can think of the following with NuGet:
In Project A, drop the packages at a central location (in this example I am placing them in c:\localpackages)
Build A with
MSBUILD.exe /t:Build,Package A.csproj
With Project B you could add a .nuget\nuget.config that specifies the package folder location.
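Something along these lines, matching the c:\localpackages folder above (this is the repositoryPath setting introduced in NuGet 2.1):

    <configuration>
      <config>
        <add key="repositoryPath" value="c:\localpackages" />
      </config>
    </configuration>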
(you can read more about specifying package folder location here http://docs.nuget.org/docs/release-notes/nuget-2.1)
This should pick up the changes made by Project A. It might get tricky when you have different versions of the NuGet package dropped for A.
Hope this helps.
My project has references to a bunch of projects from another solution (that solution is also linked to its own Git repository), but for convenience (to be able to debug and modify them from one solution) I include these projects in my web-project solution, which I want to deploy on AppHarbor. You could say these are sub-modules of my solution. But now I can't figure out the proper way to deploy the solution on AppHarbor.
More structured description:
--Solution
------DeployedProject
------[SolutionFolderForExternalProjects]
---------Proj1ReferencedFromDeployedProject
---------Proj2ReferencedFromDeployedProject
Solution - linked to repo1
Proj1 and Proj2 - also belong to an external solution which is linked to repo2, yet are still ADDED to repo1 explicitly -
git add SolutionFolderForExternalProjects/
How should I handle this sort of deployment?
AppHarbor needs all dependencies pushed for us to successfully build your project. Generally, having one solution reference projects in some other random location on your local drive, checked into a different repository, is probably not an optimal model. It's also bound to cause problems if some other person has to check out and build your code.
You should consider combining the two solution structures into one repository (you can still have multiple solution files; see the AppHarbor solution file convention). Alternatively, package the respective dependencies as NuGet packages and include them in your project using NuGet.
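For the NuGet route, the packaging side is roughly this (the version number and feed URL are placeholders, and your feed may also require an API key):

    nuget pack Proj1ReferencedFromDeployedProject.csproj
    nuget push Proj1ReferencedFromDeployedProject.1.0.0.nupkg -Source http://your-feed/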
I have many projects that use a bunch of the exact same classes.
Is there a way to add a script to Xcode so that each time I compile, it goes to a network folder and updates its files from there, if they are newer? (I do this step manually, but it would be great to automate it.)
Thanks
You could add a "Run Script" build phase to copy over files before compiling, if that's really what you want to do. That would catch updates for you, but I don't think it would help when new files are added (though copying them into a location that your project tracks with a folder reference, rather than a group, might work).
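A minimal sketch of such a phase, assuming the network share is mounted (all paths are made up):

    # Run Script build phase: copy over shared classes that are newer than the local copies
    rsync -au "/Volumes/Shared/CommonClasses/" "${SRCROOT}/Shared/"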
That said, I think there's a better solution. It sounds like you're reinventing a process for managing project dependencies when you could use existing tools. I would publish those shared classes as a library and add it to each project using CocoaPods and a reference to the library's git repository. That way you just need to run 'pod install' to get the latest version of your library. A good dependency manager gives you a clear picture of which version of your dependencies you're currently using, control over when to update them, handles installing dependencies of your dependencies, and avoids link errors from multiple static libraries each including a copy of the same common dependency.
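A Podfile entry for that might look like this (the name and URL are made up):

    pod 'SharedClasses', :git => 'https://yourserver/shared-classes.git'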
I have a project which is under source control using TFS. Actually, I have two solutions in one TFS collection. Suppose the first solution is called SolutionA, while the second is SolutionB. Each solution has its own project in TFS. Now the problem I have is that one of SolutionB's projects should reference an assembly which is built in SolutionA. So what's the best practice to achieve this?
Thanks
You have SolutionA that contains ProjectA, and SolutionB that contains ProjectB:
The two easiest approaches you can use for referencing ProjectA from ProjectB are:
Simply add ProjectA to SolutionB, and then ProjectB can use a project-reference to ProjectA. This means that you share the source code for ProjectA and make an independent build of it from within SolutionB as well as SolutionA. This will slightly slow down your SolutionB build (as you now always build ProjectA in it), but will allow you to make edits to the source code for ProjectA, and treat it as a normal part of SolutionB.
Build SolutionA and use a post-build step (or redirect the output path) to save the resulting ProjectA assembly (and its pdb and xml files, if you want to be able to debug into it) into a shared folder (e.g. C:\Libraries). Then use a file-reference from ProjectB to C:\Libraries\ProjectA.dll. This keeps your SolutionB build fast, and removes the need to have the ProjectA source code lying around, but means that any changes to ProjectA require a double build (first SolutionA to create the .dll and then SolutionB to pick up the changes to the .dll). (You can also opt to check in C:\Libraries to source control so another team could just provide a pre-built binary for ProjectB rather than you having to have anything to do with SolutionA yourself)
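For option 2, the post-build event on ProjectA can be as simple as this (using the C:\Libraries folder from the example above):

    xcopy /Y "$(TargetPath)" "C:\Libraries\"
    xcopy /Y "$(TargetDir)$(TargetName).pdb" "C:\Libraries\"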
To clarify what I mean by Project-references and File-references: In your Solution explorer, right-click on the project's References folder and choose "Add Reference...". In the dialog box that appears, you can choose the tab "Projects" to list the projects in your Solution, and reference one of them (a project-reference). Or choose the "Browse" tab to browse to find a pre-built assembly .dll file (a file-reference)
(You could also install the assembly from ProjectA into the GAC, and then use the add reference dialog to reference it from the ".NET" tab, but IMHO this is a more complicated approach to use as you have more mess to clean up to remove the dll from your system)
There are a couple of options.
If the same team manages both solutions, I would highly recommend just putting them both in the same team project, or sharing the same source repository between both projects.
If they are managed by different teams, it might make sense to give Solution B a binary copy of Solution A and update it when A does a release.
If neither of those works, you could add a custom MSBuild script in Solution B which gets the latest version of Solution A from source control and builds it before building B. Something like this:
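(A sketch only: the server path and solution location are made up, and it assumes tf.exe is on the path.)

    <Target Name="BuildSolutionA" BeforeTargets="Build">
      <Exec Command="tf get $/TeamProjectA/SolutionA /recursive" />
      <MSBuild Projects="..\SolutionA\SolutionA.sln" Targets="Build" />
    </Target>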
My personal opinion is that team projects tend to get overused. I like to have just one team project per team and put all the code in the repository there.
Guys, I found a better solution. When I create SolutionB, I just add ProjectA to SolutionB without branching. To do that, click File -> Source Control -> Add Project From Source Control.
Voila :-)