Why do we need a package manager like NuGet? - visual-studio

I know a package manager like NuGet helps us when we want to use third-party components.
From the NuGet CodePlex page:
NuGet is a free, open source developer focused package management
system for the .NET platform intent on simplifying the process of
incorporating third party libraries into a .NET application during
development.
There are a large number of useful 3rd party open source libraries out
there for the .NET platform, but for those not familiar with the OSS
ecosystem, it can be a pain to pull these libraries into a project.
Let’s take ELMAH as an example. It’s a fine error logging utility
which has no dependencies on other libraries, but is still a challenge
to integrate into a project. These are the steps it takes:
Find ELMAH
Download the correct zip package.
“Unblock” the package.
Verify its hash against the one provided by the hosting environment.
Unzip the package contents into a specific location in the solution.
Add an assembly reference to the assembly.
Update web.config with the correct settings which a developer needs to search for.
And this is for a library that has no dependencies. Imagine doing this
for NHibernate.Linq which has multiple dependencies each needing
similar steps. We can do much better!
NuGet automates all these common and tedious tasks for a package as
well as its dependencies. It removes nearly all of the challenges of
incorporating a third party open source library into a project’s
source tree
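(For comparison, the NuGet flow for the same ELMAH scenario is a single Package Manager Console command; the package id here is the one published on the official feed, and the package's install step takes care of the web.config entries described above:)

    Install-Package elmah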
These steps are simple tasks that we do when we set up a project. Is NuGet only there to automate adding third-party components and to reduce the chance of errors in configuration files, or does it have more responsibilities than that?

Its value is hidden in plain sight: a package manager such as NuGet helps you deal with software dependencies through automation. Many assume it is only meant for open source or third-party components, but you could equally well use it for your own internal packages.
The great things about NuGet include (to name a few benefits):
NuGet encourages reuse of components because you implicitly rely on actual "releases" (even if pre-release), instead of branching sources
you can get rid of binaries bloating your VCS repositories (package restore feature)
it forces package creators to think about the way the package will be consumed, and lets them handle configuration of the component during package installation (who knows better how to configure the package than its creators?). Think of ELMAH as an example.
automating package creation and publication on a package repository effectively is a form of continuous delivery (for software components). OctopusDeploy even takes it a step further and enables packaging entire Web sites ready for deployment.
NuGet encourages (and sometimes forces) you to follow some ALM best practices. E.g. a package has a version, so you have to think about your versioning strategy (e.g. SemVer.org)
NuGet integrates with SymbolSource.org (which also has a Community edition to set up your own): this allows one to easily debug released packages without having to ship this info all the time
having one or more package repositories makes it easy for the organization to maintain a dependency matrix, or even build an inventory of OSS licenses that are in use by several projects
NuGet notifies you about available package updates
Creating packages makes people think about component architecture (all dependencies should be packaged as well)
Dependencies of a package are automatically resolved (so you can't forget any)
NuGet is smart enough to add assembly binding redirects when required
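To illustrate that last point, a binding redirect that NuGet adds to app.config or web.config looks roughly like this (the assembly name, public key token, and version numbers below are just an example):

    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <dependentAssembly>
          <!-- redirect all older versions to the version actually installed -->
          <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
          <bindingRedirect oldVersion="0.0.0.0-9.0.0.0" newVersion="9.0.0.0" />
        </dependentAssembly>
      </assemblyBinding>
    </runtime>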
The above list is non-exhaustive, but I hope I covered the key benefits in this answer. I'm sure there are more.
Cheers,
Xavier

One reason to use NuGet is that you don't have to ship all the libraries with your project, which reduces the project size. With the NuGet Power Tools, by specifying the package versions in the packages.config file, you will be able to download all the required libraries the first time you build the project.
Live example: reduced project size matters when deploying a project. If a solution has 500 MB of code and 200 MB of packages, that extra 200 MB really costs you every time you upload the project. Instead of uploading the actual DLL files, we just list their references in the packages.config file.
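For reference, a packages.config that package restore works from looks something like this (the package ids and versions are illustrative):

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <!-- restored into the packages folder on build instead of being checked in -->
      <package id="elmah" version="1.2.2" targetFramework="net45" />
      <package id="Newtonsoft.Json" version="9.0.1" targetFramework="net45" />
    </packages>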

Related

How to manage stable binaries and avoid the risk of CI rebuilds when building the install package?

I am looking for a tool to manage the collection of binary files (input components) that make up a software release. This is a software product and we have released multiple versions each year for the last 20 years. The details and types of files may vary, but this is something many software teams need to manage.
What's a Software Release made of?
A mixture of files go into our software releases, including:
Windows executables/binaries (40 DLLs and 30+ EXE files).
Scripts used by the installer to create a database
API assemblies for various platforms (.NET, ActiveX, and Java)
Documentation files (HTML, PDF, CHM)
Source code for example applications
The full collected files for a single version of the release are about 90MB. Most are built from source code, but some are 3rd party.
Manual Process
Long ago we managed this manually.
When starting each new release the files used to build the last release would be copied to a new folder on a shared drive.
The developers would manually add or update files in this folder (hoping nothing was lost or deleted accidentally).
The software installer script would be compiled using the files in this folder to produce a SETUP.EXE (output).
Iterate steps 2 and 3 during validation & testing until release.
Automatic Process
Some years ago we adopted CI (building our binaries nightly or on-demand).
We resorted to putting 3rd party binaries under version control since they usually don't change as often.
Then we automated the process of collecting & updating files for a release based on the CI build outputs. Finally we were able to automate the construction of our SETUP.EXE.
Remaining Gaps
Great so far, but this leaves us with two problems:
Rebuilding Assemblies: The CI mostly builds projects when something has changed, but when forced it will re-compile a binary that doesn't have any code change. The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?).
Latest vs Stable: Mostly our CI machine builds the latest versions of each project. In some cases this is ok, but often we want to release an older tested or stable version. To do this we have separate CI projects for the latest and stable builds - this works but is clumsy.
Thanks for your patience if you've got this far :-)
I Still Haven't Found What I'm Looking For
After some time searching for solutions it seems it might be easier to build our own solution, but surely someone else has solved these problems before!?
What we want is a way to store and manage binary files (either outputs from CI, or 3rd party files) such that each is tagged with a version (v1.2.3.4) that allows:
The CI to publish new versions of each binary (but reject rebuilt versions that already exist).
The development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
package name
version
path/destination in the release folder
The automatic packaging script to use the recipe to collect the required files and compile the install package (e.g. SETUP.EXE).
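As a purely hypothetical sketch of what such a recipe might look like (the format, component names, and paths below are made up, not the syntax of any existing tool):

    <!-- hypothetical release recipe -->
    <release version="5.4.0">
      <component name="Core.Engine" version="1.2.3.4" dest="bin\" />
      <component name="Api.DotNet"  version="2.0.1.0" dest="sdk\dotnet\" />
      <component name="UserGuide"   version="5.4.0.0" dest="docs\" />
    </release>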
I am aware of past debates about storing binaries in a VCS. For now I am looking for a better solution. That approach does not appear ideal for long-term ongoing use (e.g. how to prune old binaries)... amongst other issues.
I have tried some artifact repositories currently available. From my investigation these provide a solution for component/artifact storage and version control. However they do not provide tools for managing a list of components/artifacts to include in a software release.
Does anybody out there know of tools for this?
Have you found a way to get your CI infrastructure to address these remaining issues?
If you're using an artifact repository to solve this problem, how do you manage and automate the process?
This is a very broad topic, but it sounds like you want a release management tool (e.g. BuildMaster, developed by my company Inedo), possibly in conjunction with a package management server like ProGet (which you tagged, and is how I discovered this question).
To address some specific questions you have, I'll associate it with a feature that would solve the problem:
A mixture of files go into our software releases, including...
This is handled in BuildMaster with artifacts. This video gives a basic overview of how they are manually added to releases and deployed to a file system: https://inedo.com/support/tutorials/buildmaster/deployments/deploying-a-simple-web-app-to-iis
Of course, once that works to satisfaction, you can automate the import of artifacts from your existing CI tool, create them from a BuildMaster deployment plan itself, pull them from your package server, whatever. Down the line you can also have your CI tool call the BuildMaster release management API to create a release and automatically have it include all the artifacts and components you want (this is what most of our customers do now, i.e. have a build step in TeamCity create a release from a template).
Rebuilding Assemblies ... The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?)
You can mostly assume they are equivalent functionally, but it's only the times that they are not that problems arise. This is especially true with package managers that do not lock dependencies to specific version numbers (i.e. NuGet, npm). You should be releasing exactly the same binary that was tested in previous environments.
[we want] the development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
This is handled with releases. A developer can choose the release's name, dates, etc., and associate it with a pipeline (i.e. a set of testing stages that the artifacts are deployed to), then can "click the deploy button" and have the automation do all the work.
Releases are grouped by "application", similar to a project in TeamCity. As a more advanced use case, you can use deployables. Deployables are essentially individual components of an application you include in a release; in your case the "Documentation" could be a deployable, and maybe contain an artifact of the .pdf and .docx files. Deployables from other applications (maybe a different team is responsible for them, or whatever) can then be referenced and "included" in a release, or you can reference ones from a past release.
Hopefully that provides some overview and fits your needs. Getting into this space is a bit overwhelming because there are so many terms, technologies, and methodologies, but my advice is to start simple and then slowly build upon it, e.g.:
deploy a single, manually uploaded component through BuildMaster to a share drive, then manually deploy it from there
add a deployment plan that imports the component
add a second plan and associate it with the 2nd stage that takes the uploaded artifact and deploys it to the target, bypassing the need for the share drive
add more deployment plans and associate them with pipeline stages and promote through them all to "close out" a release
add an agent and deploy to that instead of the default localhost server
add more components and segregate their deployment with deployables
add event listeners to email team members at points in the process
start adding approvals if you require gated "sign-offs"
and so on.

Preventing NuGet package update in team environment

We have a large solution with many projects, and many developers working together.
We need a way to validate NuGet package versions to ensure no developer accidentally breaks the build with a package update.
Ideally, is there a way to validate and interrupt/stop a NuGet package installation if the package is known to be incompatible? We know we can validate at build time, but ideally I'd like to be able to stop and inform the developer that the newer version is not supported, rather than have them go off and build against the newer package, only to discover at build time that they've wasted effort on a package that might have different calls, etc.
If this is not possible, what would be the easiest way to do this at build time? I'm thinking of a pre-build script, but I'm interested in other ideas. Effectively, the script would compare package versions across the other projects and warn if an incorrect version is found (and stop a publish to a shared location during the build).
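For what it's worth, one mechanism we've looked at is the allowedVersions attribute in packages.config, which (as I understand it) constrains the versions that Update-Package will consider, though it doesn't stop a developer from editing the file by hand:

    <packages>
      <!-- only versions in the [6.0, 7.0) range are allowed when updating -->
      <package id="Newtonsoft.Json" version="6.0.8" allowedVersions="[6.0,7.0)" targetFramework="net45" />
    </packages>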

What is the recommended way to set up projects like this?

We are working on a large project. The project has multiple external sites and multiple internal sites all stored in Subversion.
The external sites allow a customer to request various things we provide, pay utility bills, and more. We decided to break many of these functions apart because most work completely differently from the others. So this is one Visual Studio solution with the WebUI and the database layer broken into two projects each. For instance, utility billing has a Utility.WebUI project and a Utility.Domain project. All DB/business logic is kept in the domain project.
The internal sites bridge the gap between the back-office system (IBM i) and the web database. Also will replace/enhance some of our older RPG programs. In theory they should use the exact same database logic that the external sites use because they access the same database right? What is the best way to reference these projects from a different solution? Should I just add a reference to the dll or should I import that project from the external application solution into the internal application solution?
It comes down to this: we have two developers working on this project. I do most of the back-end coding; the other developer does most of the GUI coding. So we need to make sure that this project works on multiple workstations.
Does this make sense? Any thoughts?
Use the svn:externals property to reference the shared project into your project(s).
You have to choose between 1) referencing the directory containing the shared project's source code (i.e. where the csproj and cs files are located) or 2) referencing the directory containing the shared project's build output (assembly / dll).
I normally prefer method 1) since it makes modifications to the shared project's source code easier (you can make changes without having to open the shared project's solution in a second instance of Visual Studio). If you don't intend to make changes to the shared project often then method 2) might be better. It reduces compile time and prevents accidental modifications of the shared project's source code. Both methods are fine - matter of taste.
It is recommended for both methods that you version your shared project. i.e. create tags with version numbers and reference the tags, not the trunk. When a new version of the shared project comes out you can update the svn:externals property of your other project(s) with the new version number, run "svn update" to download the new version of the shared project, and recompile. This works especially well if you have a build server for the shared project that does the tagging for you automatically.
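For illustration, referencing a tagged version of the shared project might look roughly like this (the repository layout, folder names, and version number are just placeholders):

    svn propset svn:externals "^/shared/CommonProject/tags/1.2.0 CommonProject" .
    svn update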
I think you can use a sort of "commons" solution that contains the common projects, and then refer to these projects from your main solutions using an SVN external pointing to the project folder in the SVN trunk.
The commons SVN repository must follow the suggested repository structure (trunk, branches, tags) so that you always have stable commons projects.
In this scenario you can consider using a dependency management tool, such as NPanday or NDepend, where you declare which version of which assemblies every project depends on; using these tools you can have a local repository (such as Artifactory or Nexus) of binary assemblies to refer to, or choose to use SVN externals to refer directly to source code.

How to check in the output of a project/solution (DLL) after a successful build in TFS 2010

Just wondering about the recommended process for checking in the output of a project or solution after a successful build.
For example, the build relates to a common library. After a change, I want the output checked in to a known location so other solutions can reference it.
Some examples might be
Custom Workflow activities
Invoking TF exe directly
I would not check an output in. Instead, I would move it to a well-known location, probably a file share.
I don't do this currently but plan to investigate NuGet as a solution to this scenario. MSDN has some articles showing how to incorporate NuGet into your projects and host a private gallery of your own NuGet packages, including examples of a build that compiles your common code, packages it, and publishes it to your private NuGet gallery. Then in your projects you would consume the NuGet package of the common library you wish to use.
Main MSDN article describing this:
http://msdn.microsoft.com/en-us/magazine/hh781026.aspx
Other resources:
http://nuget.org/
http://nugetter.codeplex.com/
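As a rough sketch, the common library's build might pack and push a package along these lines (the project name, version, feed URL, and API key below are placeholders):

    nuget pack Common.Library.csproj -Properties Configuration=Release
    nuget push Common.Library.1.0.0.nupkg -Source https://your-internal-feed/nuget -ApiKey <your-api-key>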
Have a look at this post from Ewald Hofman; it updates certain files and checks them in using a custom activity. You could use the same process, but this involves customizing the build process template and deploying custom build activities to all build agents.
You might also want to investigate the free AIT Dependency Manager, which can download a specific or the latest version (filtered on build outcome or quality) of one build from the build server as a reference for another build (also from inside Visual Studio). This is a lot more flexible than constantly checking in the build output, and it allows your dev branch to always get the latest (unstable) version while your release branch always gets the latest well-tested and approved version.

Managing internal 3rd Party Dependencies

We have a lot of different solutions/projects which are managed by different teams. Our solution needs to reference several projects that another team owns. We don't want to add these dependencies as project references because we do not intend to modify that code, we just want to use it. Also, we already have quite a few projects in our solution and don't want to add a bunch more, since it will slow down Visual Studio. So we are building these projects in a separate solution and adding them as file references to our solution.
My question is, how do people manage these types of dependencies? Should I just have some automated process that looks for changes to those projects, builds them, and checks the DLLs into our source control, after which we treat them like other 3rd party dependencies? Is there a recommended way of doing this?
One solution, although it may not necessarily be what you are looking for, is to have each dependent sub-system perform a release. This release could be in the form of a MSI install, or just a network share of assemblies. When a significant change is made, that team could let you know, and you could run the install or a script to copy the files.
Once you got the release, you could put them into the GAC, that way you would not have to worry about copying them to your project bin folders.
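(If you go the GAC route, installing an assembly on a machine is typically done with the gacutil tool from the Windows SDK; the assembly has to be strong-named, and the file name below is just an example:)

    gacutil /i SharedComponent.dll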
Another solution, assuming you are using a build server or continuous integration of some kind, is to have a post-build step or process stage the files. Then at any given moment, the developers on the other teams could grab the new files, or have a script or batch file pull them down locally.
EDIT - ANOTHER SOLUTION
It might be best to ask why you have these dependencies. Do you really need them locally when building your part of the application? Could you mock out the dependencies in your solution, allowing you to code, build, and run unit tests? The actual application would then wire these up in your DEV/Test/Prod environments. Keeping your solution decoupled and dependency-free may be a better solution for the individual team. Leave the integration and coupling for when the application runs in a real setting.
(Not a complete answer, but still:)
Any delivery is better stored in a file/binary repository, as opposed to a VCS, which is meant to manage source history.
We prefer managing those deliveries in a repository like Nexus, and we use Maven to get back the right dependencies.
Even though those tools are more Java-oriented, Nexus can store anything, and Maven is only there to read the pom.xml of each artifact and compute the right dependencies.
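For reference, a consuming project declares such a dependency in its pom.xml roughly like this (the group, artifact, and version are illustrative):

    <dependency>
      <groupId>com.example.shared</groupId>
      <artifactId>shared-component</artifactId>
      <version>1.4.2</version>
    </dependency>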
