MVC Web App Feature Development / Production Strategies? - model-view-controller

There is a web app. Let's say I want to add a feature. I can write some code, test it locally, make sure it works, then publish it so it is available to the public. Some features, though, are complex and cannot be written, tested and shipped that quickly.
I want to make it so that a feature I am currently working on is not available to the public even though I publish the app.
Let's say I want to add a custom breadcrumb feature to the app (just for one page to keep it simple). I can wrap the new block of code in a check of some IsProductionReady flag maintained in a config file; once I am done I can set IsProductionReady to true and the feature shows up.
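For example, something along these lines is what I have in mind (just a sketch; the config key, controller and view model names are made up):

using System.Configuration;
using System.Web.Mvc;

public class PageViewModel
{
    public string Title { get; set; }
    public string Breadcrumb { get; set; } // only populated when the toggle is on
}

public class PageController : Controller
{
    public ActionResult Details(int id)
    {
        var model = new PageViewModel { Title = "Some page" };

        // Read the toggle from <appSettings>; until it is flipped to "true",
        // the published app behaves exactly as before.
        bool ready;
        bool breadcrumbReady = bool.TryParse(
            ConfigurationManager.AppSettings["Features.Breadcrumb.IsProductionReady"],
            out ready) && ready;

        if (breadcrumbReady)
        {
            model.Breadcrumb = BuildBreadcrumb(id); // the half-finished feature
        }

        return View(model);
    }

    private string BuildBreadcrumb(int id)
    {
        // placeholder for the real breadcrumb logic
        return "Home > Section > Page " + id;
    }
}

The web.config entry would just be an appSettings key such as Features.Breadcrumb.IsProductionReady set to "false", and flipping it to "true" on the next publish makes the breadcrumb visible.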
I also want to be able to switch to any other features / changes and publish them without touching that code and without showing any signs of the breadcrumb feature being in development. When I am done with the feature, I want to be able to simply make it available to the public.
What are the best practices or strategies to maintain a certain state of a feature? What is the best way to structure it?

If you're using Git, it's better to have a separate branch for each new feature. After the branch has been tested and approved you can merge it into your main develop branch, run another regression test (because different features may interfere with each other's functionality) and then move it to the production branch.
Take a look at these URLs; I presume you will find your desired scenarios covered in them:
http://nvie.com/posts/a-successful-git-branching-model/
http://martinfowler.com/bliki/FeatureBranch.html
https://www.atlassian.com/git/tutorials/comparing-workflows/feature-branch-workflow

I would keep separate branches on GitHub for development and production, with the same structure in both. When your feature is ready, merge it into the production branch.

Related

How to keep Sitecore database consistent?

We have 5 environments - Development, UAT, Staging, Live and DR.
With more than 100 content editors, the Live Sitecore database content grows quickly.
So almost every fortnight the content tree is out of sync with the Development and UAT environments. When we try to develop new things, we are working against outdated content, and sometimes new functionality breaks the live environment.
Can anyone suggest an ideal way of keeping all the Sitecore databases in sync, apart from creating packages and updating them regularly, so that we can follow proper CI?
Razl is not a solution that you should rely on for continuous integration; it's merely a database comparison tool.
Setting up proper CI for Sitecore is exactly what I'm doing for my current project and this is what we came up with:
TDS:
If you are willing to spend money, then take a look at TDS (Team Development for Sitecore).
It integrates with Visual Studio and provides you with tools for serialization of Sitecore items which you can then store in your source control.
A build server would then be able to pick up any changes in those serialized files and deploy them to your Test, Staging and even Production environment.
Alternative:
A free alternative to this is to use a combination of three open source modules:
Unicorn (for automatic serialization of your changes to Sitecore items)
Courier (for package generation based on serialized items)
Sitecore Ship (for automated deployment of Sitecore packages)
I'm working with the free alternative myself at the moment and it works great.
Have you come across Razl? It is a Sitecore database comparison tool.
This is what they say about Razl:
Razl allows developers to have a complete side by side comparison between two Sitecore databases; highlighting features that are missing or not up to date. Razl allows you to find that one missing template, move it to the correct database.
It is quite incorrect to call Razl 'merely a database comparison tool' - from the first release, you could copy subtrees from one Sitecore database to another.
The initial drawback was that it could not be automated, but with Razl 3.0 (I think it started with Razl 2.4), Razl scripting was added, so you can easily automate Sitecore database syncing between environments.
To see how others use it, see Sean Holmesby's comments:
https://community.sitecore.net/developers/f/8/t/1767
and Nikola Gotsev's comments:
https://sitecorecorner.com/2014/10/27/the-amazing-world-of-razl-part-1/
It is very inexpensive, and with v3.0, it is much more powerful than the initial release, which required manual manipulation via the GUI interface.

Xcode: How to handle many slightly different targets/apps based on the same source code

For one of our company's products we need to generate iOS apps with slight changes (different logos, slightly different settings in the Info.plist, etc.); but basically they are all based on the same source code.
Now that we are starting to get some traction, it becomes slightly annoying to have like 20-30 different schemes and targets in the main Xcode project - plus it's a pain to have colleagues modify it, because it tends to break things now and then.
Unfortunately I'm not very familiar with Xcode's inner guts; but I'm pretty sure someone else already has solved this before.
Some ideas that came to my mind:
Have a separate Xcode project and...
... import the "base code" using a Framework/Library.
... add the "base code" as a project (Dependencies?)
Not sure what the best practice is here; ideally there would be a clear separation between the project code and the configuration of customer app targets. Even better, this would be maintainable by co-workers without the risk of accidentally breaking the base code.
Any ideas/thoughts/suggestions?
It depends on the use case. Do you need to release (Archive) the targets for a synchronized deployment? Or are these client customizations that get released independently? How big is the development team?
There are really only a few options either way.
Option 1
Manage the products as separate targets. This is basically what you are doing now. You can set it up so that building one target builds them all to save yourself some agony. The major drawback here is that you are managing images/plist data separately.
This is the way I usually handle it. The customizations are usually one-offs, and you can specify a different precompiled header to alter some functional differences.
Option 2
Manage the products as separate branches in CVS. This can be a bit of a headache to handle but works better if there is a larger team working in the codebase. Keep the functional code on one branch. Maintain an independent branch for each product. Merge changes from the functional branch into the product branches as needed.
Option 3
Manage the products as separate sub-projects. This is very similar to option 1, as you still need to maintain the settings separately, but the advantage is that you are less likely to mess up other products when making changes to the underlying XML of the project files.
Factors to consider are the size of your development team and its existing workflow.

What is a good solution structure to allow easy customisation of a product on a per client basis?

I am looking for some advice on how to allow easy customisation and extension of a core product on a per client basis. I know it is probably too big a question. However we really need to get some ideas as if we get the setup of this wrong it could cause us problems for years. I don't have a lot of experience in customising and extending existing products.
We have a core product that we usually bespoke on a per-client basis. We have recently rewritten the product in C# 4 with an MVC3 frontend. We have refactored and now have 3 projects that compose the solution:
Core domain project (namespace - projectname.domain.*) - consisting of domain models (for use by EF), domain service interfaces etc (repository interfaces)
Domain infrastructure project (namespace - projectname.infrastructure.*) - implements the domain services: EF context, repository implementations, file upload/download interface implementations, etc.
MVC3 project (namespace - projectname.web.*) - consists of controllers, viewmodels, CSS, content, scripts etc. It also has IoC (Ninject) handling DI for the project.
This solution works fine as a standalone product. Our problem is extending and customising the product on a per client basis. Our clients usually want the core product version given to them very quickly (usually within a couple of days of signing a contract) with branded CSS and styling. However 70% of the clients then want customisations to change the way it functions. Some customisations are small such as additional properties on domain model, viewmodel and view etc. Others are more significant and require entirely new domain models and controllers etc.
Some customisations appear to be useful to all clients, so periodically we would like to change them from being customisations and add them to the core.
We are presently storing the source code in TFS. To start a project we usually manually copy the source into a new Team Project, change the namespace to reflect the client's name, start customising the basic parts and then deploy to Azure. This obviously results in an entirely duplicated code base and I'm sure isn't the right way to go about it. I think we should probably have something that provides the core features and extends/overrides them where required. However I am really not sure how to go about this.
So I am looking for any advice on the best project configuration that would allow:
Rapid deployment of the code – so easy to start off a new client to allow for branding/minor changes
Prevent the need for copying and pasting of code
Use of as much DI as possible to keep it loosely coupled
Allow for bespoking of the code on a per client basis
The ability to extend the core product in a single place and have all clients gain that functionality if we get the latest version of the core and re-deploy
Any help/advice is greatly appreciated. Happy to add more information that anyone thinks will help.
I may not answer this completely, but here is some advice:
Don't copy your code, ever, whatever the reason.
Don't rename the namespace to identify a given client version. Use branches and continuous integration for that.
Choose a branching model like the following: a root branch called "Main", then one branch from Main per major version of your product, then one branch per client. When you develop something, decide from the start which branch you'll develop in, depending on what you're doing (a client-specific feature goes in the client branch, a global feature in the version branch, or in the client branch if you want to prototype it there first, etc.)
Try your best to rely on Work Items to track the features you develop, so you know in which branch each one is implemented; this eases merging across branches.
Targeting the right branch for your dev work is the most crucial thing. You don't necessarily have to define hard rules about "what to do on which occasion", but try to be consistent.
I've worked on a big 10-year project with more than 75 versions, and what we usually did was:
Next major version: create a new branch from Main, develop inside it.
Next minor version: develop in the current major branch, use labels to mark each minor version inside that branch.
Some complex functional features were developed in the branch of the client that asked for them, then reverse-integrated into the version branch once we succeeded in "unbranding" them.
Bug fixes go in the client branch, then are ported to the other branches when needed (you have to use Work Items for that or you'll easily get lost).
That's my take on it; others may have different points of view. I relied a lot on Work Items for traceability of the code, which helped a lot with delivery and reporting.
EDIT
OK, I'll add some thoughts/feedback about branches:
In Software Configuration Management (SCM) you have two features to help you with versioning: branches and labels. Neither is better or worse than the other; it depends on what you need:
A label is used to mark a point in time, so that you can later go back to that point if needed.
A branch is used to "duplicate" your code so that you can work on two versions at the same time.
So using branches only depends on what you want to be able to do. If you have to work on many different versions (say, one per client) at the same time, there's no other way to deal with it than using branches.
To limit the number of branches, you have to decide what gets a new branch and what is only marked with a label: client-specific versions, major versions, minor versions, service packs, etc.
Using branches for client versions looks like a no-brainer.
Using one branch for each Major version may be the toughest choice for you to make. If you choose to use only one branch for all major versions, then you won't have the flexibility to work on different major versions at the same time, but your number of branches will be the lowest possible.
Finally, Jeremy Thompson has a good point when he says that not all your code should be client-dependent: there are some libraries (typically the lowest-level ones) that shouldn't be customized per client. What we usually do is use a separate branch tree (which is not per client) for framework, cross-cutting and low-level service libraries, then reference these projects in the per-client version projects.
My advice is to use NuGet for these libraries and create NuGet packages for them, as it's the best way to define versioned dependencies. Defining a NuGet package is really easy, as is setting up a local NuGet server.
I'm just worried that with 30 or 40 versions (most of which aren't that different) branching adds complexity.
+1 Great question; it's more of a business decision you'll have to make:
Do I want a neat code base where maintenance is easy and features and fixes get rolled out quickly to all our customers,
or do I want a plethora of instances of one codebase split up, each with tiny tweaks that are hard (EDIT: unless you're an ALM MVP who can "unbrand" things) to merge into a trunk?
I agree with almost everything @Nockawa mentioned, except that IMHO you shouldn't substitute branches for extending your code architecture.
Definitely use a branch/trunk strategy, but as you mentioned, too many branches makes it harder to quickly roll out site-wide features and hinders project-wide continuous integration. If you wish to prevent copy/pasting, limit the number of branches.
In terms of a coding solution here is what I believe you are looking for:
Modules/Plug-ins, Interfaces and DI is right on target!
Deriving custom classes off base ones (extending the DSL per customer, Assembly.Load())
Custom reporting solution (instead of new pages a lot of custom requests could be reports)
Pages with spreadsheets (hehe I know - but funnily enough it works!)
Great examples of the module/plugin point are CMSs such as DotNetNuke or Kentico. Other ideas could be gained by looking at Facebook's add-in architecture, plugins for audio and video editing, 3D modeling apps (like 3DMax) and games that let you build your own levels.
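As a rough sketch of how the module/plug-in idea could hang together with DI (only an illustration; the IClientModule interface, the Plugins folder convention and the Ninject wiring here are my assumptions, not an existing API):

using System;
using System.IO;
using System.Linq;
using System.Reflection;
using Ninject;

// Contract every client module implements.
public interface IClientModule
{
    string Name { get; }
    void RegisterServices(IKernel kernel); // each module binds its own services
}

public static class PluginLoader
{
    // Load every module DLL deployed to the client's Plugins folder and let
    // each one register its services with the Ninject container at startup.
    public static void LoadClientModules(IKernel kernel, string pluginFolder)
    {
        foreach (var dll in Directory.GetFiles(pluginFolder, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(dll);

            var moduleTypes = assembly.GetTypes()
                .Where(t => typeof(IClientModule).IsAssignableFrom(t)
                            && !t.IsAbstract && !t.IsInterface);

            foreach (var type in moduleTypes)
            {
                var module = (IClientModule)Activator.CreateInstance(type);
                module.RegisterServices(kernel);
            }
        }
    }
}

The core product then only depends on interfaces, and which concrete behaviour a client gets is decided purely by which module DLLs you deploy for them.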
The ideal solution would be an admin app where you can choose your modules (DLLs), tailor the CSS (skin), script the DB, and auto-deploy the solution up to Azure. To achieve this goal plugins would make much more sense, and the codebase won't be split up. Also, when an enhancement is made to a module, you can roll it out to all your clients.
You could easily do small customisations, such as additional properties on the domain model, viewmodel and view, with user controls, derived classes and function overrides.
Do it really generically. Say a customer wants a label that tallies everyone's age in the system: make a function such as int SumOfField(string dbFieldName, string whereClause) and then, for that customer's site, have a label that binds to the function. Then, say, another customer wants to count the number of product purchases per customer; you can re-use it: SumOfField("product.itemCount", "CustomerID=1").
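A rough sketch of what such a helper might look like (assuming plain ADO.NET and a connection string named "Default"; in real code you would whitelist or parameterize the inputs instead of concatenating SQL):

using System;
using System.Configuration;
using System.Data.SqlClient;

public static class ReportHelpers
{
    // Sums a numeric column, e.g. SumOfField("product.itemCount", "CustomerID=1").
    // dbFieldName is expected in "table.column" form.
    public static int SumOfField(string dbFieldName, string whereClause)
    {
        var parts = dbFieldName.Split('.');
        string sql = string.Format(
            "SELECT COALESCE(SUM({0}), 0) FROM {1} WHERE {2}",
            parts[1], parts[0], whereClause);

        string connectionString =
            ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            return Convert.ToInt32(command.ExecuteScalar());
        }
    }
}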
More significant changes that require entirely new domain models and controllers etc. would fit the plug-in architecture. An example might be a customer who needs a second address field: you would tweak your current Address user control to be a plug-in usable on any page, with settings telling it which DB table and fields it maps to, and it would implement an interface for CRUD operations.
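For instance, the contract such a control plugs into might look something like this (names are purely illustrative):

using System.Collections.Generic;

// A pluggable control declares which table/fields it edits and exposes CRUD.
public interface ICrudWidget
{
    string TableName { get; set; }        // which DB table the control maps to
    string[] FieldNames { get; set; }     // which columns it reads/writes

    int Create(IDictionary<string, object> values);
    IDictionary<string, object> Read(int id);
    void Update(int id, IDictionary<string, object> values);
    void Delete(int id);
}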
If the functionality is customised per client in 30-40 branches, maintainability will become very hard, as I get the feeling you won't be able to merge them together (easily). If there is a chance this will get really big, you don't want to manage 275 branches. However, if it's so specialised that you have to go down to the user-control level for each client and "users can't design their own pages", then having Nockawa's branching strategy for the front-end is perfectly reasonable.

Visual Studio branches

In our company we are developing a normal ASP.NET application.
Now we need to turn the application into a cloud application that will run under Windows Azure.
So we will have two versions of the application:
Normal installation on IIS
Runs under Windows Azure
My question is how to manage the TFS branches. Should I create two TFS branches, one for each version, and make each change twice, or is there an alternative way to handle this problem?
Thank you in advance for your help.
We did a project like this, where two versions of the application (regular IIS deployment and Azure) had to be maintained in parallel. Although there were substantial differences between the two versions, we used a single code base. This worked out pretty well; I think we would have had more problems if we had decided to go with branches.
A few hints to make it easier to use a single code base across legacy and Azure deployments:
1: Differences in behavior in the code are easy to handle with a dynamic check:
if (RoleEnvironment.IsAvailable)
{
    // Azure-specific code
}
else
{
    // normal IIS code
}
Any differences in the UI can be handled the same way, by hiding/showing elements on the page.
2: Create separate project and solution configurations for a) IIS production deployment, b) IIS debugging, c) Azure production and d) DevFabric. Use web.config transforms to get around any differences in web.config.
3: For debugging under DevFabric, the base (i.e. non-transformed) version of web.config is used. I found it easier to make the base web.config usable unmodified in the DevFabric environment (i.e. the transform you would create for DevFabric would be empty). This makes debugging under DevFabric easy. The side effect is that it makes it impossible to debug under Callipso. As a workaround for the Callipso problem, set up normal IIS on your dev box and use WebDeploy to publish the package built with the IIS debug configuration to the local IIS instance.
If the differences between the two versions are small, consider using conditional compilation to switch between platforms - this eliminates the need to branch and makes it easy to see when you're working on parts of the code that differ per platform. Similarly, you can use abstract classes with a concrete implementation for each platform, which is a much cleaner approach than using #if on lots of small chunks of code.
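A rough sketch of the abstract-class approach combined with a single #if (type names are illustrative; the AZURE symbol would be defined only in the Azure build configuration):

// AZURE is defined in the Azure solution configuration only
// (Project properties -> Build -> Conditional compilation symbols).
public abstract class StorageProvider
{
    public abstract void Save(string path, byte[] content);

    // Keep the #if in one factory method instead of scattering it through the code.
    public static StorageProvider Create()
    {
#if AZURE
        return new BlobStorageProvider();        // Azure-specific implementation
#else
        return new FileSystemStorageProvider();  // plain IIS implementation
#endif
    }
}

public class FileSystemStorageProvider : StorageProvider
{
    public override void Save(string path, byte[] content)
    {
        System.IO.File.WriteAllBytes(path, content);
    }
}

#if AZURE
public class BlobStorageProvider : StorageProvider
{
    public override void Save(string path, byte[] content)
    {
        // upload to blob storage here (omitted for brevity)
    }
}
#endif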
If branching, then I'd use one of two approaches: if the differences are isolated, possibly consider refactoring the code to collect the differences into a small area of the codebase, and just branch that bit. Or insert a root level folder and branch there so that absolutely everything is branched.
When you make changes in one branch you will have to merge those changes across to the other branch, which is why I'd try to minimise the scope of the branches, to minimise the need to merge.

Best Practice for Git Repositories with multiple projects in traditional n-tier design

I'm making the switch from a centralized SCM system to Git. OK, I'll admit which one: it is Visual SourceSafe. So in addition to getting over the learning curve of Git commands and workflow, the biggest issue I'm currently facing is how to migrate our current repository to Git with regard to using a single repository or some flavor of multiple repositories.
I've seen this question asked in a variety of ways, but normally just the basic..."I have applications that want to share some lower level libraries" and the canned response is always "use separate repositories" and/or "use Git submodules" without much explanation of when/why this pattern should be used (what does it overcome, what does it eliminate?) From my limited knowledge/reading on Git so far, it appears that submodules may have their own demons to battle, especially for someone new to Git.
However, what I've yet to see someone blatantly ask is, "When you have the traditional n-tier development (UI, Business, Data, and then Shared Tools) where each layer is its own project, should you use one or multiple repositories?" It is not clear to me because almost always, when a new 'feature' is added, code changes ripple through each layer.
To complicate matters with respect to Git, we've duplicated these layers across 'frameworks' to make more manageable projects/components from a developer's perspective. For the purpose of this discussion, lets call these collection of projects/layers 'Tahiti', which represents an entire 'product'.
The final 'layer' in our set up is the addition of client websites/projects which customize/expand upon Tahiti. Representing this in a folder structure might best look like:
/Clients
/Client1
/Client2
/UI Layer
/CoreWebsite (views/models/etc)
/WebsiteHelper (contains 'web' helpers appropriate for any website)
/Tahiti.WebsiteHelpers (contains 'web' helpers only appropriate for Tahiti sites)
/BusinessLayer (logic projects for different 'frameworks')
/Framework1.Business
/Framework2.Business
/DataLayer
/Framework1.Data
/Framework2.Data
/Core (projects that are shared tools useable by any project/layer)
/SharedLib1
/SharedLib2
After explaining how we've expanded on the traditional n-tier design with multiple projects, I'm looking for any experience of the decisions you've made in a similar situation (even if the simple UI, Business, Data separation was all you used) and what was easier/harder because of your decision. Am I right in my preliminary reading that submodules can be a bit of a pain? More pain than the benefit is worth?
My gut reaction is to use one repository for Tahiti (all projects excluding the 'client projects'), then one repository for each client. The entire Tahiti source, I'm guessing, has to be <10k files. Here is my reasoning (and I welcome criticism):
It seems to me that in Git you want to track the history of 'features' vs individual 'projects/files', and even with our project separation, a 'feature' will always span multiple projects.
A 'feature' coded in the core site will almost always affect, at a minimum, the core website and all projects for a 'framework' (i.e. CoreWebsite, Framework1.Business, Framework1.Data)
A feature can easily span multiple frameworks (I'd say 10% of the features we implement would span frameworks - CoreWebsite, Framework1.Business, Framework1.Data, Framework2.Business, Framework2.Data)
In a similar fashion, a feature could require changes to 1 or more SharedLib projects and/or the 'UI website helper' projects.
Changes to client's custom code will almost always only be local to their repository and not require tracking changes to other components to see what the 'entire feature change set' was.
Given that a feature spans projects, if each project were its own repository, wouldn't it be a pain to analyze all the code changes across repositories in order to see a feature's entire scope?
Thanks in advance.
The reason most people advise separate repositories is that it separates out changes and change sets. If someone makes a change to the client projects (which you say doesn't really affect the others), there is no reason for anyone to update the entire code base. They can simply get the changes from the project(s) they care about.
Git Submodules are like Externals in Subversion. You can set up your git repositories so that each one is a separate layer, and then use submodules to include the projects that are needed in the various hierarchies you have.
So if for example:
/Core -- Its own git repository that contains its base files (as you had outlined)
/SharedLib1
/SharedLib2
/UI Layer -- Own git repository
/CoreWebsite
/WebsiteHelper
/Tahiti.WebsiteHelpers
/Core -- Git Submodule to the /Core repository
/SharedLib1
/SharedLib2
This ensures that any updates to the /Core repository are brought into the UI Layer repository. It also means that if you have to update your shared libraries, you don't have to do it across 5-6 projects.
VS 2022 supports multi-repo.
The easiest way to enable multi-repo support is to use CTRL+Q, type "preview" and open the preview features pane. Scroll to "Enable multi-repo support" and toggle the checkbox. This functionality is still a preview feature, which means we are working hard to add more support in the coming releases. In the meantime, we're depending on your feedback, the community, to build what you need.
See Screenshot:
https://devblogs.microsoft.com/visualstudio/multi-repo-support-in-visual-studio/
