Placement of interfaces in a Visual Studio solution

What is best practice with regard to the placement of interface types?
Often the quickest and easiest thing to do is to place the interface in the same project as its concrete implementations. However, if I understand things correctly, this makes you more likely to end up with project dependency issues. Is that a fair assessment of the situation, and if so, does it mean interfaces should be separated out into their own projects?

It depends on what you want to do. You're correct that placing interfaces and classes in the same assembly will somewhat limit the usefulness of the abstraction those interfaces provide. For example, if you want to load types into an AppDomain with the intention of unloading them again later, you would typically access the instances via their interfaces. However, if the interfaces and classes live in the same assembly, you can't load the interfaces without also loading the classes.
Similarly, if you later want to supply a different set of classes for one or more of the interfaces, you will still drag in all the old types if they sit in the same assembly as the interfaces.
That said, I must admit that I do place interfaces and classes in the same assembly from time to time, simply because I don't expect to need the flexibility and prefer to keep things simple. As long as you retain the option to rebuild everything, you can move the interfaces later if the need arises.

In a simple solution, I might have public interfaces and public factory classes, and internal implementation classes all in the same project.
In a more complicated solution, to avoid a situation where project A depends on the interfaces in project B while project B depends on the interfaces defined in project A, I might move the interfaces into a separate project that itself depends on nothing and that all other projects can depend on.
I practice the maxim that "big systems can't be created from scratch: big systems which work are invariably found to have evolved from small systems which worked." So I might well start with a small, simple solution with the interfaces in the same project as the implementation, and later (if and when it proves necessary) refactor to move the interfaces into a separate assembly.
Then again, there's packaging: you might develop separate projects and then repackage everything into a single assembly when you ship.

It is a deployment detail. There are a few cases where you have to put an interface type in its own assembly: definitely when using interfaces in plug-in development, or in any other kind of code that runs in multiple AppDomains; almost definitely when using Remoting or any other kind of connected architecture.
Beyond that, it doesn't matter so much anymore. An interface type is just like any other class; you can add an assembly reference if you need it in another project. Keeping interfaces separate can, however, help control versioning. It is a good idea to keep them separate if a change in an interface type can lead to widespread changes in the classes that implement it. Bumping the [AssemblyVersion] when you make such a change then helps you troubleshoot deployment issues where you forgot to update a client assembly.

Related

.NET Core 1.1 allowing access to indirect dependencies' types

I created a new .NET Core 1.1 solution and noticed an odd behavior: if I create multiple projects in the solution and chain-reference them, I'm able to freely access types located in a dependency of a dependency, any level down.
This is an example:
We have a Sandbox solution, a Baz class library project, a Bar class library project referencing Baz and a Foo console app project referencing Bar.
From the Foo project I'm able to access and use BazThing, a type defined in the Baz project, even though Foo doesn't have a reference to Baz.
This works with NuGet packages too: if I add Entity Framework to Baz through NuGet I'm able to use the DbContext type in the Foo project.
This is a huge problem when the projects are used to implement layer segregation.
Developers can now access and use implementation details of their dependencies' dependencies, or bypass layer segregation, both "maliciously" and by mistake (in the example above, IntelliSense will happily suggest BazThing when you type ba, without any warning).
Is this how things will work from now on, or are we missing something?
Is it possible to prevent/inhibit this behavior somehow?
Where can I find documentation about this behavior?
That is the intended behavior of modern NuGet/MSBuild. The feature is called transitive dependencies (closely related to meta-packages) and is used, for example, by the NETStandard.Library package to bring in all the libraries of the base class library. I do not think there is a way of hiding them. Expect this feature to be added to full .NET Framework projects in upcoming releases of VS.
Aside from your questions, my personal opinion is that hiding artifacts behind reference trees may look useful at first sight but does not bring any architectural benefit. A loaded assembly can be invoked, one way or the other. Layering, layer bridging, and adherence to architecture can only be taught, learned, reviewed, and documented. Teach the devs, but do not build walls around them. Discipline should not be enforced by the artifacts but by the devs themselves.

And the refactor begot a library. Retest?

I understand this is a subjective question and, as such, may be closed but I think it is worth asking.
Let's say that, when building an application using TDD, a library emerges during a refactor. If you yank the code out of your main application and place it into a separate assembly, do you take the time to write tests that cover that code, even though your main application already tests it? (It's just a refactor.)
For example, in the NerdDinner application, we see wrappers for FormsAuthentication and MembershipProvider. These objects would be very handy across multiple applications and so they could be extracted out of the NerdDinner application and placed into their own assembly and reused.
If you were writing NerdDinner today, from scratch, and you noticed that you had a grab-bag of really useful wrappers and services that you were bringing into a new assembly, would you create new tests that fully cover the new assembly, possibly duplicating existing tests? Or is it enough to say that, if your main application runs green on all its tests, your new assembly is effectively covered?
While my example with NerdDinner may be too simplistic to really worry about, I am thinking more of larger APIs or libraries. So, do you write tests to re-cover what you tested before (which may be a problem, because you will probably start with all your tests passing), or do you just write tests as the new assembly evolves?
In general, yes, I'd write tests for the new library, but it depends very much on the time constraints. At the least, I'd go through and update the existing unit tests to refer properly to the refactored components; that alone might resolve the question.

Using T4 templates to generate ViewModels

In my mind this sounds like a superb idea, and using the EnvDTE API would make it possible too, so why aren't there more examples of this available?
Maybe I'm missing a disadvantage of doing this...?
Any pointer to good T4 and EnvDTE resources would be great. :)
You probably don't see it around much because it's actually quite difficult to implement well. I've been using T4 to generate model classes from WCF DTOs for use in a WinForms MVP variant for a while now, and it took quite some time to get it working right.
Using a class as a "data" source for a template is pretty difficult in and of itself. You'll need to choose between using reflection (or a similar API) to read compiled IL, or CodeDom to read the source code. If you choose to work with compiled assemblies, you'll need to contend with problems like file locking and loading referenced assemblies. If you choose to work with source code, you'll need to deal with potentially uncompilable code.
Once you've made that decision, copying properties will be the most trivial thing you need to do. You'll also need to decide which interfaces and attributes (if any) on the source class should be reimplemented on, or copied to, the generated class. Depending on how you implement things like validation, this can raise all sorts of little, picky problems. There are also a lot of fun decisions to make around how to handle inheritance hierarchies and references to other model classes.
All of the above is addressable, but a one-size-fits-all approach would be pretty hard to implement. Returning to the "example" part of your question, there's also the potential issue of doing quite so much work without getting paid for it. I'd love to be able to share the T4 template I created for model generation, but it belongs to my employer, and I have better things to do with my spare time than re-implement the approach for posting on the web...
Using a class as a "data" source for a template is pretty difficult
This is wrong. Look at ASP.NET MVC 3 scaffolding:
http://blog.stevensanderson.com/2011/01/13/scaffold-your-aspnet-mvc-3-project-with-the-mvcscaffolding-package/

Ruby: targeting multiple platforms in one project

I am creating an [Iron]Ruby project that needs to support several environments (more specifically, WPF, Silverlight, and WinForms - but that's not as important). The following is an actual, concrete example of where I'm stuck:
I have to implement a Bitmap class as part of a library, and this class will need to be implemented differently depending on the environment it's running in (e.g. if I'm running in the browser as a Silverlight app, I won't have access to methods that would be available on the desktop). And here's the catch: I don't control the instantiation of Bitmap, nor of any of the other classes within the library. Why? Because it's a port of another application, and while I do have the code for that application, I don't want to break compatibility by changing it. I do, however, control the entry point to the application, so I can require whatever I need, perform setup, configure global variables, etc.
Edit: If you're curious, this is the project I'm working on:
http://github.com/cstrahan/open-rpg-maker
Here's what I want to know:
How should I set the configuration at startup, such that Bitmap will behave appropriately?
How should I structure this in my git repo / source tree?
Here are some of my thoughts, but I'm sure you'll have better ideas:
How should I set the configuration at startup?
When distributing the app, place a require at the top depending on the targeted environment, like so: require 'silverlight/bitmap'. In this case, lib/bitmap.rb would be empty, while lib/silverlight/bitmap.rb would contain the implementation. Or...
Stuff all implementations into lib/bitmap.rb and conditionally execute based on a class instance variable or constant, e.g. Bitmap.impl = "silverlight". Or...
Maintain a separate branch for each distro - despite the library being almost exactly the same.
How should I structure this in my git repo / source tree?
Separate branches per distribution. Or...
Separate implementation-specific subfolders (e.g. lib/silverlight/bitmap.rb).
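For what it's worth, the second option (conditional definition driven by a flag the entry point sets) can be sketched roughly as follows. The Bitmap API shown here is hypothetical, just to illustrate the shape of the approach:

```ruby
# Hypothetical sketch: each per-distribution entry point sets PLATFORM,
# and lib/bitmap.rb defines the matching Bitmap implementation.

PLATFORM = :silverlight  # the Silverlight entry point would set this

case PLATFORM
when :silverlight
  # Browser sandbox: no direct file system access.
  class Bitmap
    def save(path)
      raise NotImplementedError, "no file system access in the browser"
    end
  end
else
  # Desktop (WPF/WinForms): full file system access.
  class Bitmap
    def save(path)
      "saved to #{path}"  # real code would write the pixels to disk here
    end
  end
end
```

The require-based variant (option 1) keeps each implementation in its own file instead, which avoids even parsing code for platforms you will never run on; the trade-off is that every entry point must know the right load path.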
Being very new to Ruby, I'm not very familiar with such best practices (I'm coming from C#). Any advice would be greatly appreciated!
-Charles

Should interfaces be in a separate project from their implementation?

My question isn't so much about use of interfaces but more of a project organization nature.
Note: I am using Visual Studio in a multi-layered application.
Should my interface files live in a separate project from their implementations? My initial thought is that it would be useful to separate all my service interfaces into their own project (and to have a project for my initial implementations), so that down the road the concrete implementation project could be removed and replaced with a new one if necessary.
To clarify with an example: suppose I have a business-layer interface called IBusinessService that lives in the MyApp.Business.Services namespace. My implementation FooBusinessService would exist in the same namespace but in a different Visual Studio project. If the implementation later needed to be reworked, a developer could remove the reference to FooService.proj and replace it with a reference to BarService.proj.
This seems like it would declutter the solution by allowing you to reference a project containing only interfaces, without also pulling in concrete implementations (which may be obsolete or of no use to you). Am I missing something?
I'm with you. I prefer to put my interfaces in a separate project AND in a different namespace. The classic example is data access classes: you want to be able to code an MSSQL version and a MySQL version that both implement the same interface. For that reason, I prefer the interface definitions to live in a separate assembly/project. Here's an example of how I lay out assemblies and namespaces:
Elder.DataAccess.Core - contains the interfaces and common utilities
Elder.DataAccess.MSSQL - specific MSSQL implementations of the interfaces
Elder.DataAccess.MySQL - specific MySQL implementations of the interfaces
This allows me to modify the implementations without touching the project that contains the interface definitions, which helps me with version control and change tracking too. There might be other ways to skin this cat, so I'll be eager to see other folks' answers.