My question isn't so much about the use of interfaces as it is about project organization.
Note: I am using Visual Studio in a multi-layered application.
Should my interface files live in a separate project from their implementations? My initial thought is that it would be useful to separate all my service interfaces into their own project (plus a project for my initial implementations), so that down the road the concrete implementation project can be removed and replaced with a new one if necessary.
To clarify with an example: suppose I have a business-layer interface called IBusinessService that lives in the MyApp.Business.Services namespace. My implementation FooBusinessService would exist in the same namespace but in a different Visual Studio project. If the implementation later needed to be reworked, a developer could remove the reference to the FooBusinessService project and replace it with a reference to a BarBusinessService project.
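To make that concrete, here is a minimal sketch of the layout (everything beyond IBusinessService and FooBusinessService is invented for illustration):

    // Project: MyApp.Business.Interfaces (contains interfaces only)
    namespace MyApp.Business.Services
    {
        public interface IBusinessService
        {
            // Hypothetical member, purely for illustration.
            string GetCustomerName(int customerId);
        }
    }

    // Project: MyApp.Business.FooService (references MyApp.Business.Interfaces)
    namespace MyApp.Business.Services
    {
        public class FooBusinessService : IBusinessService
        {
            public string GetCustomerName(int customerId)
            {
                return "Foo customer " + customerId;
            }
        }
    }

Consumers would reference only the interface project at compile time, so swapping FooBusinessService for a BarBusinessService means changing a single project reference.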
This seems like it would declutter the solution by letting you reference a project containing only interfaces, without also pulling in concrete implementations (which may be obsolete or of no use to you). Am I missing something?
I'm with you. I prefer to put my interfaces in a separate project AND in a different namespace. The classic example is with data access classes. You want to be able to code an MSSQL version and a MySQL version, both implementing the same interface. As such, I prefer that the interface definition be in a separate assembly/project. Here's an example of how I lay out assemblies and namespaces:
Elder.DataAccess.Core - contains the interfaces and common utilities
Elder.DataAccess.MSSQL - specific MSSQL implementations of the interfaces
Elder.DataAccess.MySQL - specific MySQL implementations of the interfaces
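A skeletal sketch of that layout (IUserRepository and its member are invented for illustration):

    // Assembly: Elder.DataAccess.Core (interfaces and common utilities only)
    namespace Elder.DataAccess.Core
    {
        public interface IUserRepository
        {
            string FindUserName(int id);
        }
    }

    // Assembly: Elder.DataAccess.MSSQL (references only Elder.DataAccess.Core)
    namespace Elder.DataAccess.MSSQL
    {
        public class SqlServerUserRepository : Elder.DataAccess.Core.IUserRepository
        {
            public string FindUserName(int id)
            {
                // A real implementation would query SQL Server here.
                throw new System.NotImplementedException();
            }
        }
    }

    // Assembly: Elder.DataAccess.MySQL (references only Elder.DataAccess.Core)
    namespace Elder.DataAccess.MySQL
    {
        public class MySqlUserRepository : Elder.DataAccess.Core.IUserRepository
        {
            public string FindUserName(int id)
            {
                // A real implementation would query MySQL here.
                throw new System.NotImplementedException();
            }
        }
    }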
This allows me to modify the implementations without touching the project that contains the interface definitions. This helps me with version control and change tracking, too. There might be other ways to skin this cat, so I'll be eager to see other folks' answers.
I've recently had to look for a C# port of the Protocol Buffers library originally developed by Google. And guess what, I found two projects, each owned by a very well-known person here: protobuf-csharp-port, written by Jon Skeet, and protobuf-net, written by Marc Gravell. My question is simple: which one should I choose?
I quite like Marc's solution, as it seems closer to the C# philosophy (for instance, you can just add attributes to the properties of an existing class), and it looks like it can support .NET built-in types such as System.Guid.
I am sure both of them are really great projects, but what's your opinion?
I agree with Jon's points; if you are coding over multiple environments, then his version gives you a similar API to the other "core" implementations. protobuf-net is much more similar to how most of the .NET serializers are implemented, so is more familiar (IMO) to .NET devs. And as Jon notes - the raw binary output should be identical so you can re-implement with a different API if you need to later.
Some points re protobuf-net that are specific to this implementation:
works with existing types (not just generated types from .proto)
works under things like WCF and memcached
can be used to implement ISerializable for existing types
supports inheritance* and serialization callback methods
supports common patterns such as ShouldSerialize[name]
works with existing decorated types (XmlType/XmlElement or DataContract/DataMember) - meaning (for example) that LINQ-to-SQL models serialize out-of-the-box (as long as serialization is enabled in the DBML)
in v2, works for POCO types without any attributes
in v2, works in .NET 1.1 (not sure this is a huge selling feature) and most other frameworks (including monotouch - yay!)
in v2, possibly full-graph* serialization (not just tree serialization), though this is not yet implemented
(*=these features use 100% valid protobuf binary, but which might be hard to consume from other languages)
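To illustrate the attribute-driven style: the Person type below is invented, but [ProtoContract], [ProtoMember], and the Serializer calls are protobuf-net's public API, shown here as a rough sketch:

    using System;
    using System.IO;
    using ProtoBuf; // protobuf-net

    [ProtoContract]
    public class Person
    {
        [ProtoMember(1)] public int Id { get; set; }
        [ProtoMember(2)] public string Name { get; set; }
    }

    public static class Program
    {
        public static void Main()
        {
            var person = new Person { Id = 42, Name = "Fred" };
            using (var ms = new MemoryStream())
            {
                // Write the protobuf wire format...
                Serializer.Serialize(ms, person);
                ms.Position = 0;

                // ...and read it straight back.
                var copy = Serializer.Deserialize<Person>(ms);
                Console.WriteLine(copy.Name); // Fred
            }
        }
    }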
Are you using other languages in your project as well? If so, my C# port will let you write similar code on all platforms. If not, Marc's port is probably more idiomatic C# to start with. (I've tried to make my code "feel" like normal C#, but the design is clearly based on the Java code to start with, deliberately so that it's familiar to those using Java as well.)
Of course one of the beauties of this is that you can change your mind later and be confident that all your data will still be valid via the other project - they should be absolutely binary compatible (in terms of serialized data), as far as I'm aware.
According to its GitHub project site, protobuf-csharp-port has now been folded into the main Google Protocol Buffers project, so it will be the official .NET implementation of protobuf 3. protobuf-net, however, was last updated in 2013, although there have been some recent commits on GitHub.
I just switched from protobuf-csharp-port to protobuf-net because:
protobuf-net is more ".NET like", i.e. you decorate the members to serialise with attributes instead of relying on code generation.
If you want to compile protobuf-csharp-port .proto files you have to use a two-step process, i.e. compile with protoc to .protobin and then compile that with protoGen. protobuf-net does this in one step.
In my case I want to use Protocol Buffers to replace an XML-based communication model between a .NET client and a J2EE backend. Since I'm already using code generation, I'll go for Jon's implementation.
For projects not requiring Java interop I'd choose Marc's implementation, especially since v2 allows working without annotations.
I created a new .NET Core 1.1 solution and noticed an odd behavior: if I create multiple projects in the solution and chain-reference them, I'm able to freely access types located in a dependency of a dependency, any level down.
This is an example:
We have a Sandbox solution, a Baz class library project, a Bar class library project referencing Baz and a Foo console app project referencing Bar.
From the Foo project I'm able to access and use BazThing, a type defined in the Baz project, even though Foo doesn't have a reference to Baz.
This works with NuGet packages too: if I add Entity Framework to Baz through NuGet I'm able to use the DbContext type in the Foo project.
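A minimal sketch of the situation (BazThing is from the example above; the remaining names are illustrative):

    // Baz class library
    namespace Baz
    {
        public class BazThing { }
    }

    // Bar class library, references Baz
    namespace Bar
    {
        public class BarThing
        {
            public Baz.BazThing Thing { get; set; }
        }
    }

    // Foo console app, references Bar only
    namespace Foo
    {
        public static class Program
        {
            public static void Main()
            {
                // Compiles even though Foo never references Baz directly.
                var thing = new Baz.BazThing();
                System.Console.WriteLine(thing);
            }
        }
    }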
This is a huge problem when the projects are used to implement layer segregation.
Developers are now able to access and use implementation details of their dependencies, or to bypass layer segregation, both "maliciously" and by mistake (in the above-mentioned example, IntelliSense will happily suggest BazThing when typing ba, without any warning).
Is this how things will work from now on, or are we missing something?
Is it possible to prevent/inhibit this behavior somehow?
Where can I find documentation about this behavior?
That is the intended behavior of modern NuGet/MSBuild. It is called meta-packages/transitive dependencies and is used, for example, by the NETStandard.Library package to include all the libraries of the base class library. I do not think there is a way of hiding them. Expect this feature to be added to full .NET Framework projects within the next releases of VS.
Aside from your questions, my personal opinion here is that hiding artifacts behind reference trees may look useful at first sight but does not bring any architectural benefit. A loaded assembly can be invoked, one way or the other. Layering, layer bridging, and adherence to the architecture can only be taught/learned/reviewed/documented. Teach the devs, but do not build walls around them. Discipline should not be enforced by the artifacts but by the devs themselves.
I'm currently using a method where I have a "base" file that defines the types, interfaces, and a basic API for the package. I then create a _windows.go and a _linux.go file and add platform-specific types to which I can apply the interface. The setup is basically like this: http://play.golang.org/p/2DJxTuSAIh.
Is this considered best practice?
Would this assist in a team setting where some developers are linux focused and some windows focused, i.e. if the interface changes both teams will be notified via build failure?
The use of interfaces is an orthogonal concept. Use an interface where an interface makes sense, but it's often simpler to just provide an implementation by the same name in the proper GOOS and GOARCH files.
The method of using a common constructor name (from your example) is also used in places in the std lib, as is the method of assigning a global variable name to a function (which is similar in concept to the former method).
Because Go is statically typed and you can't redeclare global identifiers, the build system will always catch problems; it's just a matter of testing on all applicable systems to ensure that no OS or ARCH has an out-of-date implementation.
What is best practice with regard to the placement of interface types?
Often the quickest and easiest thing to do is to place the interface in the same project as its concrete implementations. However, if I understand things correctly, this means you are more likely to end up with project dependency issues. Is this a fair assessment of the situation, and if so, does it mean interfaces should be separated out into different projects?
It depends on what you want to do. You're correct that placing interfaces and classes in the same assembly will somewhat limit the usefulness of the abstraction those interfaces provide. E.g. if you want to load types in an AppDomain with the purpose of unloading them again, you would typically access instances via the interfaces; however, if interfaces and classes are in the same assembly, you can't load the interfaces without loading the classes as well.
Similarly if you at a later point want to supply a different set of classes for one or more interfaces you will still get all the old types if they are in the same assembly as the interfaces.
With that said I must admit that I do place interfaces and classes in the same assembly from time to time simply because I don't think that I will need the flexibility, so I prefer to keep things simple. As long as you have the option to rebuild everything you can rearrange the interfaces later if the need arises.
In a simple solution, I might have public interfaces and public factory classes, and internal implementation classes all in the same project.
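A minimal sketch of that arrangement, with invented names:

    namespace MyLib
    {
        // Public contract that callers program against.
        public interface IWidget
        {
            void Run();
        }

        // Public factory: the only way callers obtain an instance.
        public static class WidgetFactory
        {
            public static IWidget Create()
            {
                return new DefaultWidget();
            }
        }

        // The implementation stays internal, invisible outside the assembly.
        internal class DefaultWidget : IWidget
        {
            public void Run()
            {
                // Real work would go here.
            }
        }
    }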
In a more complicated solution, then to avoid a situation where project A depends on the interfaces in project B, and project B depends on the interfaces defined in project A, I might move the interfaces into a separate project which itself depends on nothing and which all other projects can depend on.
I practice "big systems can't be created from scratch: big systems which work are invariable found to have evolved from small systems which worked." So I might well start with a small and simple solution with the interfaces in the same project as the implementation, and then later (if and when it's found to be necessary) refactor that to move the interfaces into a separate assembly.
Then again there's packaging; you might develop separate projects, and repackage everything into a single assembly when you ship it.
It is a deployment detail. There are a few cases where you have to put an interface type in its own assembly: definitely when using interfaces in plug-in development or any other kind of code that runs in multiple AppDomains, and almost definitely when using Remoting or any other kind of connected architecture.
Beyond that, it doesn't matter so much anymore. An interface type is just like any other class; you can add an assembly reference if you need it in another project. Keeping interfaces separate can help with controlling versioning: it is a good idea if a change to an interface type can lead to widespread changes in the classes that implement it. Changing the [AssemblyVersion] when you make such a change then helps you troubleshoot deployment issues where you forgot to update a client assembly.
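For what it's worth, that version bump is a one-line change in the interface assembly's AssemblyInfo.cs (the version number here is invented):

    using System.Reflection;

    // Bump this whenever the interface assembly changes, so a client built
    // against the old version fails fast at load time instead of misbehaving.
    [assembly: AssemblyVersion("2.0.0.0")]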
I am creating an [Iron]Ruby project that needs to support several environments (more specifically, WPF, Silverlight, and WinForms - but that's not as important). The following is an actual, concrete example of where I'm stuck:
I have to implement a Bitmap class as part of a library, and this class will need to be implemented differently depending on the environment it's running in (e.g. if I'm running in the browser as a Silverlight app, I won't have access to methods that would be available on the desktop). And here's the catch: I don't control the instantiation of Bitmap, nor of any of the other classes within the library. Why? Because it's a port of another application, and while I do have the code for the application, I don't want to break compatibility by changing that code. I do, however, control the entry point to the application, so I can require whatever I need, perform setup, configure global variables, etc.
Edit: If you're curious, this is the project I'm working on:
http://github.com/cstrahan/open-rpg-maker
Here's what I want to know:
How should I set the configuration at startup, such that Bitmap will behave appropriately?
How should I structure this in my git repo / source tree?
Here are some of my thoughts, but I'm sure you'll have better ideas:
How should I set the configuration at startup?
When distributing the app, place a require at the top depending on the targeted environment, like so: require 'silverlight/bitmap'. In this case, lib/bitmap.rb would be empty, while lib/silverlight/bitmap.rb would contain the implementation. Or...
Stuff all implementations in lib/bitmap.rb, and conditionally execute based on a class instance variable or constant: Bitmap.impl = "silverlight". Or...
Maintain a separate branch for each distro - despite the library being almost exactly the same.
How should I structure this in my git repo / source tree?
Separate branches per distribution. Or...
Separate implementation-specific subfolders (e.g. lib/silverlight/bitmap.rb).
Being very new to Ruby, I'm not very familiar with such best practices (I'm coming from C#). Any advice would be greatly appreciated!
-Charles