In my mind this sounds like a superb idea. Using the EnvDTE would make this possible too, so why aren't there more examples of this available?
Maybe I'm missing a disadvantage of doing this...?
Any pointer to good T4 and EnvDTE resources would be great. :)
You probably don't see it around much because it's actually quite difficult to implement well. I've been using T4 to generate model classes from WCF DTOs for use in a WinForms MVP variant for a while now, and it took quite some time to get it working right.
Using a class as a "data" source for a template is pretty difficult in and of itself. You'll need to choose between using reflection (or a similar API) to read the compiled IL, or CodeDom to read the source code. If you choose to work with compiled assemblies, you'll need to contend with problems like file locking and loading referenced assemblies. If you choose to work with source code, you'll need to deal with potentially uncompilable code.
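To make the reflection route concrete, here is a minimal sketch of a T4 template that loads a compiled DTO assembly and copies its public properties onto generated model classes. The assembly path and the Dto/Model naming convention are assumptions for illustration, not anything from this answer:

    <#@ template debug="false" hostspecific="true" language="C#" #>
    <#@ import namespace="System.Reflection" #>
    <#@ output extension=".cs" #>
    <#
        // Hypothetical path to the compiled DTO assembly. Loading from a
        // byte array rather than the file avoids locking the DLL.
        // (Resolving referenced assemblies is left out; as noted above,
        // that is one of the hard parts.)
        var bytes = System.IO.File.ReadAllBytes(
            Host.ResolvePath(@"..\Dtos\bin\Debug\Dtos.dll"));
        var assembly = Assembly.Load(bytes);

        foreach (var type in assembly.GetTypes())
        {
            if (!type.Name.EndsWith("Dto")) continue;
    #>
    public partial class <#= type.Name.Replace("Dto", "Model") #>
    {
    <#
            foreach (var prop in type.GetProperties())
            {
    #>
        public <#= prop.PropertyType.Name #> <#= prop.Name #> { get; set; }
    <#
            }
    #>
    }
    <#
        }
    #>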
Once you've made that decision, copying properties will be the most trivial thing you'll need to do. You'll also need to decide which interfaces and attributes (if any) on the source class should be reimplemented or copied on the generated class. Depending on how you're implementing things like validation, this can raise all sorts of little, picky problems. There are also a lot of fun decisions to make around how to handle inheritance hierarchies and references to other model classes.
All of the above is addressable, but a one-size-fits-all approach would be pretty hard to implement. Returning to the "example" part of your question, there's also the potential issue of doing quite so much work without getting paid for it. I'd love to be able to share the T4 I created for model generation, but it belongs to my employer, and I have better things to do with my spare time than re-implement the approach for posting on the web...
Using a class as a "data" source for a template is pretty difficult
This is wrong; look at ASP.NET MVC 3 scaffolding:
http://blog.stevensanderson.com/2011/01/13/scaffold-your-aspnet-mvc-3-project-with-the-mvcscaffolding-package/
Related to my previous question: if I define an interface, I comment its members. I then don't comment the implementing class's implementation unless there is a reason the original comment is no longer valid.
ReSharper is fine with this; Visual Studio flags it as a warning.
Importantly, the inherited comments are displayed through IntelliSense when you work with the members, which is pretty much my only real concern.
What are your thoughts on this?
Thanks
Adding comments to your code is always good practice. If a component is a private or internal class, and it will always be exposed via a known interface or abstract class that has all comments in place, then you may only need to comment specific things on the implementation of that class (for example, if more than one person is going to look at the code, or if you happen to return to your code after a few years). That way it will be easier to understand what the code does and why. If you have enabled XML documentation generation when you build the project, Visual Studio will warn you about undocumented members.
I also receive ReSharper warnings on some classes when I enable XML documentation generation, but ReSharper warns only about items with public visibility.
To shorten the documentation work, I'd recommend commenting public classes and interfaces first (especially if you are releasing a product library), and if there is enough time, the internal/private ones. If you decide not to comment the latter, just make sure you or anyone who will be working with the code will easily understand the logic and the reasons behind it.
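As a small illustration of that advice (the type names here are made up): with XML documentation output enabled, the compiler warns about undocumented public members, so the public surface is the place to spend the effort first.

    /// <summary>A bag of user preferences.</summary>
    public class Settings { }

    /// <summary>Persists user settings between sessions.</summary>
    public interface ISettingsStore
    {
        /// <summary>Saves the given settings to the backing store.</summary>
        /// <param name="settings">The settings to persist.</param>
        void Save(Settings settings);
    }

    // With XML documentation generation enabled, leaving this public
    // type undocumented produces compiler warning CS1591.
    public class FileSettingsStore : ISettingsStore
    {
        public void Save(Settings settings) { /* write to disk */ }
    }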
I just ran into the same "issue", and I think Visual Studio is properly reporting a code smell.
Your issue, if I understand correctly, is that it isn't DRY to have what amounts to the same comments on your interface and on your implementation. That makes a lot of sense: much of the time, especially when you're mocking and testing, code is going to use the interface rather than the implementation when you have one. Why duplicate?
Well, I bet your class is marked public. If that's the case, the class CAN be used without the interface by outside code. Those external users deserve some comments, and you never know when you have extra public methods that aren't captured in the list of interfaces you're implementing. Comment it up!
If you don't want to make these comments, however (at least in VS 2017; I realize you were using 2013, which I don't have handy), you can mark the implementing class internal and skip the comments.
And then your unDRY comment problem is solved.
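Putting both answers together in a sketch (names invented for illustration): either hide the implementation so only the documented interface is public, or, if your documentation tooling supports it, use <inheritdoc/> to reuse the interface comments rather than duplicating them.

    /// <summary>Sends notification e-mails.</summary>
    public interface IMailer
    {
        /// <summary>Sends a message to the given address.</summary>
        void Send(string to, string body);
    }

    // Option 1: an internal implementation, only reachable through the
    // documented interface, so no comment duplication is demanded.
    internal class SmtpMailer : IMailer
    {
        public void Send(string to, string body) { /* ... */ }
    }

    // Option 2: inherit the interface documentation instead of copying
    // it (supported by tools such as Sandcastle and by newer compilers).
    public class LoggingMailer : IMailer
    {
        /// <inheritdoc/>
        public void Send(string to, string body) { /* ... */ }
    }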
I have used T4 to generate partial classes from some input file (XML, etc.) and then hand-coded additional partial bits onto those generated classes.
Is it possible to go the other way: to hand-craft partial classes, and use T4 to add templated boilerplate bits to them?
Obviously I can't use reflection to look for the classes, since the code isn't compiled yet, but I've seen Visual Studio inspect uncompiled code for various utilities. Perhaps Visual Studio offers some feature to support this that I don't know about. Long shot, I guess.
Thanks
You can also use T4 with Visual Studio's CodeModel to read the code in your project without compiling, and then generate from that metadata.
There are some pointers to examples here: http://blogs.msdn.com/b/garethj/archive/2009/09/25/dte-and-t4-better-together.aspx
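As a rough sketch of that approach (not taken from the article, and with error handling and nested types left out): a host-specific template asks Visual Studio for its DTE object and then walks the CodeModel of the current project, no compilation required.

    <#@ template debug="false" hostspecific="true" language="C#" #>
    <#@ assembly name="EnvDTE" #>
    <#@ import namespace="System" #>
    <#@ import namespace="EnvDTE" #>
    <#@ output extension=".cs" #>
    <#
        // Get the DTE from the template host (works only inside VS).
        var provider = (IServiceProvider)Host;
        var dte = (DTE)provider.GetService(typeof(DTE));
        var project = dte.Solution
                         .FindProjectItem(Host.TemplateFile)
                         .ContainingProject;

        foreach (ProjectItem item in project.ProjectItems)
        {
            if (item.FileCodeModel == null) continue;
            WalkElements(item.FileCodeModel.CodeElements);
        }
    #>
    <#+
        // Recurse through namespaces looking for class declarations.
        private void WalkElements(CodeElements elements)
        {
            foreach (CodeElement element in elements)
            {
                if (element.Kind == vsCMElement.vsCMElementClass)
                    WriteLine("// found class: " + element.FullName);
                else if (element.Kind == vsCMElement.vsCMElementNamespace)
                    WalkElements(((CodeNamespace)element).Members);
            }
        }
    #>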
Actually, T4 is used this way frequently. Yes, it requires reflection, but partial classes compile even if bits of them aren't generated yet. I would look at the examples of generating strongly typed views described here to see reflection used to generate new files.
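In other words, a pairing like this compiles even while the generated half is missing (the class and property names are invented):

    // Person.cs: the hand-written half.
    public partial class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    // Person.generated.cs: emitted by the T4 template. The project
    // still compiles before this file exists, because a partial class
    // is complete with however many parts are present.
    public partial class Person
    {
        public string FullName
        {
            get { return FirstName + " " + LastName; }
        }
    }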
I understand this is a subjective question and, as such, may be closed but I think it is worth asking.
Let's say that, when building an application using TDD and going through a refactor, a library appears. If you yank the code out of your main application and place it into a separate assembly, do you take the time to write tests that cover the code, even though your main application is already testing it? (It's just a refactor.)
For example, in the NerdDinner application, we see wrappers for FormsAuthentication and MembershipProvider. These objects would be very handy across multiple applications and so they could be extracted out of the NerdDinner application and placed into their own assembly and reused.
If you were writing NerdDinner today, from scratch, and you noticed that you had a grab-bag of really useful wrappers and services and you bring them into a new assembly, do you create new tests that fully cover your new assembly--possibly having repeat tests? Is it enough to say that, if your main application runs green on all its tests, your new assembly is effectively covered?
While my example with NerdDinner may be too simplistic to really worry about, I am thinking more about larger APIs or libraries. So, do you write tests to re-cover what you tested before (which may be a problem, because you will probably start with all your tests passing), or do you just write tests as the new assembly evolves?
In general, yes, I'd write tests for the new library; BUT it's very dependent upon the time constraints. At the least, I'd go through and refactor the unit tests that exist to properly refer to the refactored components; that alone might resolve the question.
What is best practice with regard to the placement of interface types?
Often the quickest and easiest thing to do is place the interface in the same project as its concrete implementations. However, if I understand things correctly, this means you are more likely to end up with project dependency issues. Is this a fair assessment of the situation, and if so, does it mean interfaces should be separated out into different projects?
It depends on what you want to do. You're correct that placing interfaces and classes in the same assembly will somewhat limit the usefulness of the abstraction those interfaces provide. For example, if you want to load types into an AppDomain with the intention of unloading them again, you would typically access instances via the interfaces; but if interfaces and classes are in the same assembly, you can't load the interfaces without loading the classes as well.
Similarly, if you later want to supply a different set of classes for one or more interfaces, you will still get all the old types if they live in the same assembly as the interfaces.
With that said I must admit that I do place interfaces and classes in the same assembly from time to time simply because I don't think that I will need the flexibility, so I prefer to keep things simple. As long as you have the option to rebuild everything you can rearrange the interfaces later if the need arises.
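A sketch of the split being described, with invented assembly and type names: the contracts assembly depends on nothing, so implementations can be swapped or loaded in isolation.

    // Hypothetical layout.
    // Contracts.dll: depends on nothing, so it can be loaded on its own.
    namespace Contracts
    {
        public interface IRepository
        {
            void Save(object entity);
        }
    }

    // SqlImpl.dll: references only Contracts.dll.
    namespace SqlImpl
    {
        public class SqlRepository : Contracts.IRepository
        {
            public void Save(object entity) { /* persist somewhere */ }
        }
    }

    // Host.exe: binds to the interface and picks an implementation at
    // run time, so SqlImpl.dll can be replaced without rebuilding this.
    namespace HostApp
    {
        using System;

        internal static class Program
        {
            private static void Main()
            {
                var repo = (Contracts.IRepository)Activator.CreateInstance(
                    Type.GetType("SqlImpl.SqlRepository, SqlImpl"));
                repo.Save(new object());
            }
        }
    }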
In a simple solution, I might have public interfaces and public factory classes, and internal implementation classes all in the same project.
In a more complicated solution, to avoid a situation where project A depends on the interfaces in project B and project B depends on the interfaces defined in project A, I might move the interfaces into a separate project which itself depends on nothing and on which all other projects can depend.
I follow the principle that "big systems can't be created from scratch: big systems which work are invariably found to have evolved from small systems which worked." So I might well start with a small and simple solution with the interfaces in the same project as the implementation, and then later (if and when it's found to be necessary) refactor that to move the interfaces into a separate assembly.
Then again there's packaging; you might develop separate projects, and repackage everything into a single assembly when you ship it.
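A sketch of the "simple solution" layout mentioned above, with invented names: callers only ever see the interface and the factory, so moving the interface out later is a contained change.

    // All in one project: public contract, public factory,
    // internal implementation.
    public interface IParser
    {
        int Parse(string text);
    }

    public static class ParserFactory
    {
        // The factory is the only code that names the concrete type.
        public static IParser Create()
        {
            return new DefaultParser();
        }
    }

    internal class DefaultParser : IParser
    {
        public int Parse(string text)
        {
            return int.Parse(text);
        }
    }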
It is a deployment detail. There are a few cases where you have to put an interface type in its own assembly: definitely when using interfaces in plug-in development or any other kind of code that runs in multiple AppDomains, and almost definitely when using Remoting or any other kind of connected architecture.
Beyond that, it doesn't matter so much anymore. An interface type is just like any other class; you can add an assembly reference if you need it in another project. Keeping interfaces separate can help control versioning: it is a good idea to keep them separate if a change in an interface type can lead to widespread changes in the classes that implement it. Changing the [AssemblyVersion] when you do so then helps in troubleshooting deployment issues where you forgot to update a client assembly.
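For example (the version number is invented), in the contracts assembly's AssemblyInfo.cs:

    using System.Reflection;

    // Bump this whenever an interface changes; strong-named clients
    // built against the old version then fail to load with a clear
    // version-mismatch error instead of misbehaving at run time.
    [assembly: AssemblyVersion("1.1.0.0")]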
Visual Studio seems to want to put class constructor code and event-handling code in the .h file. I have only been involved in small one-man projects and was wondering what the general industry standard is.
For Visual C++ application projects, what code would one put in the .h file? I am used to the more classical C++ way of declaring your class in the .h file and coding in the .cpp file. Does this still apply to Visual Studio applications?
I have a strong C background which would explain my preference for this. The VSC++ compiler doesn't seem to mind.
In short: What is one supposed to put in which type of file?
TIA
Ends
There is no widely accepted industry standard. By putting (short) function definitions in the header, you give the compiler a better chance to inline the code. The benefit is that it can make the code run faster (keep those functions short, though). However, this comes at the cost of exposing more code to the clients who include that header, making you (or your colleagues) recompile more files when you change the implementation.
You also have to take into account the cost of going against your tools. Since VC++'s wizards insist on putting the functions in the headers, you have to move them every time if you disagree.
It's really project-specific, I would say.
If you're using MFC and you're talking about the generated code, it's best to leave it alone.
If you're trying to do 'normal' C++ development, put as little as you can get away with in the header, as it means client code doesn't depend on too many implementation details. What you can get away with depends a little on use of templates, and how much indirection your performance budget can support.
"For Visual C++ application projects, what code would one put in the .h file? I am used to the more classical C++ way of declaring your class in the .h file and coding in the .cpp file. Does this still apply to Visual Studio applications?"
Short: Yes
Long: It depends on the person and the language. In C++ the header is for declarations and the .cpp file is for the code. In C# you have one file (or two, if you use interfaces).
This might seem minor, but just remember: headers are #included in several places (and headers including headers complicates things further). Any time you change a header, a lot of files are going to be compiled again. Keeping as little frequently changing code in the header as possible reduces recompilation of dependent files.
Another thing: an uncluttered header file gives you a quick overview of what a class/form has to offer.