Is there an easy way to duplicate a class with a different name?
Not sure whether this qualifies as the easiest way, but if you have ReSharper you can use its Copy Type refactoring to copy classes/interfaces/structs, with control over the namespace the copy lands in and the naming within the copy - which means that if you're copying a class with 5 constructors, the copy will have all of them renamed to match the name of the new class.
However, depending on what you're trying to achieve, using Extract Interface or Extract Superclass might be a better option.
No, there's no refactoring tool for duplicating a class under a different name.
I would imagine that the reason this feature is not present is that duplicating code is generally considered a bad idea. I'd suggest instead changing your class into a base class and then making two derived classes from it, overriding methods where you need to change the behaviour.
Use the Rename refactoring to give the class the new name, copy, undo, then paste :-) Remember the undo! This will rename the constructors and finalizers too!
Open file in Visual Studio.
Press Ctrl+A.
Press Ctrl+C.
Create a new file for the new class.
Press Ctrl+V in the new file.
Replace all old class names with the new one. (Ctrl+H)
First of all: there is now a "Copy Class" option in the MS Power Tools extension for Visual Studio (available on the marketplace).
Second: to all the critics who know better - there are legitimate cases where you want to copy a class as a starting point. I, for example, have a base class from which I derive dozens of classes with different implementations, yet I also keep a "reference class" derived from the base which already contains the overrides, comments, etc., so I do not have to repeat the same work over and over again.
This is just one of the many situations where you want to copy a class.
It's usually better to refactor your class to be reusable. Copy-pasted code leads to having to fix the same code in multiple places, and to much pain, as you're breaking OO principles.
Related
I've discovered that I can split a class into multiple files by writing class <sameclassname>; <code>; end within each file. I've decided to divide up a very large class this way. The advantages I see:
I can have separate spec files called by guard to reduce spec time.
Forces me to organize and compartmentalize my code
Are there any pitfalls to this method? I can't find any information about people doing it.
I often do this in my gem files to manage documentation and avoid long files.
The only issues I found (which are solvable) are:
Initialization of class or module data should be thoughtfully managed. As each 'compartment' (file) is updated, the initialization data might need updating, but that data is often in a different file and we are (after all) fallible.
In my GReactor project (edit: deprecated), I wrote a different initialization method for each section and called all of them in the main initialization method; see the sketch after this list.
Since each 'compartment' of the class or module is in a different file, it is easy to forget that they all share the same namespace, so more care should be taken when naming variables and methods.
The code in each file is executed in the order the files are loaded (much like it would be if you were writing one long file)... but since you 'close' the class/module between files, your method declaration order might be important. Care should be taken when requiring the files, so that the desired order of code execution is preserved.
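As a rough sketch of that per-file initializer approach (the module and method names below are made up for illustration, not taken from GReactor): each file defines its own small initializer, and the main file requires the compartments and calls every initializer in one place.
# lib/my_reactor/io_part.rb
module MyReactor
  def self.init_io
    @io_timeout = 5
  end
end
# lib/my_reactor/queue_part.rb
module MyReactor
  def self.init_queue
    @queue = []
  end
end
# lib/my_reactor.rb -- requires each compartment, then runs every initializer
require_relative 'my_reactor/io_part'
require_relative 'my_reactor/queue_part'
module MyReactor
  def self.init!
    init_io
    init_queue
  end
end
MyReactor.init!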
The GReactor is a good example of managing a "Mega-Module" with a large API by compartmentalizing the different aspects of the module into different files.
There are no other pitfalls or issues that I have experienced.
Defining / reopening the same class in many different files makes it harder to locate the source of any given method, since there's no one clear place for it.
This also opens up the possibility of nasty loading-sequence bugs, e.g. file A is trying to call a method in file B, but file B has not loaded yet.
Having a very large class is a sign that the class is trying to do too much, and should be split up into smaller modules/subclasses. Sandi Metz's POODR recommends limiting classes to under 100 lines, among other guidelines.
In Ruby classes are never closed. What you call "splitting" is actually just reopening the class. You can reopen classes and add methods to them at any time. If you define a class in file A and include it in file B, even if you reopen the class in file B it'll still contain all the code from file A. I personally prefer only to reopen a class when I have to. It sounds like in your case, I would define my class in one file. I think this method is better organized and has a lower risk of interfering with previously defined methods. More on the subject at rubylearning.
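A minimal sketch of that behaviour (the file and class names are hypothetical): reopening only adds to the class, and requiring the second file pulls in everything from the first.
# a.rb
class Greeter
  def hello
    "hello"
  end
end
# b.rb -- reopens Greeter instead of replacing it
require_relative 'a'
class Greeter
  def goodbye
    "goodbye"
  end
end
# elsewhere: requiring b.rb alone is enough to get both methods
require_relative 'b'
Greeter.new.hello     # => "hello"   (still defined, from a.rb)
Greeter.new.goodbye   # => "goodbye"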
Here's a good collection of Ruby design patterns, or actually design pattern examples in Ruby: https://github.com/nslocum/design-patterns-in-ruby
Take a look at the decorator pattern as a good way to achieve modularity without a rigid parent<->child tree of classes.
The only pitfall is that your class is split across multiple files, which you need to manage. Users of your class would only need to require the second file, so if your class is part of a gem or some package, they probably wouldn't even be aware that it was ever reopened.
Is it good practice to create two different classes in one Ruby script file, like this?
Ruby file name - XYZ.rb
class Car
.........
.........
end
class Bike
.........
.........
end
What will be the advantage or disadvantage of this?
The main advantage of multiple classes in one file is that you can quickly try something and hack stuff together. I'd suggest not doing so when coding on a more serious project or work, because as Sergio Tulentsev stated, you can get problems in environments with autoloading. Another disadvantage is that you lose track of what classes are located in what file.
It's gonna give you problems if you use it in an environment with autoloading (e.g. Rails). Other than that, I don't see any technical reasons not to put more than one class in the same file. But I'd still have a file per class, just for convenience. It makes navigation easier, because "go to file" editor command is now effectively a "go to class". There are other benefits as well.
To summarize: in anything other than a script that you put together over a coffee-break, use file per class.
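As a rough sketch of the autoloading concern (the paths and class names are only illustrative): convention-based autoloaders look up a constant by its file name, so two classes sharing one file have to be required explicitly.
# Convention-based autoloading (e.g. Rails) maps constant names to file paths:
#   app/models/car.rb  is expected to define Car
#   app/models/bike.rb is expected to define Bike
# If Car and Bike both live in xyz.rb, the autoloader can't find either one by
# name, so the file has to be loaded explicitly:
require_relative 'xyz'
car  = Car.new
bike = Bike.new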
Try as I might to keep [SuppressMessage]s to a minimum in the codebase, new ones continue to get introduced. I would like some way to say in the code "I've reviewed this [SuppressMessage] and it is 'acceptable'".
One thought was to create my own My.CodeAnalysis.SuppressMessageAttribute class which inherits from System.Diagnostics.CodeAnalysis.SuppressMessageAttribute, but that class is sealed.
I'm sure I could cobble together something with a comment near the attribute and a post-processing step, but I'd like to stick to what is available "in the box" with Visual Studio 2010.
Even if the attribute weren't sealed, a subclassing approach would not work since the code analysis engine screens for exact SuppressMessageAttribute instances. As current versions are written, the engine would completely ignore subclass instances.
Personally, I use the following approach for management of suppressions:
All "permanent" suppressions must
have a Justification property that
explains the reason for the
suppression.
Where possible, permanent
suppressions must be placed in the
target code file. Otherwise, they
must be placed at the top of the
GlobalSuppressions file.
Temporary suppressions must all be
placed below a "Temporary
suppressions" header comment in the
GlobalSuppressions file, and they
should not be assigned a
justification.
All permanent suppressions are
subject to code review, in exactly
the same manner as the code itself.
I use difference detection to flag
code for review, so I can see which
suppressions have been added or
changed just as easily as I can see
which code has been modified.
Temporary suppressions must all be removed by a pre-defined phase of the project. After this point has passed, a rule that detects missing Justification properties is enabled.
If this sort of thing wouldn't work for you, another approach would be to use stand-alone FxCop to annotate your suppressed violations. Unfortunately, these annotations would reside outside your code base, but maybe that would suit you better if you want to prevent developers from "blessing" their own suppressions.
When you right-click on the CA warning in the errors list, one of the options given is Suppress Message(s) -> In Project Suppression File. When you select that, Visual Studio will add a line like the following to a GlobalSuppressions.cs file in your project:
[assembly: System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Naming", "CA1704:IdentifiersShouldBeSpelledCorrectly", MessageId = "Autofac")]
This keeps them grouped together and out of your code files. Is this what you are trying to get at?
Is it possible to search or filter IntelliSense in Visual Studio?
Basically I know there is an enum in the project that contains 'column', but the enum doesn't begin with 'c'.
There have been lots of times where I'd rather not scroll through the hundreds (if not thousands) of valid objects it gives me.
I wonder if the real answer here is (and I won't be surprised to be voted down for this) that your enum isn't properly named. If it were, I'd expect the name to be obvious in the usage context, so maybe consider renaming the enum?
You can search in Class View. Type "column" and hit enter.
Visual Studio 2010 changes all of this, giving you multiple very easy ways to do this type of search quickly.
If you're using ReSharper, you can use "Go To Symbol..." and type "column", and it will give you all symbols (types, properties, fields, methods, etc) that match.
Otherwise your best bet is to use the Object Browser and search.
I really don't know about doing that in IntelliSense itself, but assuming the objective is to actually find a member whose name you don't remember, you can write a small utility for that purpose using the underlying mechanism IntelliSense uses: reflection.
Open the Object Browser under the View menu. From there, you can search within all the language constructs available to you.
I have this:
SolutionName: Foo.sln
Assembly:
Foo.Bar
Namespaces are:
Foo.Bar.Views
Foo.Bar.Model
Foo.Bar.BusinessObjects
Foo.Bar.Services
Should the directory structure be like this?
Foo/Foo.Bar/Foo.Bar.Views or Foo/Bar/Views
If you keep the Visual Studio option of "Automatic Namespaces" you would need to have Foo/Bar/Views. Since this is the default behavior of Visual Studio people will be most used to this. Plus it keeps your folder names/paths from getting excessively long.
This is entirely personal preference. I would choose the latter.
Ask yourself the question, "Am I adding any useful information by repeating Foo and Bar in the sub-folders?" The answer here, in my opinion, is no, simply because the information is redundant. You've also created yourself a maintenance problem; if you need to rename Bar you now have to rename Foo.Bar, Foo.Bar.View, Foo.Bar.Model ...
Well, it can be anything you wish. Either is valid, but the former might get a little redundant and lead to directory hierarchies that are a little hard on the eyes and prohibitively long.
Foo/Bar/View seems to be natural. Also, Visual Studio tends to map project folders to namespaces (i.e. if you add an Abc folder to your project, each class you add to that folder will be placed in the root_namespace.Abc namespace).
Foo/Bar/View seems a better choice, even more so if individual files are named after the individual types (e.g. Foo.Bar.View.IView => Foo/Bar/View/IView.cs).
That's a great pattern to follow; it makes it easier to find types, script over the sources, gather some metrics, etc.