Allow underscore imports of companion object's items in Scalastyle

I'd like to make Scalastyle ignore underscore imports when it is a companion object's members that are being imported (this makes sense to me):
class Item {
  import Item._ // Scalastyle marks it as a warning
}

object Item {
  case object Nested
  def someMethod(): Unit = {..}
}
UnderscoreImportChecker is responsible for this inspection, but it has no configuration parameters:
<check level="warning" class="org.scalastyle.scalariform.UnderscoreImportChecker" enabled="true"></check>
Here is a similar question:
Companion class requires import of Companion object methods and nested objects?
Is there a way to write a custom Checker? (I'll try to investigate it.)
P.S. I'm aware that I can use //scalastyle:off or explicit imports, but that would be repetitive and inconvenient.

Related

How to handle different @Reference in OSGi DS Component

I'm having a problem with the following situation: The Server is waiting for one or more Functions. When a Function is bound the bindFunction is called. It needs to call doSomething() of any SpecificSystem.
When there is no SpecificSystem in my OSGi Container nothing happens which is good because the System Reference is not satisfied. The problem occurs when I add a SpecificSystem to my container. In that case the bindFunction is called before the System Reference is set leading to a NullPointerException inside bindFunction.
Is there any OSGi-way to make sure the System Reference is set when the bindFunction is executed so that I can safely call system.doSomething() inside the bindFunction?
You're treading in dangerous water here :-) You require ordering. Your code assumes the bindFunction reference is called after the system reference.
The OSGi specification guarantees that injection takes place in the lexical order of the reference name. (Of course, this is only true for the available services.)
The cheap way is to name your references so that the system reference's name is lexically lower than the name of the bindFunction reference, for example asystem or _system. The injection takes place in the lexical order.
This is ugly, of course. A way to handle this is to just inject the Function services and use them when needed, instead of actively doing something in your bind method. This makes things lazier, which is almost always good.
In your example it looks like the System reference is mandatory. In this case your Server component will only come up if a System service is present.
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;

@Component(name = "ServerComponent", immediate = false)
public class Server {

    // Mandatory field reference: the component is not activated until it is bound.
    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    System system;

    @Reference(cardinality = ReferenceCardinality.MANDATORY)
    protected void bindFunction(Function func) {
    }

    @Activate
    public void activate() {
        // Both mandatory references are bound before activate() runs,
        // so the injected service can safely be used here.
        system.doSomething();
    }
}
You can call doSomething() in the activate method; OSGi guarantees the method call order with the @Reference annotation.
Once the system and function references have been acquired, the activate method is called by the OSGi environment, and you can safely call system.doSomething() there. @Reference(cardinality = ReferenceCardinality.MANDATORY) means that the activate method is only called after the reference has been acquired.

What is the correct way to implement a "clone" method in languages that do not support type reflection and have no built-in cloning mechanism?

Problem background
Suppose I have a class called Transaction and a subclass called NetTransaction. The Transaction class implements a clone method which constructs a new Transaction with multiple constructor parameters.
Such a cloning pattern presents a problem for subclasses like NetTransaction, because calling super.clone would return an object of type Transaction, which cannot be cast to NetTransaction. Therefore, I'd have to reimplement (duplicate) the code from the Transaction class's clone method. Obviously, this is an unacceptable pattern.
Java's solution -- works for languages with built-in cloning logic or type reflection
In Java (so I've read), calling super.clone always returns an object of the correct type as long as every override in the chain calls super.clone, because the base Object's clone method will automatically return an object of the correct type, presumably a feature built into the runtime.
The existence of such a clone method implies that every cloneable object must have a parameterless default constructor (either explicit or implicit), for two reasons. Firstly, Object's implementation would not be capable of choosing an arbitrary constructor for a subclass it knows nothing about, hence the need for a parameterless constructor. Secondly, although a copy constructor might be the next logical choice, it implies that every object in the class chain would also have to have a copy constructor; otherwise every copy constructor would be faced with the same decision as clone (i.e. whether to call the default constructor or a copy constructor). That ultimately implies that all the cloning logic would have to be in copy constructors, which would make overriding "clone" unnecessary; therefore, we arrive at the logical conclusion that it would be self-defeating to have clone call anything other than a parameterless default constructor (i.e. the runtime would have to create an instance that requires no special construction logic to run).
So Java's cloning implementation, which also seems to provide some built-in shallow copying, is one way to implement cloning that makes sense.
Correct alternative for languages without built-in cloning or type reflection?
But what about other languages that don't have such built-in functionality and lack type reflection? How should they implement cloning? Are copy constructors the only way to go?
I think the only way that really makes sense is copy constructors, and as far as implementing or overriding a clone method for the sake of returning a common interface or base type or just "object", the correct implementation is to simply always call the current object's copy constructor. Is this correct?
The pattern would be, in C# for example:
class A
{
    public A( A original_to_copy ) { /* copy fields specific to A */ }
    public virtual object clone() { return new A( this ); }
}

class B : A
{
    public B( B original_to_copy ) : base( original_to_copy ) { /* copy fields specific to B */ }
    public override object clone() { return new B( this ); }
}

class C : B
{
    public C( C original_to_copy ) : base( original_to_copy ) { /* copy fields specific to C */ }
    public override object clone() { return new C( this ); }
}
In systems without a built-in cloning facility, there's really no alternative to having a virtual clone method chain to a copy constructor. I would suggest, however, that the copy constructor and the virtual cloning method be protected, and that the base-class copy constructor throw an exception if the exact type of the passed-in object does not match the exact type of the object under construction. Public cloning methods should not be virtual, but should instead chain to the virtual method and cast the result to their own type.
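For illustration, here is a rough sketch of that arrangement applied to the Transaction/NetTransaction classes from the question; the CloneImpl name and the choice of exception are mine, not part of any standard API:
using System;

class Transaction
{
    public Transaction() { }

    // Protected copy constructor: refuses to run if a derived class forgot to
    // supply its own CloneImpl, since in that case the runtime type of the
    // copy would not match the runtime type of the original.
    protected Transaction(Transaction original)
    {
        if (original.GetType() != GetType())
            throw new InvalidOperationException(
                original.GetType().Name + " does not provide its own CloneImpl");
        /* copy fields specific to Transaction */
    }

    // Protected virtual cloning method; every subclass overrides it to call
    // its own copy constructor.
    protected virtual Transaction CloneImpl() { return new Transaction(this); }

    // Public, non-virtual Clone chains to the virtual method.
    public Transaction Clone() { return CloneImpl(); }
}

class NetTransaction : Transaction
{
    public NetTransaction() { }

    protected NetTransaction(NetTransaction original) : base(original)
    {
        /* copy fields specific to NetTransaction */
    }

    protected override Transaction CloneImpl() { return new NetTransaction(this); }

    // Hides the base Clone and casts the result to the derived type.
    public new NetTransaction Clone() { return (NetTransaction)CloneImpl(); }
}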
When practical, one should avoid making classes which expose public cloning methods inheritable; consumers should instead refer to class instances using interface types. If some consumers of a type will need to clone it and others won't, some potential derivatives of the type could not logically be cloned, and a non-cloneable derivative should still be usable by code that doesn't need to clone it. Splitting things that way allows for the existence of BaseFoo, CloneableBaseFoo, FancyFoo, and CloneableFancyFoo types: code which needs fancy abilities but doesn't need to clone an object will be able to accept FancyFoo and CloneableFancyFoo objects, while code that doesn't need a fancy object but needs cloning ability will be able to accept CloneableBaseFoo and CloneableFancyFoo objects.

How do I discern whether a Type is a static array initializer?

I'll start by saying that I'm working off the assumption that static array initializers are turned into private nested classes by the compiler, usually with names like __StaticArrayInitTypeSize=12. As I understand it, having read this extremely informative article, these private classes are value types, and they aren't tagged with the CompilerGeneratedAttribute class.
I'm working on a project that needs to process certain types and ignore others.
I have to be able to process custom struct types, which, like the generated static array initializer classes, are value types. I must ignore the generated static array initializer classes. I also must ignore enumerations and delegates.
I'm pulling these classes with Linq, like so:
var typesToProcess = allTypes.Where(type => !type.IsEnum &&
                                            !type.IsArray &&
                                            !type.IsSubclassOf(typeof(Delegate)));
I'm fairly sure that the IsArray property isn't what I think it is. At any rate, the generated static array initializer class still shows up in the typesToProcess Enumerable.
Has anyone else dealt with this? How can I discern the difference between a custom struct and a generated static array initializer class? I could hack it by doing a string comparison of the type name against __StaticArrayInitTypeSize, but is there a cleaner solution?
Well, having just tried it myself with the C# 4 compiler, I got an internal class called <PrivateImplementationDetails>{D1E23401-19BC-4B4E-8CC5-2C6DDEE7B97C} containing a private nested struct called __StaticArrayInitTypeSize=12.
The class contained an internal static field of the struct type called $$method0x6000001-1. The field itself was decorated with CompilerGeneratedAttribute.
The problem is that all of this is implementation-specific. It could change in future releases, or it could be different from earlier releases too.
Any member name containing <, > or = is an "unspeakable" name which will have been generated by the compiler, so you can view that as a sort of implicit CompilerGenerated, if that's any use. (There are any number of other uses for such generated types though.)
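If that's good enough for your purposes, the filter from the question could be extended along those lines. A sketch, assuming allTypes is the same sequence of types as in the question:
// Skip any type whose name contains the compiler-generated "unspeakable"
// characters mentioned above, in addition to the existing checks.
var typesToProcess = allTypes.Where(type =>
    !type.IsEnum &&
    !type.IsArray &&
    !type.IsSubclassOf(typeof(Delegate)) &&
    type.Name.IndexOfAny(new[] { '<', '>', '=' }) < 0);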

Where do you add new methods?

When you add a new method to a class, where do you put it? At the end of the class... the top? Do you organize methods into specific groupings? Sorted alphabetically?
Just looking for general practices in keeping class methods organized.
Update: When grouped, where do you add the new method within the group? Just tack it on the end, or do you use some sort of sub-grouping or sorting?
Update 2: Mmmm... I guess the question isn't as clear as I thought. I'm not really looking for general class organization. I'm specifically interested in where to add a new method to an existing class. For example:
public class Attendant
{
    public void GetDrinks() {}
    public void WelcomeGuests() {}
    public void PickUpTrask() {}
    public void StrapIn() {}
}
Now we're going to add a new method, PrepareForCrash(). Where does it go? At the top of the list, the bottom, alphabetically, or near the StrapIn() method since it's related?
Near "StrapIn" because it's related. That way if you refactor later, all related code is nearby.
Most code editors allow you to browse method names alphabetically in another pane, so organizing your code functionally makes sense within the code itself. Grouping related methods together makes life easier when navigating through the class.
For goodness sake, not alphabetically!
I tend to group my functions in the order I expect them to be called during the life of the object, so that a top to bottom read of the header file tends to explain the operation of the class.
I think it's a personal choice.
However, I like to organise my classes as follows.
public class classname
{
    <member variables>
    <constructors>
    <destructor>
    <public methods>
    <protected methods>
    <private methods>
}
The reasons are as follows.
Member variables at the top
To see what member variables exist and if they are initialised.
Constructors
To see whether the member variables are set up/initialised, as well as what all the construction options for the class are.
Destructor
To see how the class is cleaned up and check it against the constructors and member variables.
Public methods
To see what are the available contracts callers of the object can use.
Protected methods
To see what inherited classes would be using.
Private methods
These are details about the internals of the class; if you need to know about the internals, you can scroll straight to the end. To know the interface of the class, though, it's all at the start.
UPDATE - Based on OP's update
Logically a good way would be to organise the methods by categories of what they do.
This way you get the readability of categorised methods as well as the alphabetical search from your IDE (provided your IDE supports it).
In a practical sense, however, I think placing the method at the end of its section is the best approach. If the code is shared by more than just yourself, it would be quite hard to continually police where each method goes, as it's subjective.
If you were to make this a standard, it would be quite hard to define the boundaries for where each method should go.
What I like about C# and VB.NET is the ability to use #region tags, so generally my classes look like this:
class MyClass
{
    #region Constructors
    public MyClass()
    {
    }

    public MyClass(int x)
    {
        _x = x;
    }
    #endregion

    #region Members
    private int _x;
    #endregion

    #region Methods
    public void DoSomething()
    {
    }
    #endregion

    #region Properties
    public int Y { get; private set; }
    #endregion
}
So basically you put similar things together so you can collapse everything to definitions and get to your stuff faster.
Generally, it depends on the existing grouping; if there's an existing grouping that the new method fits into, I'll put it there. For example, if there's a grouping of operators, I'll put the new method with the operators if it's an operator.
Of course, if there is no good grouping, adding a method may suggest a new grouping; I treat that as an opportunity for refactoring, and try to regroup the existing operators where reasonable.
I organize all methods into regions like public methods and private methods, or sometimes by feature, like saving methods, etc.
IMHO:
If you organize your methods alphabetically, place a new one according to its name. Otherwise, put it at the bottom of the related group; this helps you know which methods are newer. The bigger problem is how to organize the methods into groups in the first place, but that is more individual and depends on the specific class.

Where is the best place to locate enum types?

I have found that there is generally a single type or namespace that takes any particular enum as a parameter, and as a result I have always defined those enums there. Recently, though, I had a co-worker make a big deal about how that was a stupid thing to do, and that you should always have an enum namespace at the root of your project where you define every one of your enum types.
Where is the best place to locate enum types?
Why treat enums differently to other types? Keep them in the same namespace as they're likely to be used - and assuming they're going to be used by other classes, make them top-level types in their own files.
The only type of type which I do commonly clump together is delegates - I sometimes have a Delegates.cs file with a bunch of delegates in. Less so with .NET 3.5 and Func/Action, mind you.
Also, namespaces are for separation of things that belong together logically. Not all classes belong in the same namespace just because they are classes. Likewise, not all enums belong in the same namespace just because they are enums. Put them with the code they logically belong in.
I generally try to put all my different types (classes, interfaces and enums) in their own files, regardless of how small they are. It just makes it much easier to find and manage the file they're in, especially if you don't happen to be in Visual Studio and have the "go to definition" feature available. I've found that nearly every time I've put a "simple" type like that in another class, I end up either adding on to it later on, or reusing it in a way that it no longer makes sense for it to not have its own file.
As far as which namespace, it really depends on the design of whatever you're developing. In general, I try to mimic the .NET framework's convention.
I try to put everything associated with a class in the class. That includes not just enums, but also constants. I don't want to go searching elsewhere for the file or class containing the enums. In a large app with lots of classes and folders, it wouldn't always be obvious where to put the enum file so it would be easy to find.
If the enum is used in several closely-related classes, you could create a base class so that the common types like enums are shared there.
Of course, if an enum is really generic and widely used, you may want to create a separate class for them, along with other generic utilities.
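For example, a rough sketch of the shared-base-class idea (the DocumentBase/Report/Invoice names and the Format enum are made up for illustration):
// The enum is declared once in a common base class and reused by the
// closely-related classes that derive from it.
public abstract class DocumentBase
{
    public enum Format { Plain, Html, Pdf }

    public Format OutputFormat { get; set; }
}

public class Report : DocumentBase { }
public class Invoice : DocumentBase { }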
I think you put Enums and Constants in the class that consumes them or that uses them to control code decisions the most, and you use code completion to find them. That way you don't have to remember where they are; they are associated with the class. So, for example, if I have a ColoredBox class then I don't have to think about where they are at. They would be part of ColoredBox: ColoredBox.Colors.Red, ColoredBox.Colors.Blue, etc.
I think of the enum and constant as a property or description of that class.
If it is used by multiple classes and no one class reigns supreme, then it is appropriate to have an enum class or constants class.
This follows the rules of encapsulation, isolating properties from dissimilar classes. What if you decide to change the RGB of Red in Circle objects but you don't want to change the red for ColoredBox objects? Encapsulating their properties enables this.
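A rough sketch of that nested approach, using the ColoredBox and Circle names from this answer (the members are purely illustrative):
public class ColoredBox
{
    // The enum lives inside the class that consumes it, so callers always
    // reach it through the class name: ColoredBox.Colors.Red, and so on.
    public enum Colors { Red, Blue }

    public Colors Color { get; set; }
}

public class Circle
{
    // Circle keeps its own enum, so its notion of Red can change without
    // affecting ColoredBox.Colors.Red.
    public enum Colors { Red, Green }

    public Colors Color { get; set; }
}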
I use nested namespaces for this. I like them better than putting the enum within a class because outside of the class you have to use the full MyClass::MyEnum usage even if MyEnum is not going to clash with anything else in scope.
By using a nested namespace you can use the "using" syntax. Also I will put enums that relate to a given subsystem in their own file so you don't get dependency problems of having to include the world to use them.
So in the enum header file you get:
// MyEnumHeader.h
// Consolidated enum header file for this dll, lib, subsystem, whatever.
namespace MyApp
{
    namespace MyEnums
    {
        enum SomeEnum { EnumVal0, EnumVal1, EnumVal2 };
    }
}
And then in the class header file you get:
// MyInterfaceHeader.h
// Class interfaces for the subsystem with all the expected dependencies.
#include "MyEnumHeader.h"

namespace MyApp
{
    class MyInterface
    {
    public:
        virtual void DoSomethingWithEnumParam(MyEnums::SomeEnum enumParam) = 0;
    };
}
Or use as many enum header files as makes sense. I like to keep them separate from the class headers so the enums can be params elsewhere in the system without needing the class headers. Then if you want to use them elsewhere you don't have to have the encapsulating class defs as you would if the enums were declared within the classes.
And as mentioned before, in the outer code you can use the following:
using namespace MyApp::MyEnums;
What environment?
In .NET I usually create an empty class file, rename it to MyEnum or whatever to indicate it holds my enum and just declare it in there.
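A minimal sketch of that layout (the namespace and the enum members are just placeholders):
// MyEnum.cs -- the file contains nothing but the enum declaration.
namespace MyProject
{
    public enum MyEnum
    {
        None,
        SomeValue,
        AnotherValue
    }
}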
If my enumeration has any chance of ever being used outside the class I intend to use it, I create a separate source file for the enum. Otherwise I will place it inside the class I intend to use it.
Usually I find that the enum is centered around a single class -- as a MyClassOptions type of thing.
In that case, I place the enum in the same file as MyClass, inside the namespace but outside the class.
namespace mynamespace
{
    public partial class MyClass
    {
    }

    enum MyClassOptions
    {
    }
}
I tend to define them where their use is evident. For example, if I have a typedef for a struct that makes use of one:
typedef enum {
    HI,
    GOODBYE
} msg_type;

typedef struct {
    msg_type type;
    union {
        int hivar;
        float goodbyevar;
    };
} msg;
