How to divide a class project into a part delivered as DLLs and another as source code (tricky question) - visual-studio-2010

Currently I am trying to do something a bit tricky, and I don't really know if it's possible.
I have a class project and I want to divide it into two sections, "Core" and "Client specific developments". My client wants the source code of this project, but I don't want to deliver the source of the "Core" section; I only want to give him the source of "Client specific developments".
To demonstrate a practical case, let's imagine that I have a partial class named "User" that has two methods, "CreateUser" and "CreateUserForClientSite". The "CreateUser" method will be located in the "Core" section, and "CreateUserForClientSite" will extend "CreateUser" with specific requirements for my client's site (remember these methods may NOT be static, so C# 3.0 extension methods are pointless in this case). If I have the "Core" section in a DLL, can I extend a partial class present in that DLL?
Now let's imagine another scenario. What if "Core" has methods that depend on "Client specific developments" classes, and the other way around? Since I can't have circular references between projects, how can I manage that (if it's possible at all)?
Thanks

Regarding the partial classes - you must have all the parts of the partial class available at compile time. You just split the definition of a class across several files, but it is still a single type that belongs to one assembly.
Thus you cannot compile a DLL with one part and then reference that assembly in another project to add more methods to the partial class.
I suggest replacing partial classes with inheritance in your case, if possible.
More on partial classes in MSDN (look at the "Restrictions" section).
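For illustration, here is a minimal sketch of the inheritance approach, reusing the User/CreateUser/CreateUserForClientSite names from the question (the signatures and project names are assumptions):

// Core project - compiled and shipped as Core.dll, source not delivered
namespace Core
{
    public class User
    {
        public virtual void CreateUser(string userName)
        {
            // core user-creation logic lives here
        }
    }
}

// Client-specific project - source delivered to the client, references Core.dll
namespace ClientSpecific
{
    public class ClientSiteUser : Core.User
    {
        public void CreateUserForClientSite(string userName, string siteId)
        {
            CreateUser(userName);  // reuse the core behaviour
            // client-site specific requirements go here
        }
    }
}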
Regarding the circular references - you'll have to redesign your object model if splitting into two assemblies leads to this problem. Usually, this indicates flaws in the model that should be fixed anyway.
You can define interfaces in the core assembly to break the circular reference, and implement those interfaces in the client-specific assembly. Take a look at this article for an example - How to get rid of circular references in C#
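A minimal sketch of that idea, with made-up names (IClientNotifier, UserService) rather than anything from the actual project: the core assembly depends only on an interface it defines itself, and the client assembly supplies the implementation.

// Core.dll - depends only on an abstraction it defines itself
namespace Core
{
    public interface IClientNotifier
    {
        void NotifyUserCreated(string userName);
    }

    public class UserService
    {
        private readonly IClientNotifier _notifier;

        public UserService(IClientNotifier notifier)
        {
            _notifier = notifier;
        }

        public void CreateUser(string userName)
        {
            // core logic...
            _notifier.NotifyUserCreated(userName);  // call "up" without referencing the client assembly
        }
    }
}

// ClientSpecific.dll - references Core.dll and implements the interface
namespace ClientSpecific
{
    public class ClientSiteNotifier : Core.IClientNotifier
    {
        public void NotifyUserCreated(string userName)
        {
            // client-specific handling
        }
    }
}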

Related

Why do most Spring applications root level folders always seem to follow the com/company naming convention? [duplicate]

Why do we use reverse domain names like com.something or org.something for Java packages?
I understand this brings in some sort of uniqueness, but why do we need this uniqueness?
About why we do it reversed: Imagine you have two important packages, an accounting package and a graphics package. If you specified these in 'straight' order:
accounting.mycompany.org
graphics.mycompany.org
Then it implies there is a major accounting package, a subsection of which is for mycompany, and a subsection of that package is called the org package which you actually use. However, you want this:
org.mycompany.accounting
org.mycompany.graphics
This makes more sense. Out of all packages from organizations (org), you look at mycompany in particular, and it has two sub-packages, the accounting and the graphics ones.
Globally unique package names avoid naming collisions between libraries from different sources. Rather than creating a new central database of global names, the domain name registry is used. From the JLS:
The suggested convention for generating unique package names is merely a way to piggyback a package naming convention on top of an existing, widely known unique name registry instead of having to create a separate registry for package names.
As you say, using reverse domain names as the base package name ensures uniqueness. Suppose two companies with the domains example.com and example.org both define a class Employee in their frameworks. If you use both frameworks, you cannot pinpoint which Employee you mean in your code; but if they are defined in the packages com.example and org.example respectively, you can tell the compiler/JVM exactly which class you are referring to. If unique packages are not defined, you will get compilation or runtime errors: for example, if you are using the com Employee class but the org Employee class is loaded first from the classpath, you will get a runtime error, since the two Employee classes may not have the same structure.
The uniqueness is needed for class loading.
It helps by avoiding naming collisions: if two classes have the same package name and class name, a collision will occur when trying to load them.
This generally happens when there are multiple libraries (JARs) that contain classes with the same names.
Also see this.
You need the uniqueness if you might need to integrate your code with third party software, or provide it to someone else for integration. If you don't follow the rules, you increase the risk that at some point you will have a class naming collision, and that you will need to rename lots of your classes to address it. Or worse still, that your customers will have to do the code renaming.
This also applies when code is produced as part of different projects within an organization.
As you said, it brings uniqueness, something that is needed especially when working with third-party code. For example, consider that you are using a library I've made. I've used the package "foo" and have a class named Bar there. Now if you are also using the package name "foo" AND you have a class named Bar, your implementation would clash with my Bar implementation, making mine inaccessible. On the other hand, if my package were "com.mydomain.foo" and I had my Bar class there, then you can freely use the name Bar in one of your classes and both classes can still be uniquely identified and used separately.
Why use the reverse domain name as the package name? I guess that is just a convention to make sure that everybody uses a unique namespace, as you shouldn't use someone else's domain in your package name.

Interface attributes not automatically replicated to class in UML class diagram in Visual Studio

In Visual Studio, I've created a UML class diagram with a class that realises an interface containing an attribute and an operation, like this:
The operation is automatically replicated to the class, but not the attribute. The MSDN guidelines indicate this behaviour:
When you create a realization connector, the operations of the interface are automatically replicated in the realizing class. If you add new operations to an interface, they are replicated in its realizing classes.
However, this seems to contradict their statement just beforehand, namely:
Realization means that a class implements the attributes and operations specified by the interface.
I'm sure there must be a good technical reason for this (some OO concept like polymorphism or abstraction), but I can't think why it distinguishes between attributes and operations in this way.
Can anyone give me some insight into this, and perhaps what I should do to get round it (do I add the attributes to the class manually in UML?), as it's resulting in generated code that doesn't compile?
While I don't know for sure, I'd imagine it is because in C# interfaces cannot contain fields, only methods. Having attributes on an Interface therefore doesn't make sense.
Interfaces can contain properties, but these just get compiled to PropType get_PropName() and void set_PropName(PropType value). (Fun fact, trying to declare those methods yourself will generate a compiler error.)
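A small sketch of what that looks like (IShape and Square are made-up names; the accessor methods in the comments are what the compiler emits for the property):

public interface IShape
{
    // an "attribute" modelled on the interface becomes a property...
    double Area { get; set; }
    // ...which the compiler turns into a pair of accessor methods:
    //   double get_Area();
    //   void set_Area(double value);
    // Declaring get_Area/set_Area yourself alongside the property is a compile error.
}

public class Square : IShape
{
    public double Side { get; set; }

    // realizing the interface means implementing the property
    public double Area
    {
        get { return Side * Side; }
        set { Side = System.Math.Sqrt(value); }
    }
}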
Unfortunately, there is not a nice "out of the box" way of defining properties in UML class diagrams, as they are a language-specific feature. I think you have to define a custom stereotype and the templates to generate the code accordingly - faff.

Properly Refactoring to avoid a Circular Dependency

I am having a problem with a circular dependency. Similar questions have been asked and I have read a lot of answers. Most deal with a workaround, but I would like to refactor so that what I have is correct, and I would like some input on where I have gone wrong. I can change what I am doing, but not the entire project architecture.
I am using VB.Net in Visual Studio 2012.
I have two class libraries:
DataLayer for accessing the database.
DataObject, which contains classes that represent my business objects.
My Presentation Layer calls methods in the DataLayer which returns objects from the DataObject class library.
(I have simplified somewhat – I actually have a controller layer but it needs references to the two class libraries above. This is an existing architecture that is from before my time.)
In the DataObject class library I have an abstract class that represents a file. It has properties such as filename, userID, etc. It also has a method, GetFile(), that I code in the derived classes because there are different ways of getting the file. A DataLayer method returns a collection of these file objects, but I don't want to get the actual file until it's needed.
So far, I have a derived class that calls a web service (using properties from the base class) and a derived class that accesses the file system. Both return a byte array representing the file. The calling class does not need to know how the file is retrieved.
Now I have a new requirement to build the file on the fly using data from the database. I can get all the data I need using the properties in the base class.
My issue is that my GetFile() method will need to access my DataLayer class library to pull data from the database, which causes a circular dependency. The DataLayer class library has a reference to DataObject since that is what it returns. But now I need to call the DataLayer from a class in DataObject.
I could call the DataLayer from presentation and pass the result to my DataObject's GetFile() method, but then my presentation layer needs to do something special for this derived class. My goal is that the derived class handles GetFile without presentation knowing about the implementation.
I could create a new class library for this DataLayer code but I don't like a special case.
I could access the DB directly in the DataObject class but that circumvents the layered architecture.
I can’t change our architecture, but I can change my approach.
Any opinions?
I think I have the answer.
In my concrete class, when I am loading the data initially (in the DataLayer), I will get all the data I need to create the file. I'll store it in a new property in my concrete class which my GetFile() method will use to build the file.
This has a little more overhead - I make DB calls and put all this data in memory when it may not be needed. I'll give it a try and see how performance is.
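A rough sketch of that idea in C# (the project is VB.Net, and the names FileItem, GeneratedFileItem and SourceData are made up for illustration):

// DataObject class library
public abstract class FileItem
{
    public string FileName { get; set; }
    public int UserId { get; set; }

    public abstract byte[] GetFile();
}

public class GeneratedFileItem : FileItem
{
    // Filled in by the DataLayer when the object is first loaded,
    // so GetFile() never needs to call back into the DataLayer.
    public byte[] SourceData { get; set; }

    public override byte[] GetFile()
    {
        // build the file on the fly from the preloaded data
        return BuildFile(SourceData);
    }

    private static byte[] BuildFile(byte[] sourceData)
    {
        // formatting / transformation logic would go here
        return sourceData;
    }
}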
Any critiques of this approach?

How can I modularize a Rails model?

I'm implementing several classes which do not hold data themselves, just logic. These classes implement an access control policy which depends on several parameters taken from data in other models.
I initially tried to find an answer to "Where to store such classes?" here, and the answer was the app/models directory. That's OK, but I'd like to clearly separate these classes from the ActiveRecord-inherited classes in the hierarchy, both as files and as classes.
So, I created classes inside a Logic module, like Logic::EvaluationLogic or Logic::PhaseLogic. I also wanted to have constants which are passed between these logics. I prefer to place these constants in the Logic module too. Thus, I implemented it like this:
# in logic/phase_logic.rb
module Logic
  PHASE_INITIAL = 0
  PHASE_MIDDLE = 1000

  class PhaseLogic
    def self.some_phase_control_code(phase)
    end
  end
end

# in logic/evaluation_logic.rb
module Logic
  class EvaluationLogic
    def self.some_other_code
      Logic::PhaseLogic.some_phase_control_code(Logic::PHASE_INITIAL)
    end
  end
end
Now, it works just fine with RSpec (it passes the tests I wrote without issues), but not with the development server, since it can't find the Logic::PHASE_INITIAL constant.
I suspect it's related to a mismatch between Rails' autoloading scheme and what I wanted to do. I tried to tweak Rails, but had no luck, and ended up eliminating the module Logic wrapper.
Now the question I want to ask: how can I organize these classes with Rails?
I'm using Rails 3.2.1 at the moment.
Posted a follow-up question "How I can organize namespace of classes in app/modules with rails?"
I am not sure whether I really understand your classes, but couldn't you create a Logic module or (I would rather do this:) PhaseLogic and EvaluationLogic objects in the /lib directory?
It is not a given that a "model" is always a descendant of ActiveRecord. If an object belongs to the "business logic", then it is a model. You can have models which do not touch the database in any way. So, if your classes are "business objects", place them in 'app/models' and use them like any other model.
Another question is whether you should use inheritance or modules - but I would rather think about including a module in PhaseLogic than about defining PhaseLogic inside a module. Of course, all this depends heavily on the intended role of your objects.
Because in Ruby the class of an object is not important, you do not need to use inheritance. If you want to 'plug' the logic objects into other objects, just make sure that all '*Logic' classes have the required methods. I know that all I said is very vague, but I think I cannot give you more concrete suggestions without knowing more about the role of these objects.
Ah, and one more thing!
If you find yourself fighting with Rails class autoloading, just use the old require "lib/logic.rb" in all the classes where you are using Logic::PHASE_INITIAL constants.
In this case I suppose that your problem was caused by the order of loading: logic/evaluation_logic.rb was loaded before logic/phase_logic.rb. The problem may disappear if you create a logic.rb somewhere where class autoloading can find it, and define these constants in that file.
Don't name your classes or modules Logic; use specific names. Start by extracting logic into separate classes and then try to break them into smaller ones. Use namespaces to distinguish them from each other in the lib folder; after these steps you will be able to extract some logic parts into separate gems and reduce the codebase and complexity of the application. Also take a look at the presenter pattern.

Visual Studio code generation - how to deal with developers editing class files

So thanks to the Visualization and Modeling Feature Pack, I can build a UML model diagram and generate a bunch of classes.
But what now? Presumably, my developers will add code to those classes. Useful code, valuable code, and as the templates themselves indicate:
// Changes to this file will be lost if the code is regenerated.
So what is the best solution here? Can I make the modeling project reflect changes to the actual classes? Should I generate partial classes? Modify the default templates to read class files and not auto-generate anything that has been modified? Should I tell developers not to edit model files under pain of... well, pain?
Thanks for the tips.
As far as I know, this is really the key reason for partial classes in the first place. The custom code goes in one file, the auto-generated in another.
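A minimal sketch of that split (the Customer class and the file names are made up for illustration):

// Customer.generated.cs - regenerated from the model; changes here will be lost
public partial class Customer
{
    public string Name { get; set; }
}

// Customer.cs - hand-written; survives regeneration
public partial class Customer
{
    public string DisplayName()
    {
        return "Customer: " + Name;
    }
}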
You could also create classes derived from the generated ones and put any changes in there. I also agree with the above poster that partial classes could be the way to go.
Although the tools generate basic skeleton classes out of the box, that's really just a starting point. You can easily adapt the generator templates to create your own stuff. Different people want to generate different code from the classes - some even generate XML or SQL. And yes, in C#, partial classes are good to generate, so as to keep the hand-written code separate from the generated bits.
It's good to put lots of extension points in the generated code, where you fill in the details by hand code.
Another neat idea is "double derived": from each UML class, generate a base class and a derived class. The derived one has only constructors. The base class has any methods you generate. So your hand code can easily override generated methods where you need that.
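A rough sketch of that pattern, with made-up names (OrderBase/Order); the generated methods are marked virtual so hand code can override them:

// generated base class: generated methods are virtual extension points
public abstract class OrderBase
{
    public virtual decimal CalculateTotal()
    {
        // default generated implementation
        return 0m;
    }
}

// generated derived class: constructors only, regenerated freely
public partial class Order : OrderBase
{
    public Order() { }
}

// hand-written partial of the derived class: override where needed
public partial class Order
{
    public override decimal CalculateTotal()
    {
        // custom calculation replacing the generated one
        return base.CalculateTotal() + 42m;
    }
}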
There are several options in the tool, and recommending what is best is hard without knowing your scenario. Partial classes are great for some, but not all, applications. If you want your UML class to generate a partial class, you can set its C# stereotype's property to "Partial" and it will do so, and custom code can then be added in a partial class that won't be overwritten. If you want to prevent code from being overwritten, you can do this by setting the overwrite property to False on the template binding that corresponds to the package you are working on. This lets you put your extension code in a package that is not overwritten, while your model-mastered code is overwritten with the latest model changes. Finally, if you want your code to be the master for your model so it always reflects the latest code, you can reverse engineer your code by using the Architecture Explorer to select your classes and then dragging them into a UML diagram. So for a given gesture, either the model is the master or the code is the master. In this version, we did not implement automated merge capabilities between the two.
