Prism modularity practices

I'm studying Prism and need to create a small demo app. I have some design questions. The differences between the approaches might be small, but I need to apply the practices to a large-scale project later, so I'm trying to think ahead.
Assuming the classical DB-related scenario - I need to get a list of employees, and a double click on a list item fetches extra information for that employee: should the data access project be a module, or is a project accessed via the repository pattern a better solution? What about a large-scale project, where the DB has more than one table and provides, say, information about employees, sales, companies, etc.?
I'm currently considering using the DataAccess project as a stand-alone module, and I have defined its interface in the Infrastructure project, as well as its return type (EmployeeInformation). This means that both my DataAccess module and my application have to reference the Infrastructure project. Is this a good way to go?
I'm accessing said DataAccess module using the ServiceLocator (MEF) from my application. Should the ServiceLocator be accessed by all parts of the application, or is it meant to be used in the initialization section only?
Thanks.

A module is needed and makes sense when it contains one part of the application that can live on its own. This can be a part of the application that only certain people need or are allowed to use, e.g. a user management module that only administrators are allowed to access. But your data access layer is not that kind of isolated functionality that usually goes into a module. It is better placed in a common assembly that the real modules can use. The problem here is that all modules then depend on this DAL assembly, so keep the task of updating your DAL in mind when designing your application (backward compatibility).
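To make the distinction concrete, here is a rough sketch (assuming Prism 4 with the MEF extensions; UserManagementModule, UserManagementView, IEmployeeDataService and EmployeeInformation are just placeholder names built from the examples above):

    using System.Collections.Generic;
    using Microsoft.Practices.Prism.MefExtensions.Modularity;
    using Microsoft.Practices.Prism.Modularity;

    // A real module: an isolated piece of functionality Prism can discover and load on its own.
    [ModuleExport(typeof(UserManagementModule))]
    public class UserManagementModule : IModule
    {
        public void Initialize()
        {
            // Register this module's views, view models and services when the module is loaded.
        }
    }

    // The DAL, by contrast, is just a service contract in a common assembly that every module references.
    public interface IEmployeeDataService
    {
        IEnumerable<EmployeeInformation> GetEmployees();
        EmployeeInformation GetEmployeeDetails(int employeeId);
    }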
Usually there is no problem with having broadly used types reside in a common assembly. But that assembly is not the infrastructure assembly. Infrastructure, as the word implies, provides services that make the modules work together. Your common types should go into something like YourNamespace.Types or YourNamespace.Client.Base or similar.
This is the subject of many arguments and still unclear (at least from my point of view). Dependency injection purists say the ServiceLocator should only be used during initialization. Pragmatists use the ServiceLocator all over their application.
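The two camps, side by side, in a minimal sketch (the view model classes are invented for illustration; IEmployeeDataService is the placeholder contract from above):

    using System.Collections.Generic;
    using System.ComponentModel.Composition;
    using Microsoft.Practices.ServiceLocation;

    // Pragmatist style: pull the service from the ServiceLocator wherever it is needed.
    public class EmployeeListViewModel
    {
        public IEnumerable<EmployeeInformation> Employees { get; private set; }

        public void LoadEmployees()
        {
            var dataService = ServiceLocator.Current.GetInstance<IEmployeeDataService>();
            Employees = dataService.GetEmployees();
        }
    }

    // Purist style: the container (MEF here) injects the dependency once, at construction time.
    [Export]
    public class EmployeeDetailsViewModel
    {
        private readonly IEmployeeDataService dataService;

        [ImportingConstructor]
        public EmployeeDetailsViewModel(IEmployeeDataService dataService)
        {
            this.dataService = dataService;
        }

        public EmployeeInformation Load(int employeeId)
        {
            return dataService.GetEmployeeDetails(employeeId);
        }
    }

With constructor injection the dependency is visible in the type's signature; with the ServiceLocator it is hidden inside the method body, which is the heart of the argument between the two camps.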

Related

Symfony2 multiple kernel?

I have an application which has a core website, an API and an admin area. I wanted to know: is it a bad idea to have everything in one app, or should I create separate Symfony2 projects, or should I split them into different kernels?
I'm not sure whether adding lots of bundles to the same kernel will affect performance a lot, or only a little bit, which would not matter.
These are the options:
keep everything on the same kernel; it won't make much difference
have multiple kernels for different parts of the application (api, admin and core website)
create separate Symfony2 projects for the admin area and the api
or your wise words :)
You can define more "environments".
For example, in AppKernel.php:
class AppKernel extends Kernel
{
    public function registerBundles()
    {
        $bundles = array(
            new Symfony\Bundle\FrameworkBundle\FrameworkBundle(),
            new Symfony\Bundle\SecurityBundle\SecurityBundle(),
            new Symfony\Bundle\TwigBundle\TwigBundle(),
            new Symfony\Bundle\MonologBundle\MonologBundle(),
            new Symfony\Bundle\SwiftmailerBundle\SwiftmailerBundle(),
            new Doctrine\Bundle\DoctrineBundle\DoctrineBundle(),
            new Sensio\Bundle\FrameworkExtraBundle\SensioFrameworkExtraBundle(),
            //new AppBundle\AppBundle()
        );

        // The environment name is the one the front controller passes to the kernel,
        // e.g. new AppKernel('api', false) in a dedicated web/api.php.
        if (in_array($this->getEnvironment(), array('api'), true)) {
            $bundles[] = new ApiBundle\ApiBundle();
            //-- Other bundles
        }
        //-- Other environments

        return $bundles;
    }
}
It mostly depends on the quality of the bundles, and on how interconnected they are.
I would reject option 3 right away (create separate Symfony2 projects for the admin area and api), as you are probably not building two separate applications.
Have multiple kernels for different parts of the application (api, admin and core website)
A common problem is created by listeners and services in the container, especially when a listener should only work in one of the app contexts (api/frontend/backend). Even if you remember to check the context at the very beginning of the listener method (and only do your magic in the wanted context), the listener can still depend on injected services which need to be constructed and injected anyway. A good example here is FOS/RestBundle: even if you configure zones, on the frontend (when the view_listener is activated for the api) the view_handler is still initialized and injected into the listener - https://github.com/FriendsOfSymfony/FOSRestBundle/blob/master/Resources/config/view_response_listener.xml#L11 I'm not 100% sure here, but disabling translations and twig (etc.) for the API (most APIs don't need them) should also speed things up.
Creating a separate kernel for the API context would solve that issue (in our project we use one kernel and we had to disable that listener - blackfire.io profiles were telling us that it saves ~15ms on every frontend request).
Creating a new kernel for the API would make sure that no API-only services/listeners interfere with frontend/backend rendering (and it works both ways). But it creates the additional work of maintaining shared components used by bundles in different kernels - although in a world with composer that is no longer a huge task.
But this only matters for people who measure every millisecond of response time, and it depends on the quality of your own and third-party bundles. If all of that is perfectly fine, you don't need to mess with kernels.
It's a personal choice, but I have a similar project and I have a publicBundle, adminBundle and apiBundle all within the same project.
The extra performance hit is negligible, but organisation is key ... that is why we're using an MVC package (Symfony) in the first place, is it not? :)
NB: Your terminology is a little confusing; I think by Kernel you mean Bundle.
Having several kernels would not necessarily help.
Split your application into bundles and keep all the advantages of sharing your entities (and so on) across the different parts of your application.
You can define separate routing/controllers/configuration that are loaded depending on the host/URL.
Note:
If you are going to separate your app into two big bundles (i.e. Admin & Api),
and the two share the same entities, you will surely have to make a choice.
That choice may mean that one of your bundles contains too much (and unrelated) logic and will need to be refactored into several bundles later.
Create a bundle per section of your application that corresponds to a set of related resources, and differentiate the two parts through different contexts in configuration.
Also, name your classes/namespaces sensibly.

TFS source branching strategy for product based company

We have an ERP solution which has a lot of modules. As a product company we have several clients.
The source control is maintained in TFS.
Now we do customization for each client; what is the best way to manage the source code?
Truly I believe this depends on the language the application is built with:
MVC
In general, hopefully the business logic of all the applications remains the same; that way the only differences would be in the view of each new instance. To manage these different views I would create sub-folders for each product. But once again this is very generic, as we don't know much about the current infrastructure.
Edit
Generally speaking, the branching strategy described here will work for most cases: https://stackoverflow.com/a/20878555/5268586. For your application, if you are changing core logic located inside your application, consider extending a base class for each client and overriding its functionality to handle the new client's calculations.
For example:
You have a class/file called CalcEngine.
Your client wants to change some calculations, so you make a new class Client1CalcEngine that inherits from CalcEngine and overrides the one method/function. These customer-focused classes can then be bundled in your normal branches, as the implementation logic will decipher which one to use.
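A bare-bones sketch of that idea (the method name, discount rule and selection logic are invented purely for illustration):

    // Base calculation engine shared by all clients (lives in the main/core code line).
    public class CalcEngine
    {
        public virtual decimal CalculateDiscount(decimal orderTotal)
        {
            return orderTotal * 0.05m;   // default rule for the standard product
        }
    }

    // Client-specific engine: only the behaviour that differs is overridden.
    public class Client1CalcEngine : CalcEngine
    {
        public override decimal CalculateDiscount(decimal orderTotal)
        {
            return orderTotal >= 10000m
                ? orderTotal * 0.10m
                : base.CalculateDiscount(orderTotal);
        }
    }

    // The implementation logic (e.g. per-client configuration) decides which engine to use.
    public static class CalcEngineFactory
    {
        public static CalcEngine ForClient(string clientId)
        {
            return clientId == "client1" ? new Client1CalcEngine() : new CalcEngine();
        }
    }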

Best pattern to develop an ERP with Maven multi-module

I have an ERP project with stock, purchases and sales modules. These are web applications using Hibernate, Maven, Spring MVC and Spring Security. What is the best way to organize the structure of this project?
1 - Each application being its own web module (.WAR);
2 - Just one web module.
In both approaches there are other modules: core (with DAOs and services) and commons (with shared utility classes).
I was using the first approach, because it is easier to assign each project to its respective programmers. But I had problems with the Spring Security configuration.
Any other options?
Sorry about my English.
Your question is very open and answers may be highly subjective. You're stating a very limited number of requirements, hence you can go either direction.
There is a lot to be said for compartmentalizing an application; there aren't many larger organizations around that haven't acknowledged the need to do so. Often that way is already paved for people, i.e. guidelines exist that specify the component boundaries. It helps with understanding (sub-)domains, running parallel projects in flight, and, very importantly, reuse.
That all doesn't mean much if you have no business or technical requirements prompting you to do so. Instead, as a developer, you may rather focus on being prepared for change (once new requirements arise). So, have (unit) tests in place, et cetera. Change should be easy.

Organizing application in layers

I'm developing a part of an application, named A. The application I want to plug my DLL into, called application B, is in VB6, and my code is in VB.NET. (Application B will in time be converted to VB.NET.) My main question is: what is the best way for me to organize my code (application A)?
I want to split application A into layers (Service, Business, Data access), so it will be easy to integrate application A into B when B is converted to VB.NET. I also want to learn about topics like layered architecture, patterns, dependency inversion, Entity Framework and so on. Although my application (A) is small, I want to organize my code in the best way.
The application I'm working with (A) uses web services for authenticating users and for sending a schema to an organization. The user selects a menu item in application B, and then some functions in my application A are called.
In application A I have a schema class auto-generated from an XSD schema. I fill this schema object with data and serialize the object to a memory stream (is using a memory stream a good solution? I don't have to save the data yet), wrap the XML inside a CDATA block, return the CDATA block as a string and assign it to a string property of a web service. Simplified, it looks roughly like the sketch below.
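(Sketch only; SchemaRoot stands in for the auto-generated schema class and the method name is made up.)

    using System.IO;
    using System.Text;
    using System.Xml.Serialization;

    public string GenerateCdataBlock(SchemaRoot schema)
    {
        var serializer = new XmlSerializer(typeof(SchemaRoot));

        // Serialize the filled schema object to an in-memory stream; nothing is written to disk yet.
        using (var stream = new MemoryStream())
        {
            serializer.Serialize(stream, schema);
            stream.Position = 0;

            // MemoryStreamToString: read the XML back out of the stream.
            string xml = new StreamReader(stream, Encoding.UTF8).ReadToEnd();

            // GenerateCDATABlock: wrap the XML, ready to assign to the web service's string property.
            return "<![CDATA[" + xml + "]]>";
        }
    }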
I am also using Entity Framework for database communication (to learn how this is done for the future work with application B). I have two entities in my .edmx: User and Payer.
I also want to use the repository pattern (is this a good choice?) to make a façade between the DAL and the BLL.
My application has functions for GeneratingSchema (filling the schema object with data), GetSchemaContent, GetSchemaInformation, GenerateCDATABlock, WriteToTextFile, MemoryStreamToString, EncryptData and some functions that use web services, like SendShema, AuthenticateUser, GetAvalibelServises and so on.
I'm not sure where I should put it all.
I think I have to have some interfaces like IRepository, ISchema (a contract for the auto-generated schema class, how can I do this?), ICryptoManager, IFileManager and so on, and classes that implement the interfaces.
My DAL will be Entity Framework, and I want a repository façade in my BLL (IRepository, UserRepository, PayerRepository) plus manager classes (like the ones I mentioned above) holding functions like WriteToFile, EncryptData … roughly what is sketched below.
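(Rough sketch of the façade I have in mind; MyEntities is a placeholder for the generated EF context, the User entity is assumed to have an Id key, and Payer would get the same treatment.)

    using System.Collections.Generic;
    using System.Linq;

    // Contract, living next to the other interfaces (IRepository, ICryptoManager, ...).
    public interface IUserRepository
    {
        User GetById(int id);
        IEnumerable<User> GetAll();
        void Add(User user);
        void Save();
    }

    // Implementation wraps the Entity Framework context, so the BLL never touches the .edmx directly.
    public class UserRepository : IUserRepository
    {
        private readonly MyEntities context = new MyEntities();

        public User GetById(int id)
        {
            return context.Users.SingleOrDefault(u => u.Id == id);
        }

        public IEnumerable<User> GetAll()
        {
            return context.Users.ToList();
        }

        public void Add(User user)
        {
            context.Users.AddObject(user);   // EF 4 ObjectContext; DbSet.Add with a newer DbContext
        }

        public void Save()
        {
            context.SaveChanges();
        }
    }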
Is this a good solution (do I need a service layer, since all my GUI is in application B), and how can I organize my layers, interfaces, classes and functions in Visual Studio?
Thanks in advance.
This is one heck of a question, thought I might try to chip away at a few parts for you so there's less for the next guy to answer...
For application B (VB6) to call application/assemblies A, I'm going to assume you're exposing the relevant parts of App A as COM components, using ComVisibleAttribute and similar, much like described in this article. I only know of one other way (WCF over COM) but I've never tried it myself.
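In case it helps, the COM-visible surface usually ends up looking something like this (a sketch under that assumption; the GUIDs, ProgId and member names are invented, and the assembly still needs to be registered with regasm or "Register for COM interop"):

    using System.Runtime.InteropServices;

    // The interface VB6 will program against.
    [ComVisible(true)]
    [Guid("6DCA5A66-1B6E-4A1F-9E0B-0C2F9D1B2A10")]
    public interface ISchemaService
    {
        string GenerateSchema(string userName);
    }

    // Concrete class exposed to COM; ClassInterfaceType.None forces clients through the interface above.
    [ComVisible(true)]
    [Guid("A3C7E7D0-2F4B-4E4C-8D62-7C0B1E5F9B21")]
    [ClassInterface(ClassInterfaceType.None)]
    [ProgId("ApplicationA.SchemaService")]
    public class SchemaService : ISchemaService
    {
        public string GenerateSchema(string userName)
        {
            // ... fill the schema object, serialize it and return the CDATA block ...
            return "<![CDATA[...]]>";
        }
    }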
Splitting your solution(s) into various tiers and layers is a very subjective/debatable topic, and will always come down to a combination of personal preference, business requirements, time available, etc. However, regardless of the depth of your tiers and layers, it is good to understand the how and the why.
To get you started, here's a couple articles:
Wikipedia's general overview on "Multitier Architectures"
MSDN's very own "Building an N-Tier Application in .Net"
Inversion of Control is also a very good pattern to get into right now; with ever-increasing (and brilliant!) resources becoming available on the .NET platform, it's definitely worth investing some time to learn.
Although I haven't explored the full extent of IoC, I do love dependency injection (a type of IoC, if I understand correctly, though people seem to muddle the IoC/DI terms quite a lot). My personal preference for DI right now is the open source Ninject project, which has plenty of resources online and a reasonable wiki section talking you through the various aspects.
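A tiny taster of what that looks like with Ninject (interface names come from your question; the concrete classes and SchemaService are assumptions made up for the example):

    using Ninject;

    public static class CompositionRoot
    {
        public static SchemaService CreateSchemaService()
        {
            // Composition root: wire interfaces to implementations once, at startup.
            var kernel = new StandardKernel();
            kernel.Bind<IUserRepository>().To<UserRepository>();
            kernel.Bind<ICryptoManager>().To<CryptoManager>();
            kernel.Bind<IFileManager>().To<FileManager>();

            // Ninject builds the object graph: SchemaService's constructor dependencies
            // are resolved automatically from the bindings above.
            return kernel.Get<SchemaService>();
        }
    }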
There are many more takes on DI and IoC, so I don't want to even attempt to provide you a comprehensive list for fear of being flamed for missing out somebody's favourite. Just have a search, see which you like the look of and have a play with it. Make sure to try a couple if you have the time.
Again, the Repository pattern, often complemented well by the Unit of Work pattern, is a great topic to mull over for hours. I've seen a lot of good examples out on the inter-webs, and as many bad ones. My only advice here is to try it for yourself: see what works for you, develop a version of the patterns that suits you best, and try to keep things consistent for maintainability.
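For flavour, the Unit of Work half of that pairing is often reduced to something as small as this (one possible shape, not the definitive one; IPayerRepository and unitOfWorkFactory are placeholders):

    using System;

    // One business transaction: repositories enlist their changes, Commit persists them together.
    public interface IUnitOfWork : IDisposable
    {
        IUserRepository Users { get; }
        IPayerRepository Payers { get; }
        void Commit();
    }

    // Typical usage from the business layer:
    // using (var uow = unitOfWorkFactory.Create())
    // {
    //     var payer = uow.Payers.GetById(payerId);
    //     payer.Active = true;
    //     uow.Commit();
    // }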
For organising all these tiers and layers in VS, I recommend keeping each independent tier/layer in its own Solution Folder (right-click the Solution, Add New Solution Folder), or in some cases (larger projects) its own solution, preferably with an automated build service to update dependent projects with up-to-date assemblies as required. Again, a broad subject and totally down to personal preference. Just keep an eye out when designing your application for potential circular references.
So, I'm afraid that doesn't even slightly answer your question, but hopefully provides you with some resources to check out and a few hours of reading.
Good luck!

Organize Project Solution w/ Interfaces for Repository

VS 2010 / C#
Trying to organize a solution and looking for options for naming the project that will host the interfaces for the repository.
I have:
MyProject.Domain
MyProject.WebUI
MyProject.Repositories
MyProject.Interfaces??
So far "Interfaces" is the best name i've come up with, but I don't like it. Any ideas/suggestions?
It is not too uncommon to see repository interfaces placed in the same assembly as the domain objects, themselves. This is what Jeffrey Palermo discusses in his series on The Onion Architecture. Personally, I do the same.
As for the reasoning behind it, I believe it is completely logical to define what the repository does in relation to the domain objects. Consideration as to the behavior of the repository is as heavily weighted as the domain itself, in my opinion. Assume that you have one team or developer who works on the domain model and defines the repository interfaces after working with the domain expert. It is their/his/her role to make sure that the knowledge is transferred about how the domain is related to the repositories, but not necessarily the repositories, themselves.
In doing so, when handing this assembly off to anyone else on the team, the UoK (Unit of Knowledge, my own term) is constrained to the assembly. People writing the implementation of the repositories will then code against the knowledge transferred in that assembly. Since this UoK does not change based upon how the repository is implemented, the data access implementation logically goes into another assembly.
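Concretely, that split tends to look something like this (project names taken from your question; the Employee entity and its members are just an example):

    // MyProject.Domain - owns both the entity and the contract for persisting it.
    namespace MyProject.Domain
    {
        public class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface IEmployeeRepository
        {
            Employee GetById(int id);
            void Add(Employee employee);
        }
    }

    // MyProject.Repositories - references MyProject.Domain and supplies the implementation.
    namespace MyProject.Repositories
    {
        using MyProject.Domain;

        public class EmployeeRepository : IEmployeeRepository
        {
            public Employee GetById(int id)
            {
                // Entity Framework / ADO.NET code goes here.
                throw new System.NotImplementedException();
            }

            public void Add(Employee employee)
            {
                // Entity Framework / ADO.NET code goes here.
                throw new System.NotImplementedException();
            }
        }
    }

With this layout the separate MyProject.Interfaces project becomes unnecessary: the contracts simply live in MyProject.Domain.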
