Should multiple service layer objects share a DAO?

I have a Contact class that contains a PortalAccount object. When I want to create a "Portal Account" for a contact, an account is created remotely on a portal application using SOAP/Axis, then the contact's portalAccount is populated and the contact is saved (the local database holds information about the remote account, such as the user ID and username).
So I have a service class PortalServiceImpl that has methods to actually create a user on a remote portal, given a Contact instance.
Given all of this, my question is: should the PortalServiceImpl get an instance of a ContactDAO object and actually do the saving, or should the PortalServiceImpl class just create the remote user, modify the passed-in Contact object, and let the client be responsible for saving?
Method 1:
class ServiceFacadeImpl {
    public void createPortalAccount(Contact contact) {
        // here the contact is implicitly saved
        this.portalService.createPortalAccount(contact);
    }
}
Method 2:
class ServiceFacadeImpl {
    public void createPortalAccount(Contact contact) {
        // here contact is implicitly modified
        this.portalService.createPortalAccount(contact);
        this.contactDAO.save(contact);
    }
}
Both methods feel wrong to me. Method 1 feels wrong because the PortalService is creating a remote user AND saving the contact to the database (albeit through a DAO interface). Method 2 feels wrong because I have to assume that the PortalService is modifying the Contact I'm passing to it.
I also have a feeling that I'm not seeing some other gotchas, like potentially not handling transactions consistently.
(BTW, I've already used both methods, and don't want to continue refactoring in an endless circle. Something just seems wrong here.)

Are you sure it's a good idea to have different contact IDs locally and remotely? It seems wrong to me, but maybe I just don't know your domain.
In my application, every new contact is sent through the web service to the remote portal and saved there. So when I save a new contact locally, it is also sent to the remote portal and saved there. Maybe you need the same?
If that approach is unacceptable for you, then I would do it like this:
class ServiceFacadeImpl {
    public void createPortalAccountAndSaveContact(Contact contact) {
        try {
            contact.portalAccount = this.portalService.createPortalAccount(contact);
            this.contactDAO.save(contact);
        } catch (Exception e) {
            // do cleanup; for example, do you need to delete the account from the
            // remote portal if the contact couldn't be saved locally?
            // If yes, delete it from the portal and set contact.portalAccount = null.
        }
    }
}
Some may say that createPortalAccountAndSaveContact breaks the single responsibility principle, but in my opinion it's perfectly reasonable in this situation because, as I understand it, you need this operation to be atomic. Right?
Alternatively, you could add a boolean flag to the method indicating whether the contact should be saved. But if you always need to save the contact together with its PortalAccount straight after getting it from the remote portal, then the flag isn't needed.
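If the transaction concern from the question is what worries you most, one option is to keep the local save inside a database transaction and compensate the remote call by hand, since the SOAP call can't take part in that transaction. Below is a minimal sketch only: it assumes createPortalAccount returns the new PortalAccount (as in the snippet above), and deletePortalAccount is a hypothetical compensation method, not something from your codebase.
class ServiceFacadeImpl {

    private final PortalService portalService;
    private final ContactDAO contactDAO;

    public ServiceFacadeImpl(PortalService portalService, ContactDAO contactDAO) {
        this.portalService = portalService;
        this.contactDAO = contactDAO;
    }

    public void createPortalAccountAndSaveContact(Contact contact) {
        // The remote SOAP call cannot join a local database transaction,
        // so it has to be compensated by hand if the local save fails.
        PortalAccount account = portalService.createPortalAccount(contact);
        contact.portalAccount = account;
        try {
            // Run the save inside whatever local transaction mechanism you use
            // (e.g. a declarative transaction boundary or a manual commit/rollback).
            contactDAO.save(contact);
        } catch (RuntimeException e) {
            portalService.deletePortalAccount(account); // hypothetical: undo the remote creation
            contact.portalAccount = null;
            throw e;
        }
    }
}
That keeps the facade as the one place that knows the two steps belong together, which is also the simplest way to keep the transaction handling consistent.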
PS: Why do you use the "this" keyword? Is portalService a private member? If so, maybe you should reconsider your naming convention and prefix private members with "_" (I think that's the most popular convention), like _portalService; then it's easy to see that _portalService is a private member. Sorry for the off-topic remark.
Good luck.

Related

Best way to return error description to user

Suppose I need to register a user in my system.
The business rules are:
email should be unique (a kind of identity);
name shouldn't be blank.
It looks like I need a Service for it.
Probably something like this:
public interface RegistrationService {
    boolean Register(String email, String name);
}
And it's fine until I have to return the failure reason to the user.
How do I deal with that?
I can see a few options (but I don't like any of them):
1. Implement a kind of result object:
public interface RegistrationService {
    RegistrationResult Register(String email, String name);
}
public interface RegistrationResult {
    boolean Success();
    Error[] Errors();
    User NewUser();
}
It's fine, and could even be useful, for example for a REST API.
But isn't it too cumbersome (especially considering that a blank name should probably be checked in a factory)?
2. Throw exceptions
public interface RegistrationService {
    void Register(String email, String name) throws RegistrationError;
}
It looks a bit cleaner. But exceptions are expensive, and using them like this looks like a bad idea.
3. Use a DB constraint. But that looks even messier than (2).
Let's start with point 3: DB constraints get the job done. Yes, the exception/error message is messy, I agree. But ask yourself: what is messier, a terrible error message shown to one user, or two user accounts with the same email address that can corrupt your system? The DB constraint should be your last safety net. Your service needs to check whether a user account with this email already exists. But what happens if, in another thread, somebody creates a user account with this email in the microsecond between your check and the creation of the new account? You'll be glad the DB constraint is there.
Yes, you could find a better solution, but that would require a singleton service that serializes all account creation and makes sure that no two threads can create a user account at the same time.
Point 2: Exceptions are for exceptional situations. Somebody trying to create a user account with an email that is already in use is an exceptional situation. Don't worry about the cost of exceptions in situations where somebody is trying to do something they shouldn't.
Point 1: I don't like this. But that's just my opinion. There are situations where a result object of this kind makes sense, but I try to keep it to a minimum.
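To make the "check first, constraint as the safety net" idea concrete, here is a minimal sketch. Every type in it (UserRepository, UniqueConstraintViolationException, DuplicateEmailException) is made up for illustration, and, in line with point 2, it reports a duplicate as an exception:
// Hypothetical collaborators, only sketched to make the flow concrete.
interface UserRepository {
    User findByEmail(String email);
    User insert(User user) throws UniqueConstraintViolationException;
}

class UniqueConstraintViolationException extends Exception { }

class DuplicateEmailException extends RuntimeException {
    DuplicateEmailException(String email) {
        super("email already registered: " + email);
    }
    DuplicateEmailException(String email, Throwable cause) {
        super("email already registered: " + email, cause);
    }
}

record User(String email, String name) { }

class RegistrationServiceImpl {

    private final UserRepository users;

    RegistrationServiceImpl(UserRepository users) {
        this.users = users;
    }

    User register(String email, String name) {
        if (name == null || name.isBlank()) {
            throw new IllegalArgumentException("name must not be blank");
        }
        // First line of defence: an explicit check gives a friendly error in the common case.
        if (users.findByEmail(email) != null) {
            throw new DuplicateEmailException(email);
        }
        try {
            return users.insert(new User(email, name));
        } catch (UniqueConstraintViolationException e) {
            // Second line of defence: another request slipped in between the check
            // and the insert; the unique constraint is the safety net.
            throw new DuplicateEmailException(email, e);
        }
    }
}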
Different layers can have different ways of considering and signalling a problem. Just because the Infrastructure raises an exception doesn't mean every other layer should do so.
You could have, for example (see the sketch after this list):
Infrastructure: throws a duplicate-key exception, because clients are not expected to threaten the DB's integrity
Application: catches the exception and returns a simple RegistrationResult.Failure value, because it's an expected failure case
Presentation: returns HTTP 409 Conflict
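A rough sketch of that layering, reusing the hypothetical DuplicateEmailException and RegistrationServiceImpl from the sketch above (RegistrationResult and the status-code mapping are likewise made up for illustration):
// Application layer: catch the expected failure and turn it into a plain value.
enum RegistrationResult { SUCCESS, DUPLICATE_EMAIL }

class RegistrationApplicationService {

    private final RegistrationServiceImpl registrationService;

    RegistrationApplicationService(RegistrationServiceImpl registrationService) {
        this.registrationService = registrationService;
    }

    RegistrationResult register(String email, String name) {
        try {
            registrationService.register(email, name);
            return RegistrationResult.SUCCESS;
        } catch (DuplicateEmailException e) {
            return RegistrationResult.DUPLICATE_EMAIL; // expected case, no exception escapes this layer
        }
    }
}

// Presentation layer: map the result onto an HTTP status code.
class RegistrationController {
    int toHttpStatus(RegistrationResult result) {
        return result == RegistrationResult.SUCCESS ? 201 : 409; // 409 Conflict for duplicates
    }
}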

Is it bad to use the ViewModelLocator to grab other VMs for use in another VM?

I am using MVVM Light and figured out that the ViewModelLocator can be used to grab any view model, and thus I can use it to grab values.
I've been doing something like this:
public class ViewModel1
{
    public ViewModel1()
    {
        var vm2 = new ViewModelLocator().ViewModel2;
        string name = vm2.Name;
    }
}
This way, if I need to go between views I can easily get other values. I am not sure if this is best practice, though (it seems so convenient that it makes me wonder if it's bad practice), as I know there is a Messenger class and I'm not sure whether that is what I should be using instead.
Edit
static ViewModelLocator()
{
    ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);
    SimpleIoc.Default.Register<ViewModel1>();
    SimpleIoc.Default.Register<ViewModel2>();
}

[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Performance",
    "CA1822:MarkMembersAsStatic",
    Justification = "This non-static member is needed for data binding purposes.")]
public ViewModel1 ViewModel1
{
    get
    {
        return ServiceLocator.Current.GetInstance<ViewModel1>();
    }
}
Edit
Here is a scenario that I am trying to solve.
I have a view where you add a price and a store name. When you click on the textbox for the store name, you are transferred to another view. That view has a textbox where you type the store you are looking for; as you type, a select list gets populated with all the possible matches and information about each store.
The user then chooses the store they want and is transferred back to the view where they "add the price", with the store name now filled in as well.
If they hit the "add" button, it takes the price, the store name, and the barcode (this came from the view BEFORE the "add price" view) and sends them to a server.
So as you can see, I need data from different views.
I'm trying to understand what your scenario is. In the MVVM Light forum, you added the following context to this question:
"I have some data that needs to be passed to multiple screens and possibly back again."
For passing data between VMs, I would also, like Matt above, use the Messenger class of MVVM Light as long as it is "fire and forget". But it is the "possibly back again" part that sounds tricky.
I can imagine some scenarios where this is needed, e.g. a wizard interface. In such a case I would model the data that the wizard is responsible for collecting and then bind all the views to the same VM, representing that model object.
But that's just one case.
So maybe if you could provide a little more context, I would be happy to try and help.
Yes, you can do this, inasmuch as the code will work, but there is a big potential issue you may run into in the future.
One of the strong arguments for using the MVVM pattern is that it makes it easier to write code that can be easily tested.
With your code above you can't test ViewModel1 without also having ViewModelLocator and ViewModel2. Maybe that's not too bad in and of itself, but you've set a precedent that this type of strong coupling between classes is acceptable. What happens, in the future, when you want to change how ViewModel2 is created or where its values come from?
From a testing perspective you would probably benefit from being able to inject your dependencies. This typically means passing the external objects or information you need to the constructor.
This could mean you have a constructor like this:
public ViewModel1(string vm2Name)
{
    string name = vm2Name;
}
that you call like this:
var vm1 = new ViewModel1(ViewModelLocator.ViewModel2.Name);
There are a few other issues you may want to consider as well.
You're also creating a new ViewModelLocator to access one of its properties. You probably already have an instance of the locator defined at the application level, so you're creating more work for yourself (and the processor) by newing up additional, unnecessary instances.
Do you really need a complete instance of ViewModel2 if all you need is the name? Avoid creating and passing more than you need to.
Update
If you capture the store in the first view/VM, then why not pass that (ID and/or Name) to the second VM from the second view? The second VM can then send that to the server along with the data captured in the second view.
Another approach may be to just use one viewmodel for both views. This may make your whole problem go away.
If you have properties in 1 view or view model that need to be accessed by a second (or additional) views or view models, I'd recommend creating a new class to store these shared properties and then injecting this class into each view model (or accessing it via the locator). See my answer here... Two views - one ViewModel
Here is some sample code, still using SimpleIoc:
public ViewModelLocator()
{
    ServiceLocator.SetLocatorProvider(() => SimpleIoc.Default);
    SimpleIoc.Default.Register<IMyClass, MyClass>();
}

public IMyClass MyClassInstance
{
    get { return ServiceLocator.Current.GetInstance<IMyClass>(); }
}
Here is a review of SimpleIOC - how to use MVVMLight SimpleIoc?
However, as I mentioned in my comments, I changed to use the Autofac container so that my supporting/shared classes could be injected into multiple view models. This way I did not need to instantiate the Locator to access the shared class. I believe this is a cleaner solution.
This is how I registered MyClass and the ViewModels with the Autofac container:
var builder = new ContainerBuilder();
var myClass = new MyClass();
builder.RegisterInstance(myClass);
builder.RegisterType<ViewModel1>();
builder.RegisterType<ViewModel2>();
_container = builder.Build();
ServiceLocator.SetLocatorProvider(() => new AutofacServiceLocator(_container));
Then each ViewModel (ViewModel1, ViewModel2) that requires an instance of MyClass simply adds it as a constructor parameter, as in the answer I linked initially.
MyClass will implement PropertyChanged as necessary for its properties.
OK, my shot at an answer to your original question first: yes, I think it is bad to access one VM from another VM, at least in the way it is done in the code example of this question, for the same reasons Matt is getting at: maintainability and testability. By "newing up" another ViewModelLocator in this way you hardcode a dependency into your view model.
So one way to avoid that is to consider Dependency Injection. This will make your dependencies explicit while keeping things testable. Another option is to use the Messenger class of MVVMLight that you also mention.
In order to write maintainable and testable code in the context of MVVM, ViewModels should be as loosely coupled as possible. This is where the Messenger of MVVM Light can help. Here's a quote from Laurent on what the Messenger class was intended for:
I use it where decoupled communication must take place. Typically I use it between VM and view, and between VM and VM. Strictly speaking you can use it in multiple places, but I always recommend people to be careful with it. It is a powerful tool, but because of the very loose coupling, it is easy to lose the overview on what you are doing. Use it where it makes sense, but don't replace all your events and commands with messages.
So, to answer the more specific scenario you mention, where one view pops up another "store selection" view and the latter must set the current store when returning to the first view, this is one way to do it (the "Messenger way"):
1) In the first view, use EventToCommand from MVVM Light on the TextBox to bind the desired event (e.g. GotFocus) to a command exposed by the view model, named, say, OpenStoreSelectorCommand.
2) The OpenStoreSelectorCommand uses the Messenger to send a message requesting that the store selector dialog be opened. The StoreSelectorView (the pop-up view) subscribes to this message (registers with the Messenger for that message type) and opens the dialog.
3) When the view closes with a new store selected, it uses the Messenger once again to publish a message that the current store has changed. The main view model subscribes to this message and can take whatever action it needs when it receives it, e.g. update a CurrentStore property, which is bound to a field on the main view.
You may argue that this is a lot of messaging back and forth, but it keeps the view models and views decoupled and does not require a lot of code.
That's one approach. That may be "old style" as Matt is hinting, but it will work, and is better than creating hard dependencies.
A service-based approach:
For a more service-based approach, take a look at this recent MSDN Magazine article, written by the inventor of MVVM Light. It gives code examples of both approaches: the Messenger approach and a DialogService-based approach. It does not, however, go into detail on how you get values back from a dialog window.
That issue is tackled, without relying on the Messenger, in this article. Note the IModalDialogService interface:
public interface IModalDialogService
{
    void ShowDialog<TViewModel>(IModalWindow view, TViewModel viewModel, Action<TViewModel> onDialogClose);
    void ShowDialog<TDialogViewModel>(IModalWindow view, TDialogViewModel viewModel);
}
The first overload has an Action delegate parameter that is attached as the event handler for the Close event of the dialog. The TViewModel parameter of the delegate is set as the DataContext of the dialog window. The end result is that the view model that caused the dialog to be shown initially can access the view model of the (updated) dialog when the dialog closes.
I hope that helps you further!

ASP.Net MVC 3: Custom data binder to transform data coming from/going to the DB

I am working on an ASP.Net MVC 3 project where I would like to encrypt all emails stored in a database, for additional protection in case a hacker ever gets access to the DB, and I was wondering about the best way to achieve this.
I read a bit about custom model binders, but that is for the binding between the controller and the view. I am not sure it is what I want, since I may need access to unencrypted email addresses in the code (in the service layer, where I have the business rules). So I would prefer the encryption/decryption to occur automatically when the model is saved to or loaded from the database, and this is what I don't know how to do.
We can imagine that I have this POCO model:
public partial class Contact
{
    public virtual int ContactId { get; set; }
    public virtual string Name { get; set; }
    public virtual string Email { get; set; }
}
What I need is a way to have the Email property encrypted when it is persisted to the database and decrypted when it is loaded from the database.
One way to do it would be to have an extra UnencryptedEmail property in my Contact model that would have a getter and a setter that would decrypt and encrypt the Email property, but I find that having to add an extra property is not as clean a solution.
If, for some reason, using a custom IModelBinder is the way to go, please let me know why and tell me how to get it to be applied only on the Email property of the Contact model. Up to now, I have only seen implementations for applying transformations on all properties of a specific data type.
Consider using a view model approach instead of directly binding to your domain models and displaying them in the views.
As for encryption and decryption there are tons of approaches you can employ.
I can see what you are looking for. Instead of answering and explaining the whole thing, I can point you to related material; it is not exactly what you require, but you can take a cue from it.
http://codefirstmembership.codeplex.com/
In the above code-first membership provider, the passwords are hashed before being stored in the database, and for comparison the supplied password is hashed again and the hashes are compared.
I understand it will be time-consuming, but it's worth taking a look at.
I don't think the model binder is the right way to go. The encryption of an email sounds like a business requirement and as such I would place it in the business layer.
When storing the email, your business layer would get the plain email address as input from the application layer, encrypt it and pass the encrypted value to the repository.
When retrieving the email, your business layer would receive the email in an encrypted state from the repository, decrypt it and pass it back to the application layer.
Unless you require it, the application layer would not need to know about the encrypted version of the email as it only deals with the plain version of it. On the other end the repository would not need to know about the decrypted version of the email as it only needs to deal with the encrypted version of it. To that end the business layer does sound like the best place to handle this.
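To make that split concrete, here is a minimal sketch, written in Java purely for brevity; the same shape applies in the ASP.NET service layer, and every name in it (EmailCipher, ContactRepository, ContactService) is made up for illustration:
// The business layer owns the encryption rule; the repository only ever sees ciphertext.
class Contact {
    int contactId;
    String name;
    String email;
}

interface EmailCipher {
    String encrypt(String plainEmail);
    String decrypt(String cipherText);
}

interface ContactRepository {
    void save(Contact contact);       // persists whatever email value it is given
    Contact findById(int contactId);
}

class ContactService {

    private final ContactRepository repository;
    private final EmailCipher cipher;

    ContactService(ContactRepository repository, EmailCipher cipher) {
        this.repository = repository;
        this.cipher = cipher;
    }

    void save(Contact contact) {
        // Work on a copy so the caller keeps the plain-text email it passed in.
        Contact toStore = new Contact();
        toStore.contactId = contact.contactId;
        toStore.name = contact.name;
        toStore.email = cipher.encrypt(contact.email); // encrypt on the way down
        repository.save(toStore);
    }

    Contact get(int contactId) {
        Contact contact = repository.findById(contactId);
        contact.email = cipher.decrypt(contact.email); // decrypt on the way up
        return contact;
    }
}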

Where does the query language sit within the MVC pattern?

I'd assume that since the query language typically sits within the controller, it belongs to that component; but if I play devil's advocate, I'd argue that the query language is executed within the domain of the model and is tightly coupled to that component, so it might also be a part of it.
Does anyone know the answer? Is there a straight answer, or is it technology-specific?
Both are legitimate ways to implement it. The question is what you need to expose of your application to its users, and how. In Patterns of Enterprise Application Architecture (I notice again how much I like quoting that book ;)) they offer two ways to implement the Domain Model:
Implement all application logic (and therefore the query language) in the Model. That makes your domain model very use-case specific (you can't reuse it that easily, since it already contains some application logic and a dependency on the backend storage being used), but it might be more appropriate for complex domain logic.
Put the application logic in another layer (which can be the Service Layer used by the controller in the MVC pattern, or the controller directly). That usually makes your model objects plain data containers, and you don't need to expose the whole (probably complex) domain model structure to your users (you can make the interface as simple as possible).
As a code example:
// This can also be your controller
class UserService
{
    void save(User user)
    {
        // md5() stands in for whatever hashing function is used
        user.password = md5(user.password);
        // Do the save query, e.g. "INSERT INTO users ..." or some ORM call
    }
}

class User
{
    String username;
    String password;

    // The following methods might be added if you put your application logic
    // in the domain model.
    void setPassword(String password)
    {
        // Creates a dependency on the hashing algorithm used
        this.password = md5(password);
    }

    void save()
    {
        // This creates a dependency on your backend, even though the user object
        // doesn't need to know which backend it gets saved to.
        // INSERT INTO users ...
    }
}

Encapsulation Aggregation / Composition

The Wikipedia article about encapsulation states:
"Encapsulation also protects the integrity of the component, by preventing users from setting the internal data of the component into an invalid or inconsistent state"
I started a discussion about encapsulation on a forum, in which I asked whether you should always clone objects inside setters and/or getters so as to preserve the above rule of encapsulation. I figured that, if you want to make sure the objects inside a main object aren't tampered with outside the main object, you should always clone them.
One discussant argued that you should make a distinction between aggregation and composition in this matter. Basically, what I think he meant is this:
If you want to return an object that is part of a composition (for instance, a Point of a Rectangle), clone it.
If you want to return an object that is part of aggregation (for instance, a User as part of a UserManager), just return it without breaking the reference.
That made sense to me too. But now I'm a bit confused, and I would like to have your opinions on the matter.
Strictly speaking, does encapsulation always mandate cloning?
PS.: I program in PHP, where resource management might be a little more relevant, since it's a scripted language.
Strictly speaking, does encapsulation always mandate cloning?
No, it does not.
The person you mention is probably confusing the protection of the state of an object with the protection of the implementation details of an object.
Remember this: Encapsulation is a technique to increase the flexibility of our code. A well encapsulated class can change its implementation without impacting its clients. This is the essence of encapsulation.
Suppose the following class:
class PayRoll {
    private List<Employee> employees = new ArrayList<>();

    public void addEmployee(Employee employee) {
        this.employees.add(employee);
    }

    public List<Employee> getEmployees() {
        return this.employees;
    }
}
Now, this class has low encapsulation. You could say the method getEmployees breaks encapsulation, because by returning the List you can no longer change that implementation detail without affecting the clients of the class. I could not change it, for instance, to a Map collection without potentially affecting client code.
By cloning the state of your object, you are potentially changing the behavior clients expect. That is a harmful way to interpret encapsulation.
public List<Employee> getEmployees() {
    return new ArrayList<>(this.employees); // return a copy ("clone") of the internal list
}
One could say the code above improves encapsulation in the sense that addEmployee is now the only place where the internal List can be modified. So if I make a design decision to add new Employee items at the head of the List instead of at the tail, I can do this modification:
public void addEmployee(Employee employee) {
    this.employees.add(0, employee); // note: the new employee is inserted at the head instead of appended
}
However, that is a small increase in encapsulation for a big price. Your clients get the impression of having access to the employees when in fact they only have a copy. So if I wanted to update the telephone number of employee John Doe, I could mistakenly modify the returned Employee object expecting the change to be reflected in the next call to PayRoll.getEmployees.
An implementation with higher encapsulation would do something like this:
class PayRoll {
    private List<Employee> employees = new ArrayList<>();

    public void addEmployee(Employee employee) {
        this.employees.add(employee);
    }

    public Employee getEmployee(int position) {
        return this.employees.get(position);
    }

    public int count() {
        return this.employees.size();
    }
}
Now, if I want to change the List to a Map, I can do so freely.
Furthermore, I am not breaking the behavior clients are probably expecting: when they modify an Employee object obtained from the PayRoll, those modifications are not lost.
I do not want to extend myself too much, but let me know if this is clear or not. I'd be happy to go on to a more detailed example.
No, encapsulation simply mandates the ability to control state by creating a single access point to that state.
For example, if you had a field in a class that you wanted to encapsulate, you could create a public method that would be the single access point for getting the value that field contains. Encapsulation is simply this process of creating a single access point around that field.
If you wish to change how that field's value is returned (cloning, etc.) you are free to do so since you know that you control the single avenue to that field.
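As a small illustration of that single access point, here is a made-up Meeting class (java.util.Date is used only because it is a mutable type worth protecting):
import java.util.Date;

class Meeting {

    private final Date start; // the encapsulated state

    Meeting(Date start) {
        this.start = new Date(start.getTime()); // defensive copy on the way in
    }

    // The single access point for reads. Because callers can only reach the field
    // through this method, it is free to return the raw reference today and a
    // defensive copy (a "clone") tomorrow without breaking any client code.
    public Date getStart() {
        return new Date(start.getTime()); // defensive copy on the way out
    }
}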
