I searched with Reflector and didn't manage to find a case where ValidationResult.MemberNames is supposed to contain more than one value.
So, first of all, I am wondering why MS had to make it an IEnumerable<string>; and now that they already did, can I rely on this property only ever returning one value?
Update
Concerning the DataAnnotations validation system, I find more sloppiness:
TryValidateProperty and TryValidateObject should have removed errors from the validationResults parameter if they no longer apply.
ValidationResult should have overridden Equals and GetHashCode.
Why is ValidationResult.ErrorMessage mutable!? I can't even build an EqualityComparer myself!
If the DataTypeAttribute is only used for representation concerns, why does it inherit from ValidationAttribute? That's just misleading; I had to struggle until I understood (after reflectoring) that it's not going to work. MS just didn't implement it (see the snippet after this list).
And the list goes on.
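To make that last point concrete, here is a minimal sketch (the Contact class is hypothetical) showing that a DataTypeAttribute on its own does not reject a bad value, because its IsValid simply returns true:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;

    class Contact
    {
        [DataType(DataType.EmailAddress)] // representation hint only; not enforced by the validator
        public string Email { get; set; }
    }

    class Program
    {
        static void Main()
        {
            var contact = new Contact { Email = "not an email" };
            var results = new List<ValidationResult>();
            bool valid = Validator.TryValidateObject(
                contact, new ValidationContext(contact, null, null), results, validateAllProperties: true);
            Console.WriteLine(valid); // True - no error is reported despite the bogus value
        }
    }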
Consider Password and PasswordConfirmation. Or any Start/Stop values, or any other cross-field validation.
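For illustration, a minimal sketch (the RegistrationModel class and its properties are hypothetical) of a single cross-field failure that legitimately names two members:

    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;

    public class RegistrationModel : IValidatableObject
    {
        public string Password { get; set; }
        public string PasswordConfirmation { get; set; }

        public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
        {
            if (Password != PasswordConfirmation)
            {
                // One error, two member names - both fields should be flagged in the UI.
                yield return new ValidationResult(
                    "The password and its confirmation do not match.",
                    new[] { "Password", "PasswordConfirmation" });
            }
        }
    }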
Related
I'm deleting an instance of an entity and, depending on the value of an option set on it, I wish to carry out a different course of action. The problem is that the field isn't changed and is therefore not provided in the plugin's Target.
How can I easily tell the stupid plugin to fetch all the fields?
The way I do it now is to use a pre-image, but I'll be showing the plugin to some rookies and they will definitely not like it. And they won't believe me that's the way to go, for sure, because they're a cocky bunch.
Is there a work-around for that?
Using the pre-image is the suggested way in this scenario; the alternative is to instantiate a service factory in order to get an IOrganizationService and retrieve the entity using the Target's Id.
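A rough sketch of that alternative inside Execute (assuming a Dynamics CRM plugin registered on Delete; the option set field name is made up, and retrieving all columns is assumed to be acceptable):

    using System;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Query;

    public class DeleteHandlingPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = factory.CreateOrganizationService(context.UserId);

            // On Delete, Target is an EntityReference rather than a full Entity.
            var target = (EntityReference)context.InputParameters["Target"];

            // Extra round trip to the server just to read the unchanged option set.
            Entity entity = service.Retrieve(target.LogicalName, target.Id, new ColumnSet(true));
            var option = entity.GetAttributeValue<OptionSetValue>("new_myoptionset"); // hypothetical field
            // ... branch on option.Value here ...
        }
    }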
The pre-image is part of the IPluginExecutionContext (of which Target is just one part). I think beginners get confused if they think of Target as anything more than a property of IPluginExecutionContext.
It wouldn't make sense to have these values as part of Target, because that would amount to an update of the field to its current value - if you forced it into Target, you would see the update in the audit details.
Thus CRM has PreEntityImages, Target, and PostEntityImages; if Target were used the way "they" want, it would not be possible to differentiate between values being updated, previous values, and the final state of the entity.
As far as I can see, the validation within Entity Framework is built entirely around the assumption that, if an item fails its validation, it must not be persisted to the database. Is there any mechanism, possibly running in parallel to normal validation, for making a constraint on a field produce a warning to the user, rather than an error which prevents the record from being saved/updated?
To be more specific, I have a situation where a particular numerical field has limits on it, but these are advisory rather than hard-and-fast. If the user enters a value outside these limits, they should get a warning, but should still be able to save the record.
In theory, I could subclass the ValidationResult class to make, say, a ValidationWarning class, then create a custom subclass of ValidationResults whose IsValid property was sensitive to the presence of ValidationWarning messages and ignored them when deciding whether the entity is valid. However, this requirement has arisen in a project which is already some way along in its development, and it would require a lot of refactoring to make this kind of custom subclassing work properly. I would prefer to find a mechanism which could be levered in without creating that much disruption/rework.
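For what it's worth, a minimal sketch of the subclassing idea described above (ValidationWarning and the helper class are hypothetical, not part of EF or DataAnnotations):

    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;
    using System.Linq;

    // Marker type so advisory messages can be told apart from hard errors.
    public class ValidationWarning : ValidationResult
    {
        public ValidationWarning(string message, IEnumerable<string> memberNames)
            : base(message, memberNames) { }
    }

    public static class AdvisoryValidation
    {
        // "Valid enough to save" means every result is merely a warning.
        public static bool IsValidIgnoringWarnings(IEnumerable<ValidationResult> results)
        {
            return results.All(r => r is ValidationWarning);
        }
    }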
I had a similar requirement on a project, and this is how I solved it: if (ModelState.IsValid) is false, I cleared the errors out of the ModelState and sent it on its way again, then logged the "error" to another service. This is a bit of a hack and I wouldn't recommend doing it, as it is not exactly best practice.
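Roughly, that hack looks like this in a controller action (a sketch assuming ASP.NET MVC; the view model and controller names are hypothetical, and the logging is reduced to a trace call):

    using System.Linq;
    using System.Web.Mvc;

    public class MeasurementViewModel { /* hypothetical model */ }

    public class MeasurementsController : Controller
    {
        [HttpPost]
        public ActionResult Edit(MeasurementViewModel model)
        {
            if (!ModelState.IsValid)
            {
                // Capture the messages before discarding them, then log them elsewhere.
                var warnings = ModelState.Values.SelectMany(v => v.Errors).Select(e => e.ErrorMessage);
                foreach (var warning in warnings)
                    System.Diagnostics.Trace.TraceWarning(warning);

                // Wipe the errors so the save goes ahead anyway.
                foreach (var state in ModelState.Values)
                    state.Errors.Clear();
            }

            // ... persist the model as usual ...
            return RedirectToAction("Index");
        }
    }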
I have been struggling to get fetched properties working correctly in my app and have been finding it extremely confusing - mainly due to this strange issue I have finally figured out!
Basically, if I change the predicate on a fetched property in my xcdatamodeld and then build and run, the app ignores the new predicate and continues to use the old one.
It's hard to describe how absolutely annoying and frustrating this is, but I'm sure I'm not the first to encounter it.
Any ideas on how I can force it to pick up the changes with each rebuild?
Ok, so according to Apple's Core Data versioning guidelines, two versions of a model are treated as being identical if:
For each entity the following attributes must be equal: name, parent, isAbstract, and properties. className, userInfo, and validation predicates are not compared.
For each property in each entity, the following attributes must be equal: name, isOptional, isTransient, isReadOnly, for attributes attributeType, and for relationships destinationEntity, minCount, maxCount, deleteRule, and inverseRelationship.
So it looks like changing a fetched property's predicate doesn't qualify as a 'change'... how wonderfully confusing.
You can force it to consider the model 'changed' by changing the value of the Core Data Model Identifier.
This question may have already been asked, sorry
I'm looking at the architecture for validating our model. Our simple validation can be achieved by using the property validation attributes (some custom) and using
ModelState.IsValid
However, the problem arises when validation requires access to the database or to another property. A perfect example is checking for duplicate names. In this case we need to check the database for duplicate names where the id is not equal to that of the current object (for updates).
If we were to write this as a validation attribute applied to the name property, this would cause two problems: one, how do we get access to the database, and two, how do we get access to the id property?
So, in conclusion: are there any examples of good ways to architect a fix for this problem?
I spent some time exploring this today for a project I was working on and came to these conclusions.
It is not too bad to solve the how; much of it involves some reflection and using the validation context to inspect and access other properties of your model, or using IValidatableObject. The real question becomes: is it okay to do validation that requires database interaction?
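As a sketch of "the how" (UniqueNameAttribute, INameRepository, and the Id/Name property convention are all assumptions for illustration, not a prescribed API):

    using System;
    using System.ComponentModel.DataAnnotations;

    public interface INameRepository
    {
        // True if another record with a different id already uses this name.
        bool NameExists(string name, int excludingId);
    }

    [AttributeUsage(AttributeTargets.Property)]
    public class UniqueNameAttribute : ValidationAttribute
    {
        protected override ValidationResult IsValid(object value, ValidationContext validationContext)
        {
            // Reach back to the rest of the model via reflection to find the Id.
            var idProperty = validationContext.ObjectType.GetProperty("Id");
            var id = (int)idProperty.GetValue(validationContext.ObjectInstance, null);

            // Resolve the data access dependency from the validation context, if one was registered.
            var repository = (INameRepository)validationContext.GetService(typeof(INameRepository));

            if (repository != null && repository.NameExists((string)value, id))
                return new ValidationResult("That name is already in use.",
                    new[] { validationContext.MemberName });

            return ValidationResult.Success;
        }
    }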
As for whether it's okay: for one, I was concerned about performance. In one particular case a validation made a query that returned an object to ensure it existed, an object which I later needed for relationship assignment, which would then cause another query.
Secondly, you need to think about database concurrency. The best way to do duplicate checks is during the insert, not before it, because the database could change between the two operations. This also relates to the first reason: an object could be deleted immediately after the database reported that it exists.
In my particular project I felt it was better to keep this sort of behavior in my EF context, and to add anything that went wrong to the ModelState.
I often see people validating domain objects by creating rule objects which take in a delegate to perform the validation, such as this example: http://www.codeproject.com/KB/cs/DelegateBusinessObjects.aspx
What I don't understand is: how is this advantageous compared to just making a method?
For example, in that particular article there is a method which creates delegates to check if the string is empty.
But is that not the same as simply having something like:
    bool Validate()
    {
        return !string.IsNullOrEmpty(name);
    }
Why go through the trouble of making an object to hold the rule, and defining the rule in a delegate, when these rules are context-sensitive and will likely not be shared? The exact same thing can be achieved with methods.
There are several reasons:
SRP - Single Responsibility Principle. An object should not be responsible for its own validation; it has its own responsibility and reasons to exist.
Additionally, when it comes to complex business rules, having them explicitly stated makes validation code easier to write and understand.
Business rules also tend to change quite a lot, more so than other domain objects, so separating them out helps with isolating the changes.
The example you have posted is too simple to benefit from a fully fledged validation object, but it becomes very handy once systems get large and validation rules become complex.
The obvious example here is a webapp: You fill in a form and click "submit". Some of your data is wrong. What happens?
Something throws an exception. Something (probably higher up) catches the exception and prints it (maybe you only catch UserInputInvalidExceptions, on the assumption that other exceptions should just be logged). You see the first thing that was wrong.
You write a validate() function. It says "no". What do you display to the user?
You write a validate() function which returns (or throws an exception with, or appends to) a list of messages. You display the messages... but wouldn't it be nice to group them by field? Or to display each one beside the field that was wrong? Do you use a list of tuples or a tuple of lists? How many lines do you want a rule to take up?
Encapsulating rules into an object lets you easily iterate over the rules and return the rules that were broken. You don't have to write boilerplate append-message-to-list code for every rule. You can stick broken rules next to the field that broke them.
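A minimal sketch of that idea (the Rule class and the sample rules are illustrative, not taken from the linked article):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Rule<T>
    {
        private readonly Func<T, bool> isSatisfied;

        public Rule(string field, string message, Func<T, bool> isSatisfied)
        {
            Field = field;
            Message = message;
            this.isSatisfied = isSatisfied;
        }

        public string Field { get; private set; }
        public string Message { get; private set; }

        public bool IsBroken(T target) { return !isSatisfied(target); }
    }

    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    public static class PersonRules
    {
        private static readonly List<Rule<Person>> rules = new List<Rule<Person>>
        {
            new Rule<Person>("Name", "Name is required", p => !string.IsNullOrEmpty(p.Name)),
            new Rule<Person>("Age", "Age must be greater than zero", p => p.Age > 0)
        };

        // One loop, no per-rule boilerplate: every broken rule comes back with the
        // field it belongs to, so the UI can show each message next to that field.
        public static IEnumerable<Rule<Person>> BrokenRules(Person person)
        {
            return rules.Where(r => r.IsBroken(person));
        }
    }

Because each broken rule carries both its field name and its message, the grouping and per-field display described above falls out of a single iteration rather than hand-written append-to-list code.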