Enterprise Library Validation Block - Should validation be placed on class or interface? - validation

I am not sure where the best place to put validation (using the Enterprise Library Validation Block) is. Should it be on the class or on the interface?
Things that may affect it:
Validation rules would not be changed in classes which implement the interface.
Validation rules would not be changed in classes which inherit from the class.
Inheritance will occur from the class in most cases - I suspect some fringe cases will inherit from the interface (but I would try to avoid that).
The interface's main use is for DI, which will be done with the Unity block.

The way you are trying to use the Validation Block with DI, I don't think it's a problem if you set the attributes at the interface level. Also, I don't think it should create problems in the inheritance chain. However, I have mostly seen this block used at the class level, with the intent of keeping interfaces from over-specifying things. IMO I don't see a big threat in doing this.

Be very careful here; your test is too simple.
This will not work as you expect for SelfValidation validators or class-level validators, only for simple property validators like the one you have there.
Also, if you are using the PropertyProxyValidator in an ASP.NET page, I don't believe it will work either, because it only looks at field validators, not inherited/implemented validators...
Yes, big holes in the VAB if you ask me.
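For reference, here is roughly what a self-validation rule looks like; it is declared on the concrete class, not the interface, which is why interface-level attributes don't cover it. This is only a minimal sketch reusing the Spike/ISpike names from the test below:

using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

// Self-validation is attached to the concrete class, so validating
// through ISpike will not pick this rule up.
[HasSelfValidation]
public class Spike : ISpike
{
    public string Name { get; set; }

    [SelfValidation]
    public void CheckName(ValidationResults results)
    {
        if (string.IsNullOrEmpty(Name) || Name.Length > 5)
        {
            results.AddResult(new ValidationResult(
                "Name must be between 1 and 5 characters.", this, "Name", null, null));
        }
    }
}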

For the sake of completeness I decided to write a small test to make sure it would work as expected, and it does. I'm just posting it here in case anyone else wants it in the future.
using System;
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            ISpike spike = new Spike();
            spike.Name = "A really long name that will fail.";
            ValidationResults r = Validation.Validate<ISpike>(spike);
            if (!r.IsValid)
            {
                throw new InvalidOperationException("Validation error found.");
            }
        }
    }

    public class Spike : ConsoleApplication1.ISpike
    {
        public string Name { get; set; }
    }

    interface ISpike
    {
        [StringLengthValidator(2, 5)]
        string Name { get; set; }
    }
}

What version of Enterprise Library are you using for your code example? I tried it using Enterprise Library 5.0, but it didn't work.
I tracked it down to the following section of code w/in the EL5.0 source code:
// namespace Microsoft.Practices.EnterpriseLibrary.Validation
// public static class Validation
public static ValidationResults Validate<T>(T target, ValidationSpecificationSource source)
{
    Type targetType = target != null ? target.GetType() : typeof(T);
    Validator validator = ValidationFactory.CreateValidator(targetType, source);
    return validator.Validate(target);
}
If the target object is defined, then target.GetType() will return the most specific class definition, NOT the interface definition.
My workaround is to replace your line:
ValidationResults r = Validation.Validate<ISpike>(spike);
With:
ValidationResults r = ValidationFactory.CreateValidator<ISpike>().Validate(spike);
This got it working for me.

Does Processing 3 have class declarations?

Our school project has us make a game using Processing 3. After some study of the language, our team is pretty confident we can work on the project, though we do have certain reservations about the chosen language.
However, there is one major question we are wondering about and could not find an answer to. In C++ and many other languages, when you create a new class in a new file you also create a header file you can include. Does Processing 3 have something similar? I know you can "include" files by adding more tabs, which is still weird but whatever. We would like to have some sort of declarations in advance so we can comment/describe classes and their methods, rather than force each member to go through lots of code to find the proper point.
In short, we want to be able to do something like this:
Example.pde
class Example {
    //Description
    Example();

    //Description
    void doSomething(int variable);
}

//Other team members don't have to worry past this,
//as they have already seen the public interface

Example::Example()
{
    //Constructor
}

void Example::doSomething(int variable)
{
    //Function
}
Or do we always have to do it like this:
Example.pde
class Example {
    //Description
    Example()
    {
        //Constructor
    }

    //Description
    void doSomething(int variable)
    {
        //Function
    }
}
Processing is written in Java, so you can only do things that Java supports. Java does not support header files, so no, you can't use header files in Processing.
However, it sounds like what you're really looking for is an interface.
interface Example {
    void doSomething(int variable);
}

class MyExample implements Example {
    public MyExample() {
        //Constructor
    }

    void doSomething(int variable) {
        //Function
    }
}
With this, you would only need to show other team members the interface, not the class. As long as they program to the interface, they don't need to ever see the class implementation.
More info on interfaces can be found in the Processing reference.

Is it possible to provide UI selectable options to custom msbuild tasks?

I have built a custom MSBuild task that I use to convert 3D models into the format I use in my engine. However, there are some optional behaviours that I would like to expose, for example allowing the user to choose whether to compute the tangent array, whether to reverse the winding order of the indices, etc.
In the actual UI where you select the Build Action for each file, is it possible to define custom fields that would then be fed to the input parameters of the task? Such as a "Compute Tangents" dropdown where you can choose True or False?
If that is possible, how? Are there any alternatives besides defining multiple tasks, i.e. ConvertModelTask, ConvertModelComputeTangentTask, ConvertModelReverseIndicesTask, etc.?
Everything in an MSBuild custom task has to have "settable properties" to drive behavior.
Option 1.
Define an enum-esque property to drive your behavior.
From memory, the MSBuild.ExtensionPack.tasks and MSBuild.ExtensionPack.Xml.XmlFile TaskAction="ReadElementText" does this type of thing.
The "TaskAction" is the enum-esque thing. I say "esque" because all you can do on the outside is set a string, and then in the code you convert the string to an internal enum.
See code here:
http://searchcode.com/codesearch/view/14325280
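A rough sketch of that pattern (hypothetical task and enum names, not the ExtensionPack code itself - see the link above for the real thing):

using System;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class ConvertModelTask : Task
{
    // Internal enum that actually drives the behavior.
    private enum ModelAction { Default, ComputeTangents, ReverseIndices }

    // From the outside (the .targets/.csproj file) this is just a string.
    [Required]
    public string TaskAction { get; set; }

    public override bool Execute()
    {
        ModelAction action;
        if (!Enum.TryParse(TaskAction, true, out action))
        {
            Log.LogError("Unknown TaskAction '{0}'.", TaskAction);
            return false;
        }

        Log.LogMessage("Converting model with action: {0}", action);
        // ... conversion logic switched on 'action' ...
        return !Log.HasLoggedErrors;
    }
}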
Option 2: You can still use OO on the tasks. Create an abstract BaseTask for the shared logic, then subclass it; the subclass is the MSBuild task that you actually call.
SvnExport does this. SvnClient is the base class, and it has several subclasses.
See code here:
https://github.com/loresoft/msbuildtasks/blob/master/Source/MSBuild.Community.Tasks/Subversion/SvnExport.cs
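A rough sketch of Option 2, again with hypothetical names:

using Microsoft.Build.Utilities;

// Shared conversion logic lives in the abstract base task.
public abstract class ConvertModelTaskBase : Task
{
    protected abstract bool ComputeTangents { get; }

    public override bool Execute()
    {
        Log.LogMessage("Converting model, ComputeTangents = {0}", ComputeTangents);
        // ... shared conversion logic ...
        return !Log.HasLoggedErrors;
    }
}

// Each concrete task is a thin subclass that fixes the options.
public class ConvertModelComputeTangentsTask : ConvertModelTaskBase
{
    protected override bool ComputeTangents { get { return true; } }
}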
You can probably dive deep with EnvDTE or UITypeEditor, but since you already have a custom task, why not keep it simple with a basic WinForm?
using Microsoft.Build.Utilities;

namespace ClassLibrary1
{
    public class Class1 : Task
    {
        public bool ComputeTangents { set { _computeTangents = value; } }
        private bool? _computeTangents;

        public override bool Execute()
        {
            // If ComputeTangents was not set in the project file, ask the user.
            // Form1 is a simple Windows Forms dialog with a CheckBox named checkBox1.
            if (!_computeTangents.HasValue)
                using (var form1 = new Form1())
                {
                    form1.ShowDialog();
                    _computeTangents = form1.checkBox1.Checked;
                }

            Log.LogMessage("Compute Tangents: {0}", _computeTangents.Value);
            return !Log.HasLoggedErrors;
        }
    }
}

protobuf-net concurrent performance issue in TakeLock

We're using protobuf-net for sending log messages between services. When profiling a stress test under high concurrency, we see very high CPU usage, and TakeLock in RuntimeTypeModel is the culprit. The hot call stack looks something like:
*Our code...*
ProtoBuf.Serializer.SerializeWithLengthPrefix(class System.IO.Stream,!!0,valuetype ProtoBuf.PrefixStyle)
ProtoBuf.Serializer.SerializeWithLengthPrefix(class System.IO.Stream,!!0,valuetype ProtoBuf.PrefixStyle,int32)
ProtoBuf.Meta.TypeModel.SerializeWithLengthPrefix(class System.IO.Stream,object,class System.Type,valuetype ProtoBuf.PrefixStyle,int32)
ProtoBuf.Meta.TypeModel.SerializeWithLengthPrefix(class System.IO.Stream,object,class System.Type,valuetype ProtoBuf.PrefixStyle,int32,class ProtoBuf.SerializationContext)
ProtoBuf.ProtoWriter.WriteObject(object,int32,class ProtoBuf.ProtoWriter,valuetype ProtoBuf.PrefixStyle,int32)
ProtoBuf.BclHelpers.WriteNetObject(object,class ProtoBuf.ProtoWriter,int32,valuetype ProtoBuf.BclHelpers/NetObjectOptions)
ProtoBuf.Meta.TypeModel.GetKey(class System.Type&)
ProtoBuf.Meta.RuntimeTypeModel.GetKey(class System.Type,bool,bool)
ProtoBuf.Meta.RuntimeTypeModel.FindOrAddAuto(class System.Type,bool,bool,bool)
ProtoBuf.Meta.RuntimeTypeModel.TakeLock(int32&)
[clr.dll]
I see that we can use the new precompiler to get a speed boost, but I'm wondering if that will get rid of the issue (it sounds like it doesn't use reflection); it would be a bit of work for me to integrate, so I haven't tested it yet. I also see the option to call Serializer.PrepareSerializer. My initial (small-scale) testing didn't make the prepare option seem promising.
A little more info about the type we're serializing:
[ProtoContract]
public class SomeMessage
{
    [ProtoMember(1)]
    public SomeEnumType SomeEnum { get; set; }

    [ProtoMember(2)]
    public long SomeId { get; set; }

    [ProtoMember(3)]
    public string SomeString { get; set; }

    [ProtoMember(4)]
    public DateTime SomeDate { get; set; }

    [ProtoMember(5, DynamicType = true, OverwriteList = true)]
    public Collection<object> SomeArguments { get; set; }
}
Thanks for your help!
UPDATE 9/17
Thanks for your response! We're going to try the workaround you suggest and see if that fixes things.
This code lives in our logging system, so in the SomeMessage example SomeString is really a format string (e.g. "Hello {0}") and the SomeArguments collection is the list of objects used to fill in the format string, just like String.Format. Before we serialize, we look at each argument and call DynamicSerializer.IsKnownType(argument.GetType()); if it isn't known, we convert it to a string first. I haven't looked at the ratios of the data, but I'm pretty sure we have a lot of different strings coming in as arguments.
Let me know if this helps. If you need, I'll try to get more details.
TakeLock is only used when the model is being changed, for example because it is seeing a type for the first time. You shouldn't normally see TakeLock after the first time a particular type has been used. In most cases, calling Serializer.PrepareSerializer<SomeMessage>() should perform all the necessary initialization (and similarly for any other contracts you are using).
However! I wonder if perhaps this is also related to your use of DynamicType; what are the actual objects being used here? It might be that I need to tweak the logic here, so that it doesn't spend any time on that step. If you let me know the actual objects (so I can repro), I will try to run some tests.
As for whether the precompiler would change this: yes, it would. A fully compiled static model has a completely different implementation of the ProtoBuf.Meta.TypeModel.GetKey method, so it would never call TakeLock (you don't need to protect a model that can never change!). But you can actually do something very similar without needing to use precompile. Consider the following, run as part of your app's initialization:
static readonly TypeModel serializer;
...
var model = TypeModel.Create();
model.Add(typeof(SomeMessage), true);
// TODO add other contracts you use here
serializer = model.Compile();
This will create a fully static-compiled serializer assembly in memory (instead of a mutable model with individual operations compiled). If you now use serializer.Serialize(...) instead of Serializer.Serialize (i.e. the instance method on your stored TypeModel rather than the static method on Serializer), it will essentially be doing something very similar to the precompiler, but without the need to actually precompile it (obviously this will only be available on "full" .NET). This will then never call TakeLock, as it is running a fixed model rather than a flexible model. It does, however, require you to know which contract types you use. You could use reflection to find these, by looking for all the types decorated with a given attribute:
static readonly TypeModel serializer;
...
var model = TypeModel.Create();
Type attributeType = typeof(ProtoContractAttribute);
foreach (var type in typeof(SomeMessage).Assembly.GetTypes())
{
    if (Attribute.IsDefined(type, attributeType))
    {
        model.Add(type, true);
    }
}
serializer = model.Compile();
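For example, a usage sketch against the stored serializer field (with a hypothetical stream and message, matching the SerializeWithLengthPrefix pattern from the question):

using (var stream = new MemoryStream())
{
    var message = new SomeMessage { SomeId = 123, SomeString = "Hello {0}" };

    // Instance method on the compiled TypeModel - no TakeLock is taken,
    // because the model can no longer change.
    serializer.SerializeWithLengthPrefix(stream, message, typeof(SomeMessage), PrefixStyle.Base128, 0);
}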
But emphasis: the above is a workaround; it sounds like there's a glitch, which I'll happily investigate if I can see an example where it actually happens; most importantly: what are the objects in SomeArguments?

MongoDB - override default Serializer for a C# primitive type

I'd like to change the representation of C# doubles to rounded Int64 values with a four-decimal-place shift in the MongoDB C# driver's serialization stack. In other words, store (Double)29.99 as (Int64)299900.
I'd like this to be transparent to my app. I've had a look at custom serializers but I don't want to override everything and then switch on the Type with fallback to the default, as that's a bit messy.
I can see that RegisterSerializer() won't let me add one for an existing type, and that BsonDefaultSerializationProvider has a static list of primitive serializers and it's marked as internal with private members so I can't easily subclass.
I can also see that it's possible to RepresentAs Int64 for Doubles, but this is a cast not a conversion. I need essentially a cast AND a conversion in both serialization directions.
I wish I could just give the default serializer a custom serializer to override one of its own, but that would mean a dirty hack.
Am I missing a really easy way?
You can definitely do this, you just have to get the timing right. When the driver starts up there are no serializers registered. When it needs a serializer, it looks it up in the dictionary where it keeps track of the serializers it knows about (i.e. the ones that have been registered). Only if it can't find one in the dictionary does it start figuring out where to get one (including calling the serialization providers), and if it finds one it registers it.
The limitation in RegisterSerializer is there so that you can't replace an existing serializer that has already been used. But that doesn't mean you can't register your own if you do it early enough.
However, keep in mind that registering a serializer is a global operation, so if you register a custom serializer for double it will be used for all doubles, which could lead to unexpected results!
Anyway, you could write the custom serializer something like this:
public class CustomDoubleSerializer : BsonBaseSerializer
{
    public override object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
    {
        // Read the shifted Int64 representation and convert it back to a double.
        var rep = bsonReader.ReadInt64();
        return rep / 10000.0;
    }

    public override void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
    {
        // Shift four decimal places and round, e.g. 29.99 -> 299900.
        var rep = (long)Math.Round((double)value * 10000);
        bsonWriter.WriteInt64(rep);
    }
}
And register it like this:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
You could test it using the following class:
public class C
{
    public int Id;
    public double X;
}
and this code:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());

var c = new C { Id = 1, X = 29.99 };
var json = c.ToJson();
Console.WriteLine(json);

var r = BsonSerializer.Deserialize<C>(json);
Console.WriteLine(r.X);
You can also use your own serialization provider to tell Mongo which serializer to use for certain types, which I ended up doing to mitigate some of the timing issues mentioned above when trying to override existing serializers. Here's an example of a serialization provider that overrides how decimals are serialized:
public class CustomSerializationProvider : IBsonSerializationProvider
{
    public IBsonSerializer GetSerializer(Type type)
    {
        if (type == typeof(decimal)) return new DecimalSerializer(BsonType.Decimal128);

        return null; // falls back to Mongo's defaults
    }
}
If you return null from your custom serialization provider, it will fall back to using Mongo's default serialization provider.
Once you've written your provider, you just need to register it:
BsonSerializer.RegisterSerializationProvider(new CustomSerializationProvider());
I looked through the latest iteration of the driver's code and checked whether there's some sort of backdoor to set custom serializers. I'm afraid there is none; you should open an issue in the project's bug tracker if you think this needs to be looked at for future iterations of the driver (https://jira.mongodb.org/).
Personally, I'd open a ticket -- and if a quick workaround is necessary or required, I'd subclass DoubleSerializer, implement the new behavior, and then use Reflection to inject it into either MongoDB.Bson.Serialization.Serializers.DoubleSerializer.__instance or MongoDB.Bson.Serialization.BsonDefaultSerializationProvider.__serializers.
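A very rough sketch of that reflection workaround (PatchedDoubleSerializer is a hypothetical name; "__instance" is the private field quoted above, so this is fragile and tied to that specific driver version):

// Hypothetical DoubleSerializer subclass with the same shift/round behavior
// as CustomDoubleSerializer above.
public class PatchedDoubleSerializer : DoubleSerializer
{
    public override object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
    {
        return bsonReader.ReadInt64() / 10000.0;
    }

    public override void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
    {
        bsonWriter.WriteInt64((long)Math.Round((double)value * 10000));
    }
}

// Swap the driver's cached DoubleSerializer singleton for the patched one
// (private implementation detail; may break on other driver versions).
var field = typeof(DoubleSerializer).GetField("__instance",
    BindingFlags.Static | BindingFlags.NonPublic);
field.SetValue(null, new PatchedDoubleSerializer());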

Getting DataContext error while saving form

I get this error when opening one specific form. The rest is working fine and I have no clue why this one isn't.
Error: An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported.
I get the error at _oDBConnection when I try to save. When I watch _oDBConnection while stepping through the code, it does not exist. Even when I open the main window it does not exist. So this form is where the DataContext is built for the very first time.
Every class inherits from clsBase, where the DataContext is built.
My colleague is the professional one who built it all. I am just expanding and using it (I learned it by doing it). But now I'm stuck and he is on holiday. So keep it simple :-)
What can it be?
clsPermanency
namespace Reservation
{
    class clsPermanency : clsBase
    {
        private tblPermanency _oPermanency;

        public tblPermanency PermanencyData
        {
            get { return _oPermanency; }
            set { _oPermanency = value; }
        }

        public clsPermanency()
            : base()
        {
            _oPermanency = new tblPermanency();
        }

        public clsPermanency(int iID)
            : this()
        {
            _oPermanency = (from oPermanencyData in _oDBConnection.tblPermanencies
                            where oPermanencyData.ID == iID
                            select oPermanencyData).First();

            if (_oPermanency == null)
                throw new Exception("Permanentie niet gevonden");
        }

        public void save()
        {
            if (_oPermanency.ID == 0)
            {
                _oDBConnection.tblPermanencies.InsertOnSubmit(_oPermanency);
            }
            _oDBConnection.SubmitChanges();
        }
    }
}
clsBase
public class clsBase
{
    protected DBReservationDataContext _oDBConnection;
    protected int _iID;

    public int ID
    {
        get { return _iID; }
    }

    public DBReservationDataContext DBConnection
    {
        get { return _oDBConnection; }
    }

    public clsBase()
    {
        _oDBConnection = new DBReservationDataContext();
    }
}
Not a direct answer, but this is really bad design, sorry.
Issues:
One context instance per class instance. Pretty incredible. How are you going to manage units of work and transactions? And what about memory consumption and performance?
Indirection: every entity instance (prefixed o) is wrapped in a cls class. What a hassle to make classes cooperate, if necessary, or to access their properties.
DRY: far from it. Does each clsBase derivative have the same methods as clsPermanency?
Constructors: you always have to call the base constructor. The constructor with int iID always causes a redundant new object to be created, which will certainly be a noticeable performance hit when dealing with larger numbers. A minor change in constructor logic may change the sequence of constructor invocations. (Nested and inherited constructors are always tricky.)
Exception handling: you need a try-catch everywhere classes are created. (By the way, First() will throw its own exception if the record is not there, so the null check never fires.)
Finally, not a real issue, but class and variable name prefixes are sooo 19xx.
What to do?
I don't think you can change your colleague's design in his absence. But I'd really talk to him about it in due time. Just study some linq-to-sql examples out there to pick up some regular patterns.
The exception indicates that somewhere between fetching the _oPermanency instance (in the constructor that takes an ID) and saving it, a new _oDBConnection is created. The code as shown does not reveal how this could happen, but I assume there is more code than this. When you debug and check GetHashCode() on the _oDBConnection instances, you should be able to find where it happens.
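A minimal way to apply that GetHashCode() suggestion (a sketch only; the Debug.WriteLine tracing is temporary and assumes the classes shown above):

public clsPermanency(int iID)
    : this()
{
    _oPermanency = (from oPermanencyData in _oDBConnection.tblPermanencies
                    where oPermanencyData.ID == iID
                    select oPermanencyData).First();

    // Trace which DataContext instance loaded the entity.
    System.Diagnostics.Debug.WriteLine("Loaded with context " + _oDBConnection.GetHashCode());
}

public void save()
{
    // If this hash code differs from the one logged at load time, a second
    // DBReservationDataContext was created in between - that instance mismatch
    // is what triggers the "another DataContext" exception.
    System.Diagnostics.Debug.WriteLine("Saving with context " + _oDBConnection.GetHashCode());

    if (_oPermanency.ID == 0)
    {
        _oDBConnection.tblPermanencies.InsertOnSubmit(_oPermanency);
    }
    _oDBConnection.SubmitChanges();
}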
