Event versioning in CQRS - events

We are at a point in our development cycle (an ASP.NET MVC application) where we need to introduce changes to our existing commands and events (say, adding/removing a few properties, etc.).
I have been trying to find a way to introduce command/event versioning into the system. I have read many posts on Google/Stack Overflow etc., but I have yet to see an example of code that implements it. Is there a recommended pattern one should follow when versioning? If so, are there any examples/snippets?
Edit: This is how far I have gotten with this.
I have versioned my events backwards, such that the latest will always keep the same name, while the ones that become obsolete get a suffix like '_V1', '_V2', etc.
So if I have an event
public class OrderSubmittedEvent : IDomainEvent
{
    public int OrderId { get; private set; }

    public OrderSubmittedEvent(int orderId)
    {
        OrderId = orderId;
    }
}
and I have to add a few properties, I rename my event above to
public class OrderSubmittedEvent_V1 : IDomainEvent
{
    public int OrderId { get; private set; }

    public OrderSubmittedEvent_V1(int orderId)
    {
        OrderId = orderId;
    }
}
and introduce another event with the same name as my original event but with added properties, like so
public class OrderSubmittedEvent : IDomainEvent
{
    public int OrderId { get; private set; }

    public OrderSubmittedEvent(int version = 1, int orderId = 0,
        string customerName = "Joe blogs", string address = "Earth")
    {
        OrderId = orderId;
        CustomerName = customerName;
        Address = address;
        CurrentVersion = version;
    }

    public static int LatestVersion
    {
        get { return 2; }
    }

    public int CurrentVersion { get; set; }
    public string CustomerName { get; set; }
    public string Address { get; set; }
}
I still have to go ahead and change the code which publishes this event to include values for the new properties.
At any given point in time, when I get all my events from the event store (say, for replaying), they will always be of the same type after deserialization (in this case OrderSubmittedEvent), with any new properties that were not part of the old events populated with their default values.
At the time of replaying my events, I make them go through an IEventUpgrader.
This first verifies whether the event is the latest version available. Since the type will always be the same event type, this check is based on the "LatestVersion" and "CurrentVersion" properties.
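A rough sketch of what such an upgrader might look like, based on the description above (the IEventUpgrader shape and the Upgrade method name are my assumptions; IDomainEvent, CurrentVersion and LatestVersion come from the code above):
public interface IEventUpgrader
{
    IDomainEvent Upgrade(IDomainEvent @event);
}

public class OrderSubmittedEventUpgrader : IEventUpgrader
{
    public IDomainEvent Upgrade(IDomainEvent @event)
    {
        var e = @event as OrderSubmittedEvent;
        if (e == null || e.CurrentVersion == OrderSubmittedEvent.LatestVersion)
            return @event; // not this event type, or already up to date

        // An old event: properties added in later versions still hold their
        // defaults, so fill in anything that can be derived, then stamp the
        // event as current.
        e.CurrentVersion = OrderSubmittedEvent.LatestVersion;
        return e;
    }
}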
What does everyone think of this approach?
Next to do:
If the event is an old version, publish an 'UpdateMYEVENT' event.
Thanks.

Usually you only need to version the events; you can ignore the commands, since you don't store them in the event store.
There are a few ways to implement versioning. My method is quite simple:
[Obsolete]
public class CompanyCreated
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class CompanyCreated_V2
{
    public Guid Id { get; set; }
    public string CompanyName { get; set; }
    public string TaxNumber { get; set; }
}
You need to handle conversion of events from the old shape to the new one as you read them from the event store (see the sketch below).
Also, be aware that you should never remove any old event classes; that is why I decorate them with [Obsolete], to let other developers know not to use the event.
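For illustration, a minimal sketch of that conversion, applied as each event is read from the store (the Upconvert helper is a name I've made up; the mapping of Name to CompanyName follows the classes above):
#pragma warning disable 612 // we reference the [Obsolete] event on purpose
public static object Upconvert(object @event)
{
    var v1 = @event as CompanyCreated;
    if (v1 == null)
        return @event; // already the latest shape

    return new CompanyCreated_V2
    {
        Id = v1.Id,
        CompanyName = v1.Name,
        TaxNumber = null // introduced in V2; old events carry no value
    };
}
#pragma warning restore 612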

If you are only adding and removing properties, there might be no need to version events: just ignore serialized properties that have been removed, and use sensible defaults for the ones you add.
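To see how that plays out with a serializer such as Json.NET (a sketch; the event shape and the removed LegacyFlag property are made up for illustration): unknown members in old payloads are ignored by default, and properties added since then keep whatever defaults the constructor sets.
public class OrderSubmittedEvent
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; } // added after old events were stored

    public OrderSubmittedEvent()
    {
        CustomerName = "unknown"; // sensible default for old payloads
    }
}

public static class Demo
{
    public static void Main()
    {
        // An old payload: contains the since-removed LegacyFlag, lacks CustomerName.
        string json = "{\"OrderId\":42,\"LegacyFlag\":true}";
        var evt = Newtonsoft.Json.JsonConvert.DeserializeObject<OrderSubmittedEvent>(json);
        // evt.OrderId == 42; evt.CustomerName == "unknown"; LegacyFlag is ignored.
    }
}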

I would be cautious about mixing events and commands. They have different purposes and solve different problems.
To give a better feel for what I mean, think of it like this:
Commands are more like a RESTful API, i.e. client-server communication.
Event sourcing, by contrast, is more a way to store the data.
Both need versioning as a way to provide backward compatibility through immutability, but once again for different reasons; hence the implementations and the exceptions are different.
I would definitely recommend the book Event Versioning by Greg Young to get more insight into versioning for event-sourced systems.
For more information on the commanding side, check out the CQRS series and particularly CQRS via HTTP.

Admittedly I have not had the opportunity to try the following, but I would like to bake versioning in from day one:
Since the full type name is relevant, I would go for namespaces.
namespace Primary.Messages.V1
{
    public class CompanyCreated
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }
}

namespace Primary.Messages.V2
{
    public class CompanyCreated
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string TaxNumber { get; set; }
    }
}
These could be in different assemblies, and you could mark the older ones as obsolete (as suggested by Sarmaad). It may be that older versions are not necessarily obsolete, though.
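If the store records the full type name alongside each payload, picking the right version at read time might look like this (a sketch; the assembly name, the payload, and the use of Json.NET are assumptions):
// Read back whichever version the event was written as.
string storedTypeName = "Primary.Messages.V1.CompanyCreated, Primary.Messages"; // from the store
string jsonPayload = "{\"Name\":\"Acme\"}";                                      // from the store

Type eventType = Type.GetType(storedTypeName); // needs the assembly-qualified name
object @event = Newtonsoft.Json.JsonConvert.DeserializeObject(jsonPayload, eventType);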
Any ideas?

I am struggling to see why one would need event versioning the way it has been asked in the question, and more specifically the way it has been suggested in the answers.
I can think of only two use cases:
1- The event class currently being used is deprecated and no longer needed.
That class can then be tracked down in the git history anytime it is needed. So why complicate the active code by keeping dead classes around?
2- The business requirement has changed, and now you need to keep the base event but also need another, similar event with some parameter differences.
That can be solved in a number of ways; the decorator pattern, for example, can handle such variations to a great extent (see the sketch below).
Alternatively, the new event might represent a unique domain concept; instead of trying to force the concept into the existing model, it might be better to name it more semantically and use it that way.
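One way to read the decorator suggestion, sketched against the OrderSubmittedEvent from the question (the wrapper class name is hypothetical):
// Wraps the existing event and adds the new detail, rather than
// introducing an OrderSubmittedEvent_V2.
public class OrderSubmittedWithAddress : IDomainEvent
{
    private readonly OrderSubmittedEvent _inner;

    public OrderSubmittedWithAddress(OrderSubmittedEvent inner, string address)
    {
        _inner = inner;
        Address = address;
    }

    public int OrderId { get { return _inner.OrderId; } }
    public string Address { get; private set; }
}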

Related

Xamarin: why is the BusinessEntityBase class needed?

I want to learn Xamarin, so I took a look at a sample project called 'Tasky'.
But I don't understand why there's a BusinessEntityBase class...
A Task also needs its ID to be a primary key and auto-incremented, so why doesn't it inherit from the BusinessEntityBase class instead of implementing the IBusinessEntity interface?
public class Task : IBusinessEntity
{
    public Task()
    {
    }

    [PrimaryKey, AutoIncrement]
    public int ID { get; set; }

    public string Name { get; set; }
    public string Notes { get; set; }

    // new property
    public bool Done { get; set; }
}

public abstract class BusinessEntityBase : IBusinessEntity
{
    public BusinessEntityBase()
    {
    }

    /// <summary>
    /// Gets or sets the Database ID.
    /// </summary>
    [PrimaryKey, AutoIncrement]
    public int ID { get; set; }
}

public interface IBusinessEntity
{
    int ID { get; set; }
}
The IBusinessEntity interface is just that: an interface, whose properties (in this case, ID) would be part of every business entity. Read up on the use of interfaces in C# to get a better understanding of why this is done.
Another example would be:
Let's say you have an employee management application which contains three different kinds of users: Manager, Developer and Tester. You have a class for each of these.
It is very likely that all three of them contain an ID field, a first name and a last name.
Instead of adding the same properties to each of their classes, you create an interface called IEmployee with three properties (ID, FirstName and LastName) and have each of the three classes implement it.
Functionally, implementing the properties through an interface and adding them manually has the same effect on the class. However, with an interface connecting all three of them, you now have a more abstract way to access your data (for example, to count the number of employees, you could count IEmployee objects rather than counting all three types separately and adding the numbers up).
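A minimal sketch of that scenario (IEmployee and the three classes are the hypothetical names from the example above):
public interface IEmployee
{
    int ID { get; set; }
    string FirstName { get; set; }
    string LastName { get; set; }
}

public class Manager : IEmployee
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Developer and Tester implement IEmployee the same way, and all three
// can then be handled through the shared interface, e.g.:
// int employeeCount = people.OfType<IEmployee>().Count();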
TL;DR: Doing it this way is not mandatory. In this scenario you could simply have a BusinessEntity class that has an ID field. It is simply good practice, and it makes your applications easier (or even possible) to maintain as they grow.

EF, POCO, DB First... how to do Business Logic in Property "set"?

OK, so I've been building my first large(ish) EF 4.1 POCO + MVC application. It's a replacement for a legacy system, so I'm using an existing database.
I've generated my POCO classes using DbContext T4 generation. I've got some really nice forms going on and some really nice validation happening, with a lot of sexy generics in my MVC classes to cut down on boilerplate code... All's good.
Suddenly I realized that the most sensible thing (to me) would be for some of my business logic to live in the "set" of some of the properties of my POCO objects.
E.g., suppose the following class was generated by the T4:
public partial class SalesOrderLine
{
    public int ID { get; set; }
    public int SalesOrderID { get; set; }
    public int ProductID { get; set; }
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
    public decimal ExtendedPrice { get; set; }

    public virtual Product Product { get; set; }
    public virtual SalesOrder SalesOrder { get; set; }
}
Ignore for a moment the obvious argument that the calculated field "ExtendedPrice" shouldn't even be stored in the database, and just come along with me for the ride...
...then, it seems to me, logically, if this object is really supposed to represent a Sales Order Line, that I should be able to construct my object such that the following unit test will work:
SalesOrderLine sol = new SalesOrderLine();
sol.UnitPrice = 100;
sol.Quantity = 5;
Assert.AreEqual(500, sol.ExtendedPrice);
...obviously I can't do that as long as I want the base POCO to be generated by the T4. It seems to me I have several options:
Set the generated code file's properties "do not compile", copy and paste the generated code into another file and modify the "set" to do the business logic of setting the extended price when the UnitPrice or Quantity is set. The downside here is that the logic will be run whenever an object is loaded from the database (since the EF will set the public properties and not my private fields). Additionally, this object will then need to be maintained manually for the rest of the life of the project when database changes occur.
Create an UpdateTotals function that gets called in the Validate routine that I have for my object, which gets called by the SaveChanges() on the DbContext. Obviously, the above Unit Test above would not work in that case. The system, and my integration tests however would work and would only call the code when a change was done to the object.
Decide that I'm asking the wrong question, and that I should really add methods to the object called "SetPrice" and "SetQuantity", and then qualify the set accessors of the UnitPrice and Quantity to be "internal". The downside here is that MVC will try and update the model from the form and won't be able to set those properties.
Some solution that involves downloading two or three more frameworks that create even more levels of abstraction than I already have... A repository pattern, or "use NHibernate" or something like that... You can suggest this, but I'm growing weary of how much work it is to set things up the "academically correct" way. For this project, I'd rather meet halfway on the long-term-maintainability vs. speed-of-development spectrum and not over-complicate my project with a ton of extra tools and DLLs... ...but I'll try and keep an open mind :)
--- EDIT: another idea ---
[5.] Another thought: since the fields are always simply calculated, there should really be no need to ever set them, either from the database or otherwise. Therefore, something like this might work:
public decimal ExtendedAmount
{
    get { return UnitPrice * Quantity; }
    internal set { }
}
...my thought is that the EF instantiation would attempt to call the "set", but the set would do nothing; then, when the object was saved or checked for changes, it would call the "get", which would return the calculated value, and that value would get stored in the DB. The only downside here is if you were trying to use the object model to validate the database and the database had an incorrect value stored in the ExtendedAmount field. It's a little hokey, I know, but I thought it would be an interesting trick... in fact the "set" could perhaps throw an exception if (value != UnitPrice * Quantity).
--- END EDIT ---
I'm curious to hear what others have done in these kinds of cases, as I'm sure it's common. It seems like a lot of the tutorials take you as far as "generating POCO classes from the database" and then leave the rest of the project development up to you.
Cheers,
Chris
A couple of ideas:
Why not use Code First? That way you can put business logic (e.g., calculated properties) right in your entity class.
Example
public partial class SalesOrderLine
{
    public int ID { get; set; }
    public int SalesOrderID { get; set; }
    public int ProductID { get; set; }

    private decimal _unitPrice;
    public decimal UnitPrice
    {
        get { return _unitPrice; }
        set
        {
            if (value == _unitPrice) return;
            _unitPrice = value;
            CalculateExtendedPrice();
        }
    }

    private decimal _quantity;
    public decimal Quantity
    {
        get { return _quantity; }
        set
        {
            if (value == _quantity) return;
            _quantity = value;
            CalculateExtendedPrice();
        }
    }

    public decimal ExtendedPrice { get; set; }

    public virtual Product Product { get; set; }
    public virtual SalesOrder SalesOrder { get; set; }

    private void CalculateExtendedPrice()
    {
        ExtendedPrice = UnitPrice * Quantity;
    }
}
If Code First is not an option, what about making your entity a partial class (if it is not already) and putting your business logic in a separate code file (but with the same class name)? This way, your main code file will get overwritten when you generate, but your secondary code file will remain. This is the usual way to deal with generated code; a sketch follows below.
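A minimal sketch of that split (the file names are illustrative; note that a partial class can only add members, it cannot replace the generated property setters, so the calculation lives in a helper method here):
// SalesOrderLine.cs -- generated by the T4 template; overwritten on regenerate
public partial class SalesOrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
    public decimal ExtendedPrice { get; set; }
}

// SalesOrderLine.Logic.cs -- hand-written; survives regeneration
public partial class SalesOrderLine
{
    public void UpdateTotals()
    {
        ExtendedPrice = UnitPrice * Quantity;
    }
}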

Serializing EF4.1 Entities using JSON.Net

I am building an application using MVC3, Razor view engine, Repository Pattern with Unit of Work and using EF4.1 Code First to define my data model.
Here is a bit of background (gloss over it if you want).
The application itself is just an Intranet 'Menu'.
The 2 main entities are MenuItem and Department, where:
MenuItem can have many Departments
Departments can have many MenuItems
MenuItem may have a MenuItem as a parent
This is how I have defined my Entities
public class MenuItem
{
    public int MenuItemId { get; set; }
    public string Name { get; set; }
    public string Url { get; set; }

    public virtual ICollection<Department> Departments { get; set; }

    public int? ParentId { get; set; }
    public virtual MenuItem ParentMenuItem { get; set; }
}

public class Department
{
    public int DepartmentId { get; set; }
    public string Name { get; set; }

    public virtual ICollection<MenuItem> MenuItems { get; set; }
}
I am using the Fluent API to define the self-referencing many-to-many relationship for the MenuItem.
The issue I am having is passing a MenuItem to the view via JSON.
The central issues are that I have a circular reference between my entities that the built-in JSON parser can't deal with, and that I still have lazy loading and proxy generation enabled.
I am using the JSON.Net library from NuGet as my JSON serializer, as this seems to be a nice way around the circular reference issue. I am now unsure how to 'fix' the proxy generation issue. Currently the serializer throws: "The RelationshipManager object could not be serialized. This type of object cannot be serialized when the RelationshipManager belongs to an entity object that does not implement IEntityWithRelationships."
Can anyone help me with this? If I turn off proxy generation, I am going to have a hell of a time loading all of the MenuItem children, so I am keen to leave it on. I have read a fair amount, and there seems to be a variety of different answers, including projecting the entities into another object and serializing that, etc. Ideally there would be some way of configuring JSON.net to ignore the RelationshipManager object?
Update
Here is what I have used as a custom ContractResolver for the JSON.Net serializer. This seems to have sorted out my issue.
public class ContractResolver : DefaultContractResolver
{
    private static readonly IEnumerable<Type> Types = GetEntityTypes();

    private static IEnumerable<Type> GetEntityTypes()
    {
        var assembly = Assembly.GetAssembly(typeof(IEntity));
        var types = assembly.GetTypes()
            .Where(t => String.Equals(t.Namespace, "Namespace", StringComparison.Ordinal));
        return types;
    }

    protected override List<MemberInfo> GetSerializableMembers(Type objectType)
    {
        if (!AllowType(objectType))
            return new List<MemberInfo>();

        var members = base.GetSerializableMembers(objectType);
        members.RemoveAll(memberInfo => IsMemberEntityWrapper(memberInfo));
        return members;
    }

    private static bool AllowType(Type objectType)
    {
        return Types.Contains(objectType) || Types.Contains(objectType.BaseType);
    }

    private static bool IsMemberEntityWrapper(MemberInfo memberInfo)
    {
        return memberInfo.Name == "_entityWrapper";
    }
}
IEntity is an interface all my Code First entity objects implement.
I realise this question has an accepted answer, but I thought I would post my EF Code First solution for future viewers. I was able to get around the error message with the contract resolver below:
class ContractResolver : DefaultContractResolver
{
    protected override List<System.Reflection.MemberInfo> GetSerializableMembers(Type objectType)
    {
        if (objectType.Namespace.StartsWith("System.Data.Entity.Dynamic"))
        {
            return base.GetSerializableMembers(objectType.BaseType);
        }
        return base.GetSerializableMembers(objectType);
    }
}
This works because EF Code First proxy classes inherit from the POCO class that you actually want serialized, so if we can identify when we are looking at an EF-generated class (by checking the namespace), we can serialize using just the properties from the base class, and therefore serialize only the POCO properties that we were really after in the first place.
Well, you used a powerful serialization API which serializes references and all members as well, and now you complain that it serializes all members :)
I didn't test it, but I believe the following will bring you close to the solution.
JSON.NET is quite a powerful tool, and it should offer you the extensibility points to avoid this behavior, but you will have to code it yourself. You will need a custom contract resolver where you define which members should be serialized. Here is a similar example for NHibernate.
You can implement some logic which will take only the members present in the parent class of the dynamic proxy. I hope this will not break lazy loading. To check whether the current entity is a proxy, you can use this code to get all known proxy types:
IEnumerable<Type> types = ((IObjectContextAdapter)dbContext).ObjectContext.GetKnownProxyTypes();

What model structure should I use for tracking changes?

I have a data model that requires tracking changes. I could have as many as ~100,000 changes/updates to my model per month. My model involves tracking HOW a task is completed and can be broken down into 3 basic types.
I currently have my model like this but have divided the types of sandwiches into 3 separate controllers because each sandwich is made very differently:
public class Sandwich
{
    public int Id { get; set; }
    public int SandwichTypeId { get; set; } // This is an enum type
    // About a dozen other properties that define HOW the sandwich gets made
}
I could break it apart like this and match it more to my controllers:
public class PeanutButterAndJellySandwich
{
    public int Id { get; set; }
    // No enum sandwich type
    // About a dozen other properties that define HOW the sandwich gets made
}

public class HamSandwich
{
    public int Id { get; set; }
    // No enum sandwich type
    // About a dozen other properties that define HOW the sandwich gets made
}

// etc.
A two-part question:
Is there any advantage to breaking up the model?
If so, would those advantages be defeated because I would have to add separate tracking tables as well?
Thanks.
In EF I have done something like subclassing the Sandwich class and using the subclasses in the specific controllers; a sketch follows below.
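A sketch of what I mean, assuming EF Code First (which maps such a hierarchy as table-per-hierarchy by default, using a discriminator column, so no separate tables are forced on you):
public abstract class Sandwich
{
    public int Id { get; set; }
    // About a dozen shared properties that define HOW the sandwich gets made
}

public class PeanutButterAndJellySandwich : Sandwich
{
    // PB&J-specific properties
}

public class HamSandwich : Sandwich
{
    // Ham-specific properties
}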
On the other hand, I've handled things like this by, for example, creating just one more field:
public class Sandwich
{
    public int? CurrentVersion { get; set; }
    public int Id { get; set; }
    public int SandwichTypeId { get; set; } // This is an enum type
    // About a dozen other properties that define HOW the sandwich gets made
}
This way, a single sandwich can have a lot of previous versions, all of which would point to the current one. In my update routine, I created a duplicate (with the old version's CurrentVersion pointing to the original, now updated, version Id).
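A rough sketch of that update routine (MyDbContext and its Sandwiches set are assumed names; the duplication is the point):
// Archive the old state as a new row, then update the original in place
// so its Id stays stable.
public void UpdateSandwich(MyDbContext db, Sandwich updated)
{
    var original = db.Sandwiches.Find(updated.Id);

    // Snapshot the old values as a revision pointing at the current row.
    var revision = new Sandwich
    {
        SandwichTypeId = original.SandwichTypeId,
        // ...copy the other dozen properties...
        CurrentVersion = original.Id // marks this row as a past revision
    };
    db.Sandwiches.Add(revision);

    // Apply the new values to the original row.
    original.SandwichTypeId = updated.SandwichTypeId;
    // ...copy the other dozen properties...

    db.SaveChanges();
}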
This of course requires you to change other places where you list Sandwiches to look only for those which are not revisions.
If you need to reference the immediately previous or next version, you could add int? PreviousVersion and/or int? NextVersion columns to avoid searches in your database.

Metadatatypes with self-validation using validation application block

Is it possible to use the SelfValidation attribute with my validations located in a metadata type? I'm using Enterprise Library 5's Validation Application Block.
As I explained in my other answer, this isn't supported out of the box. However, it can be achieved by hooking into the framework using dependency injection and replacing the existing AttributeValidatorFactory implementation. I have written a post on my weblog on how to do this: Mixing Validation Application Block With DataAnnotation: What About SelfValidation?
I hope this helps.
This is currently not supported (out of the box) by VAB. Look, for instance, at this thread on the EntLib forum. I think the main reason this is not supported is that you can't simply place the [SelfValidation] method on the metadata type and expect it to work. The reason it won't work is that self-validation methods will typically validate instance members of the type, and the signature of the self-validation method does not contain the actual object to validate.
A simple workaround is to call into the metadata type from the entity. For instance:
[MetadataType(typeof(InvoiceMetaData))]
[HasSelfValidation]
public partial class Invoice
{
    public string Name { get; set; }
    public int Price { get; set; }

    [SelfValidation]
    public void CustomValidate(ValidationResults results)
    {
        // Call into the meta data class
        InvoiceMetaData.CustomValidate(this, results);
    }
}

public class InvoiceMetaData
{
    [StringLengthValidator(1, 10, Tag = "Name")]
    string Name { get; set; }

    [RangeValidator(0, RangeBoundaryType.Inclusive, 0,
        RangeBoundaryType.Ignore, Tag = "Price")]
    int Price { get; set; }

    public static void CustomValidate(Invoice instance,
        ValidationResults results)
    {
        results.AddResult(new ValidationResult("ErrorMessage1",
            instance, "", "", null));
    }
}
This of course isn't a very clean solution. VAB, however, is very extensible, and version 5.0 only got better. If you want, you can swap out the existing AttributeValidatorFactory and replace it with a version that is able to do this. It won't be easy, though.
Cheers
