How do protocol buffers handle versioning?

How do protocol buffers handle type versioning?
For example, what happens when I need to change a type definition over time, such as adding or removing fields?

Google designed protobuf to be pretty forgiving with versioning:
unexpected data is either stored as "extensions" (making it round-trip safe) or silently dropped, depending on the implementation
new fields are generally added as "optional", meaning that old data can be loaded successfully
However:
do not renumber fields - that would break existing data
you should not normally change the way any given field is stored (i.e. from a fixed-width 32-bit int to a "varint")
Generally speaking, though, it will just work, and you don't need to worry much about versioning.
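For example (a minimal protobuf-net sketch; the Person class and both fields are hypothetical, not from the original answer): adding a new field under a fresh tag number is backward compatible, because old payloads simply carry no data for that tag:
[ProtoContract]
public class Person
{
    [ProtoMember(1)]             // existing field - its number must never change
    public string Name { get; set; }

    [ProtoMember(2)]             // added in a later version, under a new, unused number;
    public int Age { get; set; } // old payloads lack tag 2, so Age just defaults to 0
}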

I know this is an old question, but I ran into this recently. The way I got around it is using facades, and run-time decisions about what to serialize. This way I can deprecate/upgrade a field to a new type, with old and new messages both handled gracefully.
I am using Marc Gravell's protobuf-net (v2.3.5) and C#, but the facade approach would work for any language and Google's original protobuf implementation.
My old class had a Timestamp of type DateTime, which I wanted to change to include the "Kind" (a .NET idiosyncrasy). Adding this effectively meant it serialized to 9 bytes instead of 8, which would be a breaking serialization change!
[ProtoMember(3, Name = "Timestamp")]
public DateTime Timestamp { get; set; }
A fundamental rule of protobuf is to NEVER change the proto IDs! I wanted to read old serialized binaries, which meant "3" was here to stay.
So,
I renamed the old property and made it private (yes, it can still be deserialized through reflection magic), but my API no longer exposes it as usable!
[ProtoMember(3, Name = "Timestamp-v1")]
private DateTime __Timestamp_v1 = DateTime.MinValue;
I created a new Timestamp property, with a new proto id, and included the DateTime.Kind
[ProtoMember(30002, Name = "Timestamp", DataFormat = ProtoBuf.DataFormat.WellKnown)]
public DateTime Timestamp { get; set; }
I added a "AfterDeserialization" method to update our new time, in the case of old messages
[ProtoAfterDeserialization]
private void AfterDeserialization()
{
    // V2 Timestamp includes a "Kind" - we will stop using __Timestamp_v1 - so keep it up to date
    if (__Timestamp_v1 != DateTime.MinValue)
    {
        // Assume the old timestamp was in UTC - as it was...
        // This handles old messages - we update our V2 timestamp from the V1 value
        Timestamp = new DateTime(__Timestamp_v1.Ticks, DateTimeKind.Utc);
    }
}
Now, I have the old and new messages serializing/deserializing correctly, and my Timestamp now includes DateTime.Kind! Nothing broken.
However, this does mean that BOTH fields would be written into all new messages going forward. So the final touch is to use a run-time serialization decision to exclude the old Timestamp (note this won't work if the field was marked with protobuf's "required" attribute!)
bool ShouldSerialize__Timestamp_v1()
{
    return __Timestamp_v1 != DateTime.MinValue;
}
And that's it. I have a nice unit test which exercises it end-to-end if anyone wants it...
I know my method relies on .NET magic, but I reckon the concept could be translated to other languages...

Related

Get notified when TaggedValue of an element changes in Enterprise Architect

I am new to creating Add-Ins for Enterprise Architect and I have this problem:
I have a diagram with elements which have TaggedValues. I want to get notified when the value of a TaggedValue changes and see the new value.
I saw that there is an event EA_OnElementTagEdit available, but I can't seem to get it triggered. I also saw that the tagged value has to be of type AddinBroadcast, but I can't seem to make it work. What am I missing?
I will put below a sample of my code:
//creating tagged value
EA.TaggedValue ob3 = (EA.TaggedValue)NewElement.TaggedValues.AddNew("Responsible", "val");
ob3.Value = EEPROMBlocks.ElementAt(index).Responsible;
ob3.SetAttribute("Type", "AddinBroadcast");
ob3.Update();
//event method
public override void EA_OnElementTagEdit(EA.Repository Repository, long ObjectID, ref string TagName, ref string TagValue, ref string TagNotes)
You are not missing anything. This is simply not possible. The only way around it is the OnContext... events, where you temporarily store the state of one element and check whether a tag has changed when the context changes. I would not recommend that, since it involves a lot of superfluous DB accesses.
Send a feature request (if you are an optimistic guy). Alternatively, think about other ways to get around this.
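For what it's worth, a rough sketch of that (not recommended) workaround, assuming the Add-In overrides the EA_OnContextItemChanged broadcast event; the caching fields and diff logic below are hypothetical, and the Dictionary needs System.Collections.Generic:
private string _lastGuid;
private Dictionary<string, string> _lastTags = new Dictionary<string, string>();

public override void EA_OnContextItemChanged(EA.Repository Repository, string GUID, EA.ObjectType ot)
{
    if (ot != EA.ObjectType.otElement) return;
    if (_lastGuid != null)
    {
        // diff the previously selected element's tags against the cached values
        EA.Element previous = Repository.GetElementByGuid(_lastGuid);
        foreach (EA.TaggedValue tag in previous.TaggedValues)
        {
            string oldValue;
            if (_lastTags.TryGetValue(tag.Name, out oldValue) && oldValue != tag.Value)
            {
                // tag.Name changed from oldValue to tag.Value - handle it here
            }
        }
    }
    // cache the tags of the newly selected element for the next comparison
    _lastGuid = GUID;
    _lastTags.Clear();
    EA.Element current = Repository.GetElementByGuid(GUID);
    foreach (EA.TaggedValue tag in current.TaggedValues)
        _lastTags[tag.Name] = tag.Value;
}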

How to ensure data integrity with a domain that changes

I'm working on a project where I applied DDD principles.
In order to ensure domain integrity I validate each domain model (entities or value objects) on creation.
Example of the user entity:
class User {
  constructor(opts) {
    this.email = opts.email;
    this.password = opts.password;
    this.validate();
  }

  validate() {
    if (typeof this.email !== 'string') {
      throw new Error('email is invalid');
    }
    if (typeof this.password !== 'string') {
      throw new Error('password is invalid');
    }
  }
}
The validate method is a naive implementation of validation (I know I should verify the email using a regex, and handle the error in a more effective way).
This model is then persisted using the userRepository module.
Now, imagine I want to add a new property username to my user model, my validate method will look like this:
validate() {
  if (typeof this.email !== 'string') {
    throw new Error('email is invalid');
  }
  if (typeof this.password !== 'string') {
    throw new Error('password is invalid');
  }
  if (typeof this.username !== 'string') {
    throw new Error('username is invalid');
  }
}
The problem is that old user models stored will not have the username property which is now required. Therefore when I'll fetch data from database and try to construct model it'll throw an error.
To fix this problem I see multiple solutions (but none seems good to me):
create an anti-corruption layer in the user repository (create a default username if it is not defined)
relax the invariant in my domain model (username is not required)
use cron services that update database entities based on the domain change (again, setting a default username)
The problem is that old user models stored will not have the username property which is now required.
Yup, that's a problem.
Here's how I think about it -- the persisted copy of your domain model is a message, sent by an instance of your domain model running in the past to an instance of your domain model running in the future.
If you want those messages to be compatible, then you need to accept certain constraints in the design of your message schema.
One of those constraints is that you don't add new required fields to an existing message type.
Adding optional fields is fine, because systems that don't care can ignore them, and systems that do care can provide a default value when the field is missing.
But if you need to add a new required field, then you create a new message.
The event sourcing community has to worry about this sort of thing a lot (events are messages); Greg Young wrote Versioning in an Event Sourced System, which has good lessons on the versioning of messages.
To fix this problem I see multiple solutions (but none seems good to me)
I agree, these are all kind of lousy - in the sense that they are all introducing a mechanism for deriving a "default" user name where none exists. That being the case, the field is effectively optional; so why claim that it is required?
In a situation where the field isn't required, but you want to stop accepting new data that doesn't include it, you probably want to put new validation on the data-input code path. That is to say, you can create a new API with messages that require the field, validate those messages, and then use the domain model with the optional field to store and fetch the data.
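In code, that split might look something like this (a hypothetical sketch, not from the original answer): the input path requires username, while the domain model keeps it optional so that old persisted users can still be rehydrated:
// input path: new registrations must supply a username
function validateRegistrationRequest(body) {
  if (typeof body.email !== 'string') throw new Error('email is required');
  if (typeof body.password !== 'string') throw new Error('password is required');
  if (typeof body.username !== 'string') throw new Error('username is required');
}

// domain model: username stays optional, so pre-username records still load
class User {
  constructor(opts) {
    this.email = opts.email;
    this.password = opts.password;
    this.username = opts.username; // may be undefined for old records
  }
}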
So adding a new required field is an anti-pattern in DDD
Adding new required fields is an anti-pattern in messaging; DDD has little to do with it.
You shouldn't be expecting to be able to add required fields to existing schema in a backwards compatible way. Instead, you extend the message schema by introducing a new message in which the field is required.
I thought applying DDD principles helps to handle business-logic complexity and also helps to design evolving software and evolving domain models
It does, but it isn't magic. If your new models aren't backwards compatible with the old models, then you are going to have to manage that change in some way:
You might declare bankruptcy, and simply forget all previous history.
You might migrate your existing data to the new data model.
You might maintain the two different data models in parallel.
In other words, backwards compatibility is a long term concern that you should be thinking about as you design your solution.

ASP MVC3 keeping variable value through page refresh

Yo! As the topic says, I need to keep a variable's value through a page refresh. The thing is, it's a session key. The other thing is, it's generated automatically.
What I need is an HTML <select> which won't lose its data on refresh. Actually there are two <select>s which are filled programmatically, and you can pass data between them in real time. Then, if I press save and the page fails to validate, these <select>s return to their original state. I already have that fixed by keeping the data in the session: if the session has a certain key, the <select>s are filled with the correct data.
Why would I need an automatically generated key? Multi-tab working. If a user tried to add two or more new records to the database at the same time (which is extreme, but possible), the data needs to be kept under different keys so the app can find the desired entry.
I could also do client-side validation, but... nope, just nope, too much work.
As for code, anything useful:
public ActionResult MethodUsedAfterPageLoad()
{
    ...
    Guid stronyGuid = Guid.NewGuid();
    ViewData["strony"] = stronyGuid.ToString();
    ...
}
This way every refresh creates a new Guid, but the Guid is used as the session key!
If I do it the following way:
public class ControllerClass
{
    private Guid stronyGuid;
    ...
}
the field is reset on every request, which is bad. And using the static keyword is a bad idea, since a static field would be shared by all users.
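For what it's worth, one common way to keep the key stable across refreshes (a sketch only, with hypothetical action names, not the poster's code) is to generate the Guid on the first GET, render it into a hidden input, and read it back on POST, so a failed-validation round trip reuses the same key:
public ActionResult Edit()
{
    // first load: create the per-tab key once
    ViewData["strony"] = Guid.NewGuid().ToString();
    return View();
}

[HttpPost]
public ActionResult Edit(FormCollection form)
{
    // the view renders the key into a hidden input, so a refresh after
    // failed validation posts the same key back instead of creating a new one
    string stronyKey = form["strony"];
    ViewData["strony"] = stronyKey;
    // ... use stronyKey as the Session key for the <select> data ...
    return View();
}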

Using Oracle 10g CLOB with Grails 2.0.1

I'm working on a project using Oracle 10g and Grails v2.0.1.
I'm trying to use a CLOB data type for a text input field in my domain class, and it doesn't seem to be working. My first attempt was based on what I read here about GORM, where it says to use type: 'text', like this example:
class Address {
    String number
    String postCode

    static mapping = {
        postCode type: 'text'
    }
}
Grails mapped that to a LONG data type in my DB, which is not desirable.
My second attempt was to try type: 'clob'. That WAS effective in getting my DB data type to be CLOB, but it resulted in a class cast error because the property itself was defined as a String, i.e. String postCode. (Note that I've never seen type: 'clob' in the documentation, but I could deduce from the dialect class that clob might be a valid input there.)
I subsequently tried defining the property as a java.sql.Clob, i.e. Clob postCode, and that didn't work at all. No error messages, but nothing was getting persisted to the DB either.
I took a final step of keeping the Clob approach, but using a transient String property whose getters/setters attempt to map the transient String value onto the persistent Clob field. But I cannot figure out how to get my String value into the Clob. Grails doesn't throw an error, but the println() after my attempted assignment never prints. I've tried using myClob.setString(1, theString) to make the assignment, but with no success.
So to make a long story short, I can't seem to use a Clob in my scenario, and I'm wondering if anyone else has seen this and been able to overcome it. If so, can you tell me what I might be doing wrong?
OR... is there a way to override the datatype Grails chooses such that I could force it to map postCode type: 'text' to a CLOB? I'm not proficient with Hibernate, so I'm not sure how to go about that if there's a way.
Side note: prior to our upgrade from Grails 1.3.7 to 2.0.1, I'm pretty sure the type: 'text' did, in fact, map to a CLOB in Oracle. So this might be new to 2.0.1.
I think I found an answer tucked into the documentation on Custom Hibernate Types:
In that case, override the column's SQL type using the sqlType attribute
This appears to be working. It looks like I'm able to use that to force my DB type to be CLOB while keeping the Java type a String. In other words, type seems to choose both a DB type and a Java type for handling the field, whereas sqlType gives a little more granularity by specifying just the DB type to use.
So the sample Domain class above should look like this in my case:
class Address {
    String number
    String postCode

    static mapping = {
        postCode sqlType: 'clob'
    }
}
I gleaned this from another StackOverflow question on the topic (the question itself clued me in, whereas the accepted answer misled me!):
How do I get Grails to map my String field to a Clob?
I spent a day trying to figure this all out, and it was incredibly frustrating. So maybe my notes on the topic here will help someone else avoid that experience!
And while I'm keeping notes here... this post proved somewhat useful in terms of troubleshooting how to get more specific in my mappings:
http://omaha-seattle.blogspot.com/2010/02/grails-hibernate-custom-data-type.html
Interesting code from that is reproduced here:
// Config.groovy (maps a custom SixDecimal type)
grails.gorm.default.mapping = {
    'user-type'(type: SixDecimalUserType, class: SixDecimal)
}
Probably too late, but I have found a really simple solution to this problem:
class MyDomain {
    String myField
    ...
    static mapping = {
        myField type: 'materialized_clob'
    }
}
Nothing else is required; writes and reads work like a charm! :D
Hope it helps.
Ivan.
Ivan's answer works for me. MaterializedClob is the Hibernate JDBC type for a Java String.

Spring JSON merge

I'm updating a record from a form over AJAX. I have a JSON object that maps to my entity, and my controller method is:
@RequestMapping(value = "/vendors", method = RequestMethod.POST)
public ResponseEntity<String> saveVendorsJson(@RequestParam String vendor) {
    Vendor v = Vendor.fromJsonToVendor(vendor);
    if (v.merge() == null) {
        v.persist();
    }
    return new ResponseEntity<String>(HttpStatus.OK);
}
I expected from the documentation that v.merge() would return null if it didn't find an existing record (matched by the object's 'id' field) to merge with, and in that case I want to persist it as a new Vendor object.
What's happening is that, despite my JSON having an 'id' value that matches an existing record, I'm ALWAYS inserting a new record with my updated goods from the browser.
I'm aware I'm having the POST method pull double-duty here, which isn't strictly RESTful. In theory, this is simpler for me (though of course that's turning out not to be the case).
I believe this is a Hibernate thing. Hibernate will not "merge" if it doesn't know the object already exists. What I have done in the past is to do a lookup first, then persist. If you try to just merge something coming in off the wire, you could get a primary-key collision or something similar. I believe Hibernate has some sort of internal "dirty" flag to indicate whether you are creating a new object or editing an existing one.
There also used to be a way in raw Hibernate to do a soft lookup - basically telling Hibernate "look, I have this object, I don't want to do a SELECT blah-blah-blah, I just want to update some fields". It would load the object into the cache and allow you to do the update without doing the SELECT first. There is also saveOrUpdate() (in Hibernate, exposed through Spring's HibernateTemplate), but that actually does the SELECT first.
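For example, a rough sketch of that lookup-then-decide approach in the controller (hypothetical code; it assumes a Roo-style finder such as Vendor.findVendor(Long id) exists alongside merge() and persist()):
Vendor v = Vendor.fromJsonToVendor(vendor);
if (v.getId() != null && Vendor.findVendor(v.getId()) != null) {
    v.merge();   // a row with this id exists: update it
} else {
    v.persist(); // no row with this id: insert a new record
}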
