I need to track changes to an entity, including its oneToMany relations. I've tried the Stof extension's Loggable behavior (StofDoctrineExtensionsBundle), but as far as I can tell it doesn't track changes inside a linked entity, only additions or deletions.
For example: my entity Member can have several Address entities, and my field adresses is versioned. The extension will tell me if an address has been removed or added, but it will not track a change inside an existing address.
I thought SimpleThings/EntityAudit might do that, but it doesn't, and it's not yet compatible with Symfony 3.
What would be the best way to implement such a system myself? With the Loggable extension, how could my entity Member be made aware of changes that happened to an Address?
I want to add a selection field to a model, which should be an array of references. If I add selection: types.array(types.reference(Todo)) to the model, I get some undesirable side effects: selection is saved/loaded in snapshots, and changes to selection are recorded in the undo/redo history when using the UndoManager middleware. If I put selection in volatile properties as just a plain array, I lose the reference-sync capabilities (i.e., if one of the selected elements is removed from the model, selection will not be updated automatically).
Is there an approach that gives the benefits of both? Is there a way to ignore a model field in patches/snapshots without moving it to volatile?
A good approach for models is to have only the fields that belong to the entity itself and that you need in snapshots to send to a server or elsewhere. Otherwise models become confusing and hard to manage.
Usually in such cases I put a property like that into a separate store or sub-store, bound to a particular page/view for example. So this is a structural issue more than anything else, in my opinion.
In the v3 API for building LUIS apps I notice an emphasis on machine-learned entities. When working with them I noticed something that concerns me, and I was hoping to get more insight into the matter.
The idea is that when using a machine-learned entity you can bind descriptors -- phrase lists, other entities, or list entities -- to it as a constraint on that machine-learned entity. Why not just aim to extract the list entity by itself? What is the purpose of wrapping it in a machine-learned object?
I ask because I have always had great success with lists. They are very controllable, although you need to watch for spelling mistakes and variations to ensure accuracy. However, with machine-learned entities I notice you have to be more careful about word order: if there is a variation, the machine-learned entity may not be picked up.
Training would fix this, but in reality, if I already get the intent I want and just need the entities from it, what does the machine-learned entity really provide? It seems you need to be more careful with it.
I say this with a suspicion: would the answer lie in the fact that a machine-learned entity improves intent detection, whereas a list entity only improves entity detection? If that is the best-fitting answer, I think I can see the solution to what I am looking for.
EDITED:
I haven't been keeping up with LUIS ever since I went on maternity leave, and lo and behold, it's moving from V2 to V3!
The following is from an email conversation with a writer on the LUIS documentation team.
LUIS is moving away from different types of entities toward a single ML entity to encapsulate a concept. The ML entity can have children which are ML entities themselves. An ML entity can have a feature directly connected to it, instead of acting as a global feature.
This feature can be a phrase list, or it can be another model such as a prebuilt entity, regex entity, or list entity.
So a year ago a customer might have built a composite entity and thrown features into the app. Now they should create an ML entity with children, and these children should have features.
Now (before the //Build conference) any non-phrase-list feature can be a constraint (required), so a child entity with a constrained regex entity won't fire until the regex matches.
At/after //Build, this concept has been reworked in the UI to be a required feature – same idea but different terminology.
...
This is about understanding a whole concept that has parts, so an address is a typical example. An address has street number, street name, street type (street/court/blvd), city, state/province, country, postal code.
Each of the subparts is a feature (strong indicator) that an address is in the utterance.
If you used a list entity on its own, not as a required feature of the address, yes, it would trigger, but that wouldn't help the address entity, which is what you are really trying to get.
If, however, you really just want to match a list, go ahead. But when the customer says the app isn't predicting as well as they expected, the team will come back to this concept of the ML entity parent and its parts and suggest changes to the entities.
MS recommends Pascal case for schema names, but they don't obey the rule themselves. Custom entities and primary fields are created by default with all-lowercase schema names, while custom fields are Pascal case by default. Moreover, the built-in statuscode and statecode fields on custom entities are all-lowercase.
Questions:
Are the schema names important down the road? There are quite a lot of external integrations coming for our CRM (C#, likely early-bound). For now I'm trying to keep things as clean as possible just to avoid potential future issues, but some colleagues think I'm over-worried and that it's not worth the time.
Do you know any good reason why MS doesn't obey their own rules in some cases?
I reject the Pascal-case advice. In my opinion, the schema name should be all lowercase. That way it matches the logical name, which prevents a lot of confusion and mistyped names in the future.
Since you've decided to use C# early-bound classes, you will be using crmsvcutil or a tool like the Early Bound Generator, which pulls the schema names as-is from the CRM metadata.
If a schema name changes (for example, after a drop and recreate with a different datatype), the next generated class file will pick it up and a build error will notify you.
Judging by the revisions, nothing is going to change in the near future, and MS isn't even worried about breaking its own rule.
One more thing: the next-generation Web API expects the schema name in certain places, such as navigation properties, whereas with late binding the system-converted lowercase name (the logical name) is used.
In Spring Data Repository interfaces, the following operation is defined:
public T save(T entity);
... and the documentation states that the application should continue working with the returned entity.
I know about the reasoning behind this decision, and it makes sense. I can also see that this works perfectly fine for simple models with independent entities. But given a more complex JPA model with lots of @OneToMany and @ManyToMany connections, the following question arises:
How is the application supposed to use the returned object when all the rest of the loaded model still references the old one that was passed into save(...)? Also, there might be collections in the application that still contain the old entity. The JVM does not allow you to globally "swap" the unsaved entity for the saved one.
So what is the correct usage pattern? Are there any best practices? So far I have only encountered toy examples that do not use @OneToMany or @ManyToMany and thus never run into this issue. I'm sure a lot of smart people have thought long and hard about this, but I can't see how to use it properly.
This is covered in section 3.2.7.1 of the JPA specification that describes how merge should work. In a nutshell, if the instance being saved is managed (existing), it is simply saved in-place. If not, it is copied to a managed instance (which may not necessarily be a different object since the spec does not mandate that a new instance must be created in this case) and all references from the instance being saved to other managed entities are also updated to refer to the managed instance. This of course requires that the relationships have been correctly defined from the entity being saved.
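To make the usage pattern concrete, here is a minimal sketch (the Customer entity, repository, and rename method are invented for illustration; it assumes a standard Spring Data JPA setup):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.repository.CrudRepository;

// Hypothetical entity; @OneToMany/@ManyToMany collections are omitted, but they
// must be mapped correctly for merge to update references as described above.
@Entity
class Customer {
    @Id @GeneratedValue private Long id;
    private String name;

    public void setName(String name) { this.name = name; }
}

interface CustomerRepository extends CrudRepository<Customer, Long> {}

class CustomerService {
    private final CustomerRepository repository;

    CustomerService(CustomerRepository repository) {
        this.repository = repository;
    }

    public Customer rename(Customer detached, String newName) {
        detached.setName(newName);
        // For a detached instance, save(...) delegates to EntityManager.merge(...):
        // the state is copied onto a managed instance, which is returned.
        Customer managed = repository.save(detached);
        // Keep working with the returned instance; further changes to
        // 'detached' are no longer tracked by the persistence context.
        return managed;
    }
}

The pattern, then, is to propagate the returned reference rather than holding on to the argument; anything that still points at the detached copy is outside the persistence context's reach.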
Indeed, the merge semantics described above do not cover the case of storing an entity instance in an unmanaged collection (such as a static collection). That is in any case not advisable, because a persisted entity should always be loaded through the persistence provider (the instance's state may well have changed in the persistent store in the meantime).
Having used JPA for many years without facing problems, I am confident that the behavior referenced above works well in all scenarios (provided the JPA provider implements it as intended). Try some of the cases that worry you and post separate questions if you run into problems.
How do you apply validation in an MVP/domain environment?
Let me clarify with an example:
Domain entity:
class Customer
{
    string Name;
    // etc.
}
MVP model:
class CustomerModel
{
    string Name;
    // etc.
}
I want to apply validation to my domain entities, but the MVP model is its own class, separate from the domain entity. Does that mean I have to duplicate the validation code so that it also works on the MVP model?
One solution I came up with is to drop the MVP model and use the domain entity as the MVP model, but I don't want to set data on the entities that hasn't been validated yet.
A second problem that arises is that if the entity has notify events, other parts of the application will be affected by faulty data.
A third issue with that approach: if the user edits some data and then cancels the edit, how do I revert to the old values? (The entity might not come from a DB, so reloading it isn't possible in all cases.)
Another solution is to make some sort of copy/clone of the entity in question and use the copy as the MVP model, but that can get troublesome if the entity has a large object graph.
Does anyone have tips on these problems?
Constraining something like the name of a person probably does not rightfully belong in the domain model, unless in the client's company there is actually a rule that they don't do business with customers whose names exceed 96 characters.
String length and the like are not concerns of the domain -- two different applications employing the same model could have different requirements, depending on the UI, persistence constraints, and use cases.
On the one hand, you want to be sure that your model of a person is complete and accurate, but consider the "real world" person you are modeling. There are no rules about length and no logical corollary to "oops, there was a problem trying to give this person a name." A person just has a name, so I'd argue that it is the responsibility of the presenter to validate what the user enters before populating the domain model, because the format of the data is a concern of the application more than of the domain.
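As a rough sketch of that split, using the Customer/CustomerModel pair from the question (written in Java; the accessors and the 96-character limit from the earlier example are hypothetical):

class CustomerPresenter {
    // A hypothetical application-level constraint, not a domain rule.
    private static final int MAX_NAME_LENGTH = 96;

    private final Customer customer;    // domain entity
    private final CustomerModel model;  // MVP model bound to the view

    CustomerPresenter(Customer customer, CustomerModel model) {
        this.customer = customer;
        this.model = model;
    }

    // Format/length checks happen here, before the domain model is touched.
    boolean applyNameEdit() {
        String input = model.getName();
        if (input == null || input.trim().isEmpty() || input.length() > MAX_NAME_LENGTH) {
            return false; // surface a validation message through the view instead
        }
        customer.setName(input); // only validated data reaches the entity
        return true;
    }
}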
Furthermore, as Udi Dahan explains in his article Employing the Domain Model Pattern, we use the domain model pattern to encapsulate rules that are subject to change. That a person should not have a null name is not a requirement that is likely ever to change.
I might consider using Debug.Assert() in the domain entity just for an added layer of protection during integration and/or manual testing, if I were really concerned about a null name sneaking in, but something like length, again, doesn't belong there.
Don't use your domain entities directly -- keep that presentation layer; you're going to need it. You laid out three very real problems with using entities directly (I think Udi Dahan's article touches on this as well).
Your domain model should not acquiesce to the needs of the application, and soon enough your UI is going to need an event or collection filter that you're just going to have to stick into that entity. Let the presentation layer serve as the adapter instead and each layer will be able to maintain its integrity.
Let me be clear that the domain model does not have to be devoid of validation, but the validation that it contains should be domain-specific. For example, when attempting to give someone a pay raise, there may be a requirement that no raise can be awarded within 6 months of the last one, so you'd need to validate the effective date of the raise. This is a business rule, is subject to change, and absolutely belongs in the domain model.
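A minimal sketch of such a domain rule (the entity and its fields are invented for illustration):

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hypothetical domain entity carrying a business rule that is subject to
// change, which is why it lives in the domain model rather than the presenter.
class Employee {
    private LocalDate lastRaiseDate;
    private long salaryInCents;

    // Domain rule: no raise may be awarded within 6 months of the previous one.
    void awardRaise(long amountInCents, LocalDate effectiveDate) {
        if (lastRaiseDate != null
                && ChronoUnit.MONTHS.between(lastRaiseDate, effectiveDate) < 6) {
            throw new IllegalStateException(
                    "A raise cannot be awarded within 6 months of the last one.");
        }
        salaryInCents += amountInCents;
        lastRaiseDate = effectiveDate;
    }
}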