The FHIR schema defines very generic Resources that can sometimes represent several concepts in an implementation, and the meaning of the resource is difficult or impossible to determine from the object itself. For example, the Condition resource may represent a problem on a problem list, an admission diagnosis, a final diagnosis, a primary diagnosis, etc. The meaning of the Condition depends on knowing which of these it is. Is there a way to make this explicit in FHIR?
In some cases a Profile StructureDefinition could be used to distinguish between specializations of a Resource, but this doesn't work for things like final vs. admission diagnosis, since these contain the same data (i.e. there is no discriminator), so it is not a general solution.
Profiles never change the meaning of an instance, they simply identify constraints that an instance adheres to. If you stripped out all profile ids from an instance, you could put them back if you had access to all of the eligible profiles and compared them against the instance.
Admission diagnosis or final diagnosis isn't a type of diagnosis; rather, it's the relationship of a diagnosis to a particular encounter. A diagnosis that is the final diagnosis of one encounter might be the admission diagnosis for another. A Condition might be included on a problem list whether it's a complaint, symptom, finding, or diagnosis. The notion of primary vs. secondary is likewise a relationship with respect to an encounter/billing: the primary diagnosis for encounter A might be a secondary diagnosis for encounter B.
Look at Encounter.admittingDiagnosis, Encounter.dischargeDiagnosis and Claim.diagnosis.sequence to differentiate admission, discharge, and primary diagnoses.
(It would probably be good to include this guidance on the Condition resource itself, so feel free to submit a change proposal using the link at the bottom of the spec :>)
Related
In a classic example of two aggregate roots:
class Post
{
    string AuthorId;
    string Title;
    string Content;
    bool AllowComments;
    ...
}
class Comment
{
    string AuthorId;
    string Content;
    DateTime Date;
    ...
}
When creating a new comment, how do we ensure that comments are added only to posts that have Post.AllowComments = true?
Bear in mind that when a user starts writing a comment, Post.AllowComments could very well be true, but in the meantime (while the comment is being written) the Post author might change it to false.
Or even at the time of submission: when we check Post.AreCommentsAllowed() it could return true, but by the time we call CommentRepository.Save(comment) it could be false.
Of course, one Post might have many Comments, so it might not be practical to have a single aggregate where Post holds a collection of Comments.
Is there any other solution to this?
PS.
I could do a db transaction within which I'd check it, but I'm looking for a DDD purist solution.
I'm looking for a DDD purist solution.
Basic idea first: if our Comments logic needs information from our Post aggregate, then what we normally do is pass a copy of that information as an argument.
So our application code would, in this case, get a local copy of AllowComments, presumably by retrieving the handle to the Post aggregate root and invoking some query in its interface.
Within the comments aggregate, you would then be able to use that information as necessary (for instance, as an argument to some branching logic).
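For instance, a minimal sketch in C# (the repository variables and the Comment.Create factory are hypothetical names, not part of the original design):

class Comment
{
    public string AuthorId { get; private set; }
    public string Content { get; private set; }
    public DateTime Date { get; private set; }

    private Comment(string authorId, string content)
    {
        AuthorId = authorId;
        Content = content;
        Date = DateTime.UtcNow;
    }

    // The copied flag arrives as a plain argument; Comment never reaches into Post.
    public static Comment Create(string authorId, string content, bool allowComments)
    {
        if (!allowComments)
            throw new InvalidOperationException("Comments are disabled for this post.");
        return new Comment(authorId, content);
    }
}

// Application code: retrieve the handle, query it, pass the copy along.
Post post = postRepository.Get(postId);
Comment comment = Comment.Create(userId, text, post.AllowComments);
commentRepository.Save(comment);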
Race conditions are hard.
A microsecond difference in timing shouldn’t make a difference to core business behaviors. -- Udi Dahan
If data here must be consistent with data there, then the answer is that we have to lock both when we are making our change.
In an application where information is stored locally, that's pretty straightforward, at least on paper: you just need to acquire locks on both objects at the same time, so that the data doesn't change out from under you. There's a bit of care required to ensure that you don't get deadlocked (see the dining philosophers problem).
In a distributed system, this can get really nasty.
The usual answer in "modern purist DDD" is that you either relax the consistency requirement (the data that you are reading is allowed to change while you are working) and mitigate the inconsistencies elsewhere (see "Memories, Guesses, and Apologies" by Pat Helland), OR you change your aggregate design so that all of the information is enclosed within the same aggregate (here, that would mean making the comment entities part of the Post aggregate).
Also: creation patterns are weird; you expect that the entity you intend to create doesn't yet exist in the repository (though off the happy path, maybe it does), so the business logic doesn't fit as smoothly into the usual "get the handle from the repository" pattern.
So the conditional logic needs to sneak somewhere else -- maybe into the Post aggregate? Maybe you just leave it in the application code? Ultimately, somebody is going to have to tell the application code whether anything is being saved in the repository.
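A sketch of the "sneak it into the Post aggregate" option (CreateComment is a hypothetical factory method, and the Comment constructor call assumes a matching hypothetical signature):

class Post
{
    public string AuthorId { get; private set; }
    public string Title { get; private set; }
    public string Content { get; private set; }
    public bool AllowComments { get; private set; }

    // The creation rule lives next to the state it depends on, and is checked
    // against this post's current in-memory state.
    public Comment CreateComment(string authorId, string content)
    {
        if (!AllowComments)
            throw new InvalidOperationException("This post does not allow comments.");
        return new Comment(authorId, content);
    }
}

Note that this narrows the race window but doesn't close it: unless the comment is persisted inside the Post aggregate, or the check and the save share a transaction, AllowComments can still flip between CreateComment and CommentRepository.Save.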
As far as I can tell, there isn't a broad consensus on how to handle conditional create logic in DDD, just lots of different compromises that might be "good enough" in their local context.
Related
E.g. let's consider that I have the following classes:
Item
ItemProperty which would include objects such as Colour and Size. There's a relation-property of the Item class which lists all of the ItemProperty objects applicable to this Item (i.e. for one item you might need to specify the Colour and for another you might want to specify the Size).
ItemPropertyOption would include objects such as Red, Green (for Colour) and Big, Small (for Size).
Then an Item object would relate to an ItemProperty, whereas an ItemChoice object would relate to an ItemPropertyOption (and the ItemProperty which that ItemPropertyOption refers to could be inferred).
The reason for this is so I could then make use of queries much more effectively, i.e. "give me all item-choices which are Red". It would also allow me to use the Parse Dashboard to quickly add elements to the site, as I could easily specify more ItemPropertys and ItemPropertyOptions rather than having to add them in the codebase.
This is just a small example and there's many more instances where I'd like to use classes so that 'options' for various drop-downs in forms are in the database and can easily be added and edited by me, rather than hard-coded.
1) I’ll probably be doing this in a similar way for 5+ more similar kinds of class-structures
2) there could be hundreds of nested properties that I want to access via ‘inverse querying’
So, I can think of two potential causes of inefficiency and wanted to know whether they're well-founded:
Is having lots of classes inefficient?
Is back-querying against nested classes inefficient?
The other option I can think of, if 'class-bloat' really is a problem, is to make fields on parent classes that, instead of being nested across other classes (representing further properties, as above), just represent them as a nested JSON property directly.
The job of designing is to render, in object descriptions, truths about the world that are relevant to the system's requirements. In the world of the OP's "items", it's a fact that items have color, and it's a relevant fact because users care about an item's color. You'd only call a system inefficient if it consumes computing resources that it doesn't need to consume.
So, for something like a configurator, the fact that we have items, and that those items have properties, and those properties have an enumerable set of possible values sounds like a perfectly rational design.
Is it inefficient or "bloated"? The only place I'd raise doubt is in the explicit assertion that items have properties. Of course they do, but that's natively true of JavaScript objects and Parse entities.
In other words, you might be able to get along with just item and several flavors of propertyOptions: e.g. Item has an attribute called "colorProperty" that is a pointer to an instance of "ColorProperty" (whose instances have a name property like 'red', 'green', etc. and maybe describe other pertinent facts, like a more precise description in RGB form).
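Sketched as plain classes (purely illustrative; in Parse these would be classes whose instances point at one another):

class ColorProperty
{
    public string Name { get; set; }   // "red", "green", ...
    public string Rgb { get; set; }    // optionally, a more precise description
}

class Item
{
    public string Name { get; set; }
    public ColorProperty ColorProperty { get; set; }   // pointer to an option instance
}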
There's nothing wrong with lots of classes if they represent relevant truth. Do that first. You might discover empirically that your design is too resource consumptive (I doubt you will in this case), at which point we'd start looking for cheats to be somehow skinnier. But do it the right way first, cheat later only if you must.
Is having lots of classes inefficient?
It's certainly inefficient for poor humans who have to remember what all those classes do and how they're related to each other. It takes time to write all those classes in the first place, and every line that you write is a line that has to be maintained.
Beyond that, there's certainly some cost for each class in any OOP language, and creating more classes than you really need will mean that you're paying more than you need to for the work that you're doing, which is pretty much the definition of inefficient.
I’ll probably be doing this in a similar way for 5+ more similar kinds of class-structures
Maybe you could spend some time thinking about the similarity between these cases and come up with a single set of more flexible classes that you can use in all those cases. Writing general code is harder than writing very specific code, but if you do a good job you'll recoup the extra effort many times over through reuse.
Related
I'm building a DSL that uses names for procedures (essentially) that must be unique.
It's unclear what sort of error term to use to represent a second definition.
existence_error sorta kinda fits, but I'm uncomfortable with it. It seems to imply missing definition, not multiple definition.
permission_error(modify, procedure, Name/Arity) seems promising, but seems to imply "some people could do this, but not you". Without further enlightenment, I'll use this.
syntax_error sorta kinda fits, but is defined as being for read_term only.
Should I define my own here? The spec says 'use these when you can'.
In the old times there was no SWISH or Pengines, where a Prolog processor is used by multiple users, and with batch processing there probably wasn't much awareness that resources could be blocked by other users. So the explanation of the error term permission_error/3 is most likely as SICStus Prolog describes it here:
"A permission error occurs when an operation is attempted that is
among the kinds of operation that the system is in general capable of
performing, and among the kinds that you are in general allowed to
request, but this particular time it isn't permitted."
http://sicstus.sics.se/sicstus/docs/4.0.4/html/sicstus/ref_002dere_002derr_002dper.html
But I agree: from the name of the error term, we would expect its application range to cover only violations of access or modification rules, not semantic restrictions over syntactic structure such as in a DSL.
But you are probably not the only one who has these problems. If your Prolog system has a messaging subsystem where you can easily associate error terms with user-friendly text, I don't see any reason not to introduce new error terms.
You could adopt the following error terms, already suggested by SICStus Prolog but not found in the ISO core standard:
"A consistency error occurs when two otherwise valid values or
operations have been specified that are inconsistent with each other."
http://sicstus.sics.se/sicstus/docs/4.0.4/html/sicstus/ref_002dere_002derr_002dcns.html
"A context error occurs when a goal or declaration appears in the wrong
place. There may or may not be anything wrong with the goal or
declaration as such; the point is that it is out of place."
http://sicstus.sics.se/sicstus/docs/4.0.4/html/sicstus/ref_002dere_002derr_002dcon.html
SWI-Prolog in particular has such a messaging subsystem, and SWI-Prolog has long since said good-bye to interoperability with other Prolog systems. So the only danger if you use SWI-Prolog's messaging is a certain lock-in, which might not bother you.
Related
The MSDN guidelines state that class names should be Pascal-cased, with no special prefix such as "C".
They also state that the names of class members, such as properties and fields, should be Pascal-cased.
So a naming ambiguity may arise when naming a generic object.
For example, consider a class named "Polynom". An object instantiated from this class should also be named "Polynom": Polynom = new Polynom.
Is that right?
I think a more common guideline (that I have seen Microsoft themselves follow) is to name variables, including instances, camel-cased (lower first, upper all other words: variableName). So in your case, it would be polynom = new Polynom. Of course, I wouldn't actually name a variable polynom unless it had a very obvious use and only for a local space. Otherwise a variable name should describe what it does, not what type it is.
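For example (the role-describing name below is purely illustrative):

// camelCase for locals; prefer a name that describes the role, not the type
Polynom polynom = new Polynom();           // legal, but only echoes the type
Polynom interpolatedTrend = new Polynom(); // hypothetical, but self-describing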
All that said, the most important part of any naming convention is not what casing goes where but that you are consistent with it. Find something that works for you and stick to it!
[Quick edit: re-reading your question again, I see that you're mainly concerned about properties. In that case, yes, it is very common to Pascal-case them, so Polynom would be reasonable. But since this is a property that would be exposed to the user (otherwise why is it a property?), please don't name it Polynom! Do something more descriptive; we have IntelliSense if we want to know the type.]
You may often see
PolyNom polyNom = new PolyNom();
Although most of the time this is not the most readable code: is it just any old polyNom, or is it for a specific purpose? Steve McConnell notes in Code Complete that the optimal variable name length for debugging (reading code) is 10-16 characters, with 8-20 characters being about the same (pg. 262, second ed.). This gives you a lot of room to describe more accurately exactly what your variable is.
Related
Update: Please read this question in the context of design principles, elegance, expression of intent, and especially the "signals" sent to other programmers by design choices.
I have two "views" of a set of objects. One is a dictionary/map indexing the objects by a string value. The other is a dictionary/map indexing the objects by an ordinal (ordering integer). There is no "master" collection of the objects by themselves that can serve as the authoritative source for the number of objects, but the two dictionaries should always both contain references to all the objects.
When a new item is added to the set a reference is added to both dictionaries, and then some processing needs to be done which is affected by the new total number of objects.
What should I use as the authoritative source to reference for the current size of the set of objects? It seems that all my options are flawed in one dimension or another. I can just consistently reference one of the dictionaries, but that would codify an implication of that dictionary's superiority over the other. I could add a 3rd collection, a simple list of the objects to serve as the authoritative list, but that increases redundancy. Storing a running count seems simplest, but also increases redundancy and is more brittle than referencing a collection's self-tracked count on the fly.
Is there another option that will allow me to avoid choosing the lesser evil, or will I have to accept a compromise on elegance?
I would create a class that has (at least) two collections.
A version of the collection that is sorted by string
A version of the collection that is sorted by ordinal
(Optional) A master collection
The class would handle the nitty-gritty management:
Syncing the contents of the collections
Standard collection actions (e.g. letting users get the size, add items, or retrieve items)
Let users get by string or ordinal
That way you can use the same collection wherever you need either behavior, but still abstract away the "indexing" behavior you are going for.
The separate class gives you a single interface with which to explain your intent regarding how this class is to be used.
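A minimal sketch of such a wrapper (IndexedSet and its member names are hypothetical):

using System.Collections.Generic;

public class IndexedSet<T>
{
    private readonly Dictionary<string, T> byName = new Dictionary<string, T>();
    private readonly Dictionary<int, T> byOrdinal = new Dictionary<int, T>();

    // All additions go through one method, so the two views can never drift apart.
    public void Add(string name, int ordinal, T item)
    {
        byName.Add(name, item);
        byOrdinal.Add(ordinal, item);
    }

    public T GetByName(string name) { return byName[name]; }
    public T GetByOrdinal(int ordinal) { return byOrdinal[ordinal]; }

    // One authoritative count; which dictionary backs it is a hidden detail.
    public int Count { get { return byName.Count; } }
}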
I'd suggest encapsulation: create a class that hides the "management" details (such as the current count) and use it to expose immutable "views" of the two collections.
Clients will ask the "manglement" object for an appropriate reference to one of the collections.
Clients adding a "term" (for lack of a better word) to the collections will do so through the "manglement" object.
This way your assumptions and implementation choices are "hidden" from clients of the service and you can document that the choice of collection for size/count was arbitrary. Future maintainers can change how the count is managed without breaking clients.
BTW, yes, I meant "manglement" - my favorite malapropism for management (in any context!)
If both dictionaries contain references to every object, the count should be the same for both of them, correct? If so, just pick one and be consistent.
I don't think it is a big deal at all. Just reference the sets in the same order each time you need to get access to them.
If you really are concerned about it, you could encapsulate the collections with a wrapper that exposes the public interfaces, like:
Add(item)
Count()
This way it will always be consistent and atomic - or at least you could implement it that way.
But, I don't think it is a big deal.