I'm currently designing a new enterprise system. The system's purpose is to track, display, and notify employees of customers' interactions (i.e. events) with the company. Using an event sourcing pattern to keep a ledger of all the customer interactions/events being collected seems like a very good fit, since all of our additional domain objects are derived from the stream of events. However, I came across an article saying that a whole system based on event sourcing is an anti-pattern. Why would this be?
https://www.infoq.com/news/2016/04/event-sourcing-anti-pattern
The article indeed summarises Greg's talk "A Decade of DDD, CQRS, Event Sourcing" at DDD Europe 2016.
I personally dislike the title of this summary since this is definitely not the point of Greg's talk. Basically, as usual, it depends.
When Greg talks about the system, he means the whole thing. This thing, in DDD terms, has a context map, with multiple bounded contexts in place. Usually, on this context map you can identify subdomains, where one or more can additionally be identified as core domain(s).
Your core domain is where advanced techniques are a good fit, whether these are more traditional DDD tactical patterns like aggregates, or "fancier" stuff like Event Sourcing. The implementation indeed needs to be based on the context's needs.
From what you describe, you have a good fit for Event Sourcing. But you might think about other parts of your system, for example customer/contact management and employee management. These details should come from somewhere. Maybe these are CRUD candidates? So if your core domain in this case is to track interactions between employees and customers (some sort of CRM), you can decide to build that part using Event Sourcing and other parts of your system using less advanced techniques.
Remember to put all parts on the context map anyway, including external systems; then you will see what the word system means in the article and the talk.
The article cites a talk by Greg Young. The relevant section is viewable here.
Young explains that CRUD hides "all kinds of crazy use cases", and gives correcting typos as an example.
He also points out that analysis can be more expensive in an event-sourced system.
In general, having immutable events as the source of truth for a given part of a system, separated from read models, carries costs and should not be adopted blindly.
Young suggests that "something more like event-driven" would be a top-level architecture rather than CQRS/event sourcing.
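To make the costs and the shape of the pattern concrete, here is a minimal sketch of event sourcing for the interaction-tracking domain described in the question: an append-only ledger of events, with a read model derived from the stream. All names (event types, fields) are illustrative, not a prescribed design.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative event type for the interaction-tracking domain.
@dataclass(frozen=True)
class InteractionRecorded:
    customer_id: str
    employee_id: str
    kind: str  # e.g. "call", "email"

class EventStore:
    """Append-only ledger: the events are the source of truth."""
    def __init__(self):
        self._events = []
        self._subscribers: list[Callable] = []

    def append(self, event):
        self._events.append(event)          # never mutated or deleted
        for handler in self._subscribers:   # notify projections / read models
            handler(event)

    def subscribe(self, handler):
        self._subscribers.append(handler)
        for event in self._events:          # replay history for late subscribers
            handler(event)

class InteractionsPerCustomer:
    """A read model (projection) derived entirely from the event stream."""
    def __init__(self, store: EventStore):
        self.counts: dict[str, int] = {}
        store.subscribe(self._apply)

    def _apply(self, event):
        if isinstance(event, InteractionRecorded):
            self.counts[event.customer_id] = self.counts.get(event.customer_id, 0) + 1

store = EventStore()
view = InteractionsPerCustomer(store)
store.append(InteractionRecorded("cust-1", "emp-7", "call"))
store.append(InteractionRecorded("cust-1", "emp-9", "email"))
print(view.counts)  # {'cust-1': 2}
```

Even this toy version shows the cost Young mentions: every question you want to answer ("how many interactions per customer?") requires writing and maintaining a projection, whereas a CRUD table would answer it with one query.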
Related
I wish to document code for a fairly complex algorithm whose implementation is intertwined across several methods and classes. A sequence diagram cannot really describe the detail of each method well so I am looking at an activity diagram; however this doesn't typically seem to represent classes and methods, only the logic.
Is there a common or even proper way to show which methods the logic belongs to? I do not need to follow strict UML, the purpose is simply to make it clear what's happening visually.
Activity diagrams and partitions
Activity diagrams are supposed to model activities without necessarily relating them to classes/objects. They are however very suitable for modeling complex algorithms, as they allow you to show:
the control flow -- flowchart diagrams have proven effective for modeling algorithms; activity diagrams are much more precise in their semantics and can do this as well.
object flows that show what objects are passed through
there are object actions that are specifically meant for dealing with objects
Activity diagrams support partitions, i.e. visual grouping of activities according to some criterion. A popular use is to split actions using subsystems as the grouping criterion. Breaking down by classes seems overkill, but nothing forbids such groupings if it helps, and if you're able to use them consistently.
Interaction overview diagrams
Interaction Overview Diagrams are a specialization of Activity Diagrams that represent Interactions.
Interaction overview diagrams are a kind of combination of activity and sequence diagrams. The general idea is to use some activity modeling features for showing the big picture of the flows, between interaction nodes, i.e. mini sequence diagrams embedded in the larger diagram, to show which objects are involved and what messages they exchange.
There is an inspiring example here. But be careful: the linked website is based on a former version of UML and the text is not fully up-to-date. The most accurate source on these diagrams is section 17.10 of the UML 2.5.1 specs.
Additional thoughts
Instead of trying to show everything in one diagram, you may prefer the beauty of simplicity: use an easy-to-understand, simpler overview diagram, and uncover the complexity in additional diagrams that focus on details. This works with an overview activity diagram complemented by more detailed activity or sequence diagrams. But it also works the other way round: show exchanges between key objects in a sequence diagram, and provide more detailed activity diagrams to describe what happens inside one of the invoked operations.
Disclaimer: While I provide some hints that you can use to model algorithms in relation to their classes, I have at the same time to warn you that visual programming, i.e. very, very detailed modeling, may lead to very complex diagrams that are difficult to read, and even more difficult to maintain. They lose the benefit of communicating the big picture. I'm not alone in this criticism: see Grady Booch, who knows what he is talking about since he is one of the co-inventors of UML, Martin Fowler, and many others.
I am using the SIU^S12 segment and I need to indicate the financial entity (IN1). But IN1 is not allowed in segment SIU^S12. Has anyone ever experienced this?
You're right that financial information is not included in the SIU^S12 message, nor is it anywhere in the HL7v2 scheduling domain. I can't really attest to the "why" behind this, but I can share my experience in the US domain.
From a very high level, in the United States, scheduling HL7v2 interfaces are used almost exclusively before the patient arrives, and ADT (HL7v2 Chapter 3) is used almost exclusively once the patient arrives. This is not ideal, and sometimes bears extra licensing costs in terms of getting two HL7v2 interfaces.
From a design perspective, it makes sense to have a separation of concerns: SIU^S12 is chiefly concerned with scheduling resources, even treating patients as resources. SIU^S12 can have multiple patients in its schema, whereas ADT^A01 must have exactly one. While it would be possible to attach GT1 and IN1/2/3 to each potential patient in SIU, it's easier to wrap your head around when only one patient is in play.
From a workflow perspective, insurance/payment information is typically verified with the patient in person once they arrive, so the majority of use cases won't need, or more importantly trust, insurance information were it to be sent by a scheduling system.
Basically, after hours of researching I still don't get what the Unifying Logic Layer within the Semantic Web Stack Model is, and whose problem it is to take care of it.
I think this depends on what your conceptualisation of the semantic web is. Suppose the ultimate expression of the semantic web is to make heterogeneous information sources available via web-like publishing mechanisms to allow programs - agents - to consume them in order to satisfy some high-level user goal in an autonomous fashion. This is close to Berners-Lee et al's original conceptualisation of the purpose of the semantic web. In this case, the agents need to know that the information they get from RDF triple stores, SPARQL end-points, rule bases, etc, is reliable, accurate and trustworthy. The semantic web stack postulates that a necessary step to getting to that end-point is to have a logic, or collection of logics, that the agent can use when reasoning about the knowledge it has acquired. It's rather a strong AI view, or well towards that end of the spectrum.
However, there's an alternative conceptualisation (and, in fact, there are probably many) in which the top layers of the semantic web stack, including unifying logic, are not needed, because that's not what we're asking agents to do. In this view, the semantic web is a way of publishing disaggregated, meaningful information for consumption by programs but not autonomously. It's the developers and/or the users who choose, for example, what information to treat as trustworthy. This is the linked data perspective, and it follows that the current stack of standards and technologies is perfectly adequate for building useful applications. Indeed, some argue that even well-established standards like OWL are not necessary for building linked-data applications, though personally I find it essential.
As to whose responsibility it is, if you take the former view it's something the software agent community is already working on, and if you take the latter view it doesn't matter whether something ever gets standardised because we can proceed to build useful functionality without it.
I'm searching for sources and further information on a particular concept in user experience design. It's not a particularly complicated concept: when designing user interfaces, you should make them intuitive and simple for new users, but also provide a way for users to become more efficient as they become more familiar with the application.
An example could be including a prominent button for a common action for new users, but also providing a keyboard shortcut / mnemonic for expert users. However, that's just an example; another would be providing full functionality through a GUI, but allowing expert users to script the same actions. The point is that the expert path is more difficult to learn, but it makes users more efficient.
I'm pretty sure there's a name for that which I can't recall, and I'm having trouble searching for sources and references on it.
Name of the concept of designing an interface to allow expert users to become more efficient?
Accelerators?
Flexibility and efficiency of use:

Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
(source: Ten Usability Heuristics by Jakob Nielsen)
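The accelerator idea can be sketched as one action with two entry points: a visible button or menu item for the novice, and a keyboard shortcut for the expert, both dispatching to the same command. The names below are illustrative and not tied to any particular GUI toolkit.

```python
class CommandRegistry:
    """One action, two entry points: visible UI and keyboard accelerator."""
    def __init__(self):
        self._commands = {}   # command name -> callable
        self._shortcuts = {}  # key chord -> command name

    def register(self, name, handler, shortcut=None):
        self._commands[name] = handler
        if shortcut:
            self._shortcuts[shortcut] = name

    def run(self, name):
        # Invoked when the user clicks the visible button/menu entry.
        return self._commands[name]()

    def key_pressed(self, chord):
        # Invoked by the keyboard handler; unknown chords are ignored.
        name = self._shortcuts.get(chord)
        return self.run(name) if name else None

registry = CommandRegistry()
registry.register("save", lambda: "saved", shortcut="Ctrl+S")

print(registry.run("save"))            # saved (novice: clicks the Save button)
print(registry.key_pressed("Ctrl+S"))  # saved (expert: uses the accelerator)
```

The design choice matters: because both paths route through the same command, the accelerator can never drift out of sync with the button's behavior.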
Well, reading only your question "Name of the concept of designing an interface to allow expert users to become more efficient?" I'm inclined to point you toward The Humane Interface: New Directions for Designing Interactive Systems by Jef Raskin, in which there is the concept of habituation:
2-3-1 Formation of Habits

When you perform a task repeatedly, it tends to become easier to do. Juggling, table tennis, and playing piano are everyday examples in my life; they all seemed impossible when I first attempted them. Walking is a more widely practiced example. With repetition, or practice, your competence becomes habitual, and you can do the task without having to think about it. ...

... The ideal humane interface would reduce the interface component of a user's work to benign habituation. Many of the problems that make products difficult and unpleasant to use are caused by human-machine design that fails to take into account the helpful and injurious properties of habit formation. One notable example is the tendency to provide many ways of accomplishing the same task. Having multiple options can shift your locus of attention from the task to the choice of method...
But this is contrary to what you describe in your question, as evidenced by the last two sentences. In fact, that book also has a sub-chapter dedicated to dispelling the myth of the beginner-expert dichotomy:
3-6 Myth of the Beginner-Expert Dichotomy

... This dichotomy is invalid. As a user of a complex system, you are neither a beginner nor an expert, and you cannot be placed on a single continuum between these two poles. You independently know or do not know each feature or each related set of features that work similarly to one another. You may know how to use many commands and features of a software package; you may even work with the package professionally, and people may seek your advice on using it. Yet you may not know how to use or even know about the existence of certain other commands or even whole categories of commands in that same package. ...
So perhaps this is not such a good term/concept for what you are looking for.
Update: were you looking for the term Adaptive User Interfaces, perhaps? Well, I think that, as usually understood and implemented, it is not such a great idea (for example, disappearing menu items in Microsoft products). But my impression is that researchers use the term for something quite different.
Update: but Adaptive User Interfaces does not cover scripting.
The answer is in your question: Efficiency. It's a fundamental component of usability that Jakob Nielsen long ago defined as "Once users have learned the design, how quickly can they perform tasks." A UI with expert-supporting elements like accelerators, context menus, and double-click-for-defaults is an efficient UI.
It is also correct to simply say that making things fast for experienced users is part of usability, just as usability also includes making it easy for users to accomplish basic tasks on the first encounter, making it satisfying, and tolerating errors.
I was reading Code Complete (2nd Edition), and came across a quote in the margin on page 87 by Bertrand Meyer.
Ask not first what the system does; ask WHAT it does it to!
What exactly is the point Mr. Meyer is trying to get across here? I have some rough ideas, but I would like to make sure I really understand.
... So this is the second fallacy of teleology - to attribute goal-directed behavior to things that are not goal-directed, perhaps without even thinking of the things as alive and spirit-inhabited, but only thinking, X happens in order to Y. "In order to" is mentalistic language, even though it doesn't seem to name a blatantly mental property like "fearful" or "thinks it can fly".

— Eliezer Yudkowsky, artificial intelligence theorist concerned with self-improving AIs with stable goal systems
Bertrand Meyer's homily suggests that sound reasoning about systems is grounded in knowing what concrete entities are altered by the system; the purpose of the alterations is an emergent property.
I believe the point here is not on what the system does, but on the data it operates on and what those operations are.
This provides two major thinking shifts:
You think of the data and concepts first
You think of operations on that data
With those two "baselines" you will be better prepared to organize a system to achieve your goals, so that operations on data are well understood and make sense.

In effect, he is laying the groundwork for writing the "contracts" on the code you write.
A Google search picked up Art Gittleman's Computing With C# and the .NET Framework:
Bertrand Meyer gives an example of a payroll program, which produces paychecks from timecards. Management may later want to extend this program to produce statistics or tax information. The payroll function itself may need to be changed to produce weekly checks instead of biweekly checks, for example. The procedures used to implement the original payroll program would need to be changed to make any of these modifications. Meyer notes that any of these payroll programs will manipulate the same sort of data: employee records, company regulations, and so forth.

Focusing on the more stable aspect of such systems, Meyer states a principle: "Ask not first what the system does: Ask WHAT it does it to!"; and a definition: "Object-oriented design is the method which leads to software architectures based on the objects every system or subsystem manipulates (rather than "the" function it is meant to ensure)."
We today take UML's class diagram and other OOAD approaches for granted, but they were something "discovered" along the way.
Also see Object-Oriented Design.
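Meyer's payroll example can be sketched in a few lines: the data the system manipulates (employee records) is the stable part, while the functions over it come and go. All fields and rates below are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    """The stable part: the 'what it does it to'."""
    name: str
    hourly_rate: float
    hours_worked: float

# Functions over the same data can be added or changed independently,
# without touching the record structure:
def paycheck(rec: EmployeeRecord) -> float:
    return rec.hourly_rate * rec.hours_worked

def tax_withheld(rec: EmployeeRecord, rate: float = 0.2) -> float:
    # A later addition (the "tax information" extension), same data.
    return paycheck(rec) * rate

rec = EmployeeRecord("Ada", 50.0, 40.0)
print(paycheck(rec))      # 2000.0
print(tax_withheld(rec))  # 400.0
```

Switching from biweekly to weekly checks, or adding statistics, changes only the functions; `EmployeeRecord` survives untouched, which is exactly why Meyer says to ask about the data first.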
My opinion is that the quote is meant as a method to find good abstractions in your software. The text next to this quote deals with finding real-world objects to design your classes.
A simple example would be something like this:
You are making software for a bank. Because your software is working with bank accounts, it should have a class for an account. Then you start thinking what properties accounts have and the interactions you can have with accounts.
Of course, this quote makes more sense if the objects you are trying to model aren't as clear as this case.
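The bank example above can be sketched directly: start from the thing the system manipulates (an account), then ask what its properties and interactions are. The details below are illustrative; the assertions stand in for the kind of "contracts" Meyer's approach leads to.

```python
class Account:
    """What the system does things to: an account with a balance."""
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self.balance = balance

    # The interactions you can have with an account:
    def deposit(self, amount: float) -> None:
        assert amount > 0, "deposit must be positive"  # contract-style precondition
        self.balance += amount

    def withdraw(self, amount: float) -> None:
        assert 0 < amount <= self.balance, "insufficient funds"
        self.balance -= amount

acct = Account("alice")
acct.deposit(100.0)
acct.withdraw(30.0)
print(acct.balance)  # 70.0
```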
Fred Brooks stated it this way:
"Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious."
Domain-Driven Design... Understand the problem the software is designed to solve. What "domain" entities (data abstractions) does the system manipulate? And what does it do to those domain entities?