What does "sourcing" in "Event Sourcing" stand for? - event-sourcing

What does "sourcing" stand for?
Is it that events are "sourced" from somewhere or that the state is "sourced" from events?
I can't seem to find any authoritative reference to this, aside from seemingly confusing uses of "source", like "can't source" and "source the event".

What does "sourcing" stand for?
I would answer this way: it stands for the fact that the source of truth (aka: the book of record) is a journal of events.
As far as I can tell, the earliest references to the phrase "event sourcing" come from Martin Fowler, who wrote in 2005:
But we can go further than an audit trail, into some very interesting territory. The enabler for this occurs when all changes to a system are caused by events - an approach that I call Event Sourcing. Another way of looking at this is that Event Sourcing happens when we can entirely derive the state of an application by processing the log of Domain Events.
Note that Fowler's 2005 understanding of domain events doesn't quite align with more recent interpretations - see, for instance, Greg Young writing five years later.
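To make that concrete, here is a minimal sketch of deriving state from a journal of events (the account events and fields below are hypothetical, not taken from Fowler's article):

```typescript
// Hypothetical domain events for a bank account.
type AccountEvent =
  | { type: "AccountOpened"; owner: string }
  | { type: "FundsDeposited"; amount: number }
  | { type: "FundsWithdrawn"; amount: number };

interface AccountState {
  owner: string;
  balance: number;
}

// The journal of events is the book of record; state is derived by replaying it.
function replay(events: AccountEvent[]): AccountState {
  return events.reduce<AccountState>(
    (state, event) => {
      switch (event.type) {
        case "AccountOpened":
          return { ...state, owner: event.owner };
        case "FundsDeposited":
          return { ...state, balance: state.balance + event.amount };
        case "FundsWithdrawn":
          return { ...state, balance: state.balance - event.amount };
        default:
          return state; // ignore event types this model doesn't know about
      }
    },
    { owner: "", balance: 0 }
  );
}

const journal: AccountEvent[] = [
  { type: "AccountOpened", owner: "Alice" },
  { type: "FundsDeposited", amount: 100 },
  { type: "FundsWithdrawn", amount: 30 },
];

console.log(replay(journal)); // { owner: "Alice", balance: 70 }
```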

Related

Amount of properties per command/event in event sourcing

I'm learning CQRS/event sourcing, and I recently listened to a talk where the speaker said you should pass as few parameters to an event as possible, in other words make events as tiny as possible. The main reasons given were that events are impossible to change later without breaking the event history, and that small events are easier to design correctly. But what if, for example, the UI requires filling in a form with 10 fields to create a new aggregate, and the same can happen when updating the aggregate? How should I handle such a case? And what if the business later decides to change something, but we have a huge event that updates 10 fields?
The decision is always context-specific and each case deserves its own review of using thin events vs fat events.
The motivation for using thin domain events is to include just enough information to perform the state transition.
As for fat events, your projections might require a piece of entity state so that they can avoid any logic in the projection itself (a best practice).
For integration, you'd prefer emitting fat events because you often don't know in advance who will consume your event. Still, the content of the event should convey the information related to the meaning of the event itself.
References:
Putting your events on a diet
Patterns for Decoupling in Distributed Systems: Fat Event
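To make the contrast concrete, here is a minimal sketch (the event names and fields are hypothetical): the thin event carries only what the aggregate needs for its state transition, while the fat event also carries denormalized state that a projection or an external consumer can use without querying anything back.

```typescript
// Thin event: just enough data for the aggregate's state transition.
interface CustomerRelocated {
  type: "CustomerRelocated";
  customerId: string;
  newAddressId: string;
}

// Fat event: also carries denormalized state so that projections and
// external consumers don't need an extra lookup.
interface CustomerRelocatedFat {
  type: "CustomerRelocated";
  customerId: string;
  newAddress: {
    street: string;
    city: string;
    postalCode: string;
    country: string;
  };
  previousCity: string; // convenient for a "customers per city" projection
}

// A read-model projection can consume the fat event without any extra logic
// or queries: it just applies the deltas the event already carries.
function projectCustomersPerCity(
  counts: Map<string, number>,
  e: CustomerRelocatedFat
): Map<string, number> {
  counts.set(e.previousCity, (counts.get(e.previousCity) ?? 0) - 1);
  counts.set(e.newAddress.city, (counts.get(e.newAddress.city) ?? 0) + 1);
  return counts;
}
```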
I recently listened to a talk where the speaker said you should pass as few parameters to an event as possible, in other words make events as tiny as possible.
I'm not convinced that holds up. If you are looking for good ideas about designing events, you should review Greg Young's e-book on versioning.
If you are event sourcing, then you are primarily concerned with ensuring that your stream of events allows you to recreate the state of your domain model. The events themselves should be representations of changes that a domain expert will recognize. If you find yourself trying to invent smaller events just to fit some artificial constraint like "no more than three properties per event" then you are going to end up with data that doesn't really match the way your domain experts think -- which is to say, technical debt.
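For example (hypothetical event names), one event that a domain expert would recognize usually beats several tiny events invented only to keep the property count down:

```typescript
// One event a domain expert would recognize: an order was placed.
interface OrderPlaced {
  type: "OrderPlaced";
  orderId: string;
  customerId: string;
  lines: Array<{ sku: string; quantity: number; unitPrice: number }>;
  shippingAddressId: string;
  placedAt: string; // ISO 8601 timestamp
}

// Artificially split alternative: three events for one business fact.
// Replaying these means stitching them back together, and none of them
// means much to a domain expert on its own.
interface OrderCreated {
  type: "OrderCreated";
  orderId: string;
  customerId: string;
}
interface OrderLinesSet {
  type: "OrderLinesSet";
  orderId: string;
  lines: Array<{ sku: string; quantity: number; unitPrice: number }>;
}
interface OrderShippingAddressSet {
  type: "OrderShippingAddressSet";
  orderId: string;
  shippingAddressId: string;
}
```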

Event sourcing the whole system is bad

I'm learning a proper microservice architecture using CQRS, MassTransit and different types of storage for the read side. One thing that often comes along with CQRS is event sourcing. I do understand it's not mandatory at all. However, I can't think of why using it on the whole system is really an anti pattern.
Having a store for all events as a single source of truth can help you build/rebuild a read store on the fly whenever you want.
You are not locked in to any vendor (except for the event store)
For me, the question is more: is it easier to not start with event sourcing (and still have separate data storage per microservice, e.g. Elasticsearch, MongoDB, etc.) and migrate/provision whenever it's needed, or, on the other hand, to start by event sourcing everything so that you don't have to deal with migration later on?
I can't think of why using it on the whole system is really an anti pattern.
I agree -- calling it an "anti pattern" is an overstatement.
The claim I'd make instead: using event sourcing on the whole system isn't cost effective today.
It could be tomorrow, as we get more practice with it, and the costs of designing these systems goes down and we learn to extract more benefit from them.
In the meantime - how valuable are the temporal queries that you get from event sourcing? In your core domain, where you get competitive advantage, they could be quite valuable. In places where you are just doing bookkeeping of information provided to you by the outside world? Not so much - you may be getting everything you need out of simpler solutions that only keep track of "now".
I recently published a blog post about this issue. It explains why event sourcing is a persistence strategy and shouldn't be used at global scale.
To summarize it: Event Sourcing forces you to emit an event for every change to your data. This can result in very fine-grained events. If you use Event Sourcing for inter-microservice communication, you expose those events to the outside world.
In the end you expose your persistence layer, comparable to exposing your (relational) database schema in a CRUD-based persistence strategy.
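A minimal sketch of the boundary being described (the types and the mapping are hypothetical): fine-grained events stay inside the service as its persistence format, and only a coarser integration contract is published to other services.

```typescript
// Fine-grained events: the service's private persistence format.
type InternalOrderEvent =
  | { type: "OrderLineAdded"; orderId: string; sku: string; quantity: number }
  | { type: "DiscountApplied"; orderId: string; percent: number }
  | { type: "OrderConfirmed"; orderId: string; total: number };

// Coarser integration event: the public contract other services consume.
interface OrderConfirmedIntegrationEvent {
  type: "OrderConfirmed";
  orderId: string;
  total: number;
}

// Publish only the public contract; the internal journal stays private,
// just as a relational schema stays behind a CRUD service's API.
function toIntegrationEvent(
  e: InternalOrderEvent
): OrderConfirmedIntegrationEvent | null {
  return e.type === "OrderConfirmed"
    ? { type: "OrderConfirmed", orderId: e.orderId, total: e.total }
    : null;
}
```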

Events and Commands difference and naming conventions

I have been having some difficulty differentiating the two recently. More specifically, I have browsed Stack Overflow, and there is a statement that events can be named in two different ways:
with "-ing" or with past-tense "-ed". This can be seen here: Events - naming convention and style
https://learn.microsoft.com/en-us/dotnet/standard/design-guidelines/names-of-type-members
At the same time, CQRS states that names need to be in the past tense, and following that guideline the events named above with the "-ing" form would be commands. This has me a bit confused. Do events mean different things depending on the architectural context and movement? Is there a unified view on what an event and a command are?
The CQRS material you've been reading likely only had past-tense event names because it didn't consider pre-events. A command commands something to happen, and as such is typically formed in the imperative ("click!", "fire!", "tickle!"). It makes no sense for a command to be a gerund ("clicking! clicking faster, you! or I fires you!"). As it precipitates an action, it will likely trigger one or more notifications (= events) that something of note is about to happen, and afterwards that something of note did happen.
-ing events (e.g. ("Clicking") happen before the event is handled, e.g. in case someone wants to stop it. Sometimes, they are called "before" events (e.g. "BeforeClick"), or "will" events ("WillClick").
-ed events (e.g. "Clicked") happen after the event is handled, e.g. in order to affect dependents. Sometimes, they are called "after" events (e.g. "AfterClick") or "did" events ("DidClick").
Which specific scheme you follow does not really matter, as long as you (and your team, and your potential partners) are consistent about it. Since CQRS (under that name) is largely a Microsoft thing, follow what Microsoft says. Should you code for Mac, the concepts are similar - but you'd do well to go with the Apple guidelines instead.
Is there a unified view on what an event and command is ?
Unified? No, probably not. But if you want an authoritative definition, Gregor Hohpe's Enterprise Integration Patterns is a good place to start.
Command Message
Event Message
Within the context of CQRS, you should consider Greg Young's opinion to be authoritative. He is quite clear that command messages should use imperative spellings, where events use spellings of changes that completed in the past.
Names of Commands and Events should be understood to be spelling conventions -- much in the same way that the spellings of URI, or variable names, are conventions. It doesn't matter at all for correctness, and the computer is for the most part not looking at the spelling (for example, we route messages based on the message name, not by looking at the verb tense).
Events describe a change to the state of a model; all events are ModelChanged. However, we prefer to use domain specific spellings for the type of the event, so that they can be more easily discriminated: MouseClicked, ConnectionClosed, FundsTransferred, and so on.
Use of the present progressive tense spelling for an event name is weird, in so far as the message is a description of the domain model at the point of a transaction, where the present tense semantically extends past that temporal point. More loosely, present progressive describes a current state, rather than a past change of state.
That said, finding a good past tense spelling for pre-events can be hard, and ultimately it is just a spelling convention; the work required to find an accurate spelling that is consistent with the past tense convention may not pay for itself compared to taking a natural but incorrect verb tense.
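To illustrate the convention with hypothetical message types: a command is an imperative request that the model may still reject, while an event is a past-tense statement of fact that can only be reacted to. As noted above, routing goes by the message name, not by parsing the verb tense.

```typescript
// Command: an imperative request; the domain model may still reject it.
interface TransferFunds {
  type: "TransferFunds";
  fromAccount: string;
  toAccount: string;
  amount: number;
}

// Event: a past-tense statement of fact; it cannot be rejected, only reacted to.
interface FundsTransferred {
  type: "FundsTransferred";
  fromAccount: string;
  toAccount: string;
  amount: number;
  transferredAt: string; // ISO 8601 timestamp
}

// Routing is by message name, not by verb tense.
function route(message: TransferFunds | FundsTransferred): void {
  switch (message.type) {
    case "TransferFunds":
      console.log("dispatch to the command handler");
      break;
    case "FundsTransferred":
      console.log("dispatch to event subscribers");
      break;
  }
}
```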

What is the best practice for text in a dialog box?

This is not so much a technical question but still part of the development cycle.
I'm having to word all of the dialog boxes in this program I am working on, and I was trying to get a good handle on the best practices for writing text that the average end user can comprehend.
I have three core principles I could think of:
Keep it short - yet long enough to explain thoroughly
Avoid personal remarks such as "keep in mind", "just so you know", etc
Call an apple an apple - If a concept is highly technical do not dumb it down with another word that doesn't fully encapsulate the idea.
Are these good principles to go by, and/or is there something better to add?
There are various platform specific guidelines, e.g.
Microsoft WUXI --- Dialog text
Apple
The things I would add:
consistency - keep style and tone consistent throughout the application.
consistency - use the same names for concepts and elements of your app
drop everything you can live without - explanations belong in online help; don't pack the dialog too tight, leave room.
Use simple words. Not all of your users are native English speakers.
Use present tense, active voice
Avoid exclamation marks
Avoid multiple exclamation marks
Yes, I believe those are great principles to go by. One more thing you may want to watch for: if something is highly technical but there is a way to get the same point across without words most people won't know, consider using the simpler wording; not everybody knows everything.

What are some good examples showing that "I am not the user"?

I'm a software developer who has a background in usability engineering. When I studied usability engineering in grad school, one of the professors had a mantra: "You are not the user". The idea was that we need to base UI design on actual user research rather than our own ideas as to how the UI should work.
Since then I've seen some good examples that seem to prove that I'm not the user.
User trying to use an e-mail template authoring tool, and gets stuck trying to enter the pipe (|) character. Problem turns out to be that the pipe on the keyboard has a space in the middle.
In a web app, user doesn't see content below the fold. Not unusual. We tell her to scroll down. She has no idea what we're talking about and is not familiar with the scroll thumb.
I'm listening in on a tech support call. Rep tells the user to close the browser. In the background I hear the Windows shutdown jingle.
What are some other good examples of this?
EDIT: To clarify, I'm looking for examples where developers make assumptions that turn out to be horribly false about what users will know, understand, etc.
I think one of the biggest examples is that expert users tend to play with an application.
They say, "Okay, I have this tool, what can I do with it?"
Your average user sees the ecosystem of an operating system, filesystem, or application as a big scary place where they are likely to get lost and never return.
For them, everything they want to do on a computer is task-based.
"How do I burn a DVD?"
"How do I upload a photo from my camera to this website."
"How do I send my mom a song?"
They want a starting point, a reproducible work flow, and they want to do that every time they have to perform the task. They don't care about streamlining the process or finding the best way to do it, they just want one reproducible way to do it.
In building web applications, I long ago learned to make the start page of my application something separate from the menus, with task-based links to the main things the application does, in a really big font. For the average user, this increased usability hugely.
So remember this: users don't want to "use your application", they want to get something specific done.
In my mind, the most visible example of "developers are not the user" is the common Confirmation Dialog.
In almost any document-based application, from the most complex (MS Word, Excel, Visual Studio) through the simplest (Notepad, Crimson Editor, UltraEdit), when you close the application with unsaved changes you get a dialog like this:
The text in the Untitled file has changed.
Do you want to save the changes?
[Yes] [No] [Cancel]
Assumption: Users will read the dialog
Reality: With an average reading speed of 2 words per second, this would take 9 seconds. Many users won't read the dialog at all.
Observation: Many developers read much much faster than typical users
Assumption: The available options are all equally likely.
Reality: Most (>99%) of the time users will want their changes saved.
Assumption: Users will consider the consequences before clicking a choice
Reality: The true impact of the choice will occur to users a split second after pressing the button.
Assumption: Users will care about the message being displayed.
Reality: Users are focussed on the next task they need to complete, not on the "care and feeding" of their computer.
Assumption: Users will understand that the dialog contains critical information they need to know.
Reality: Users see the dialog as a speedbump in their way and just want to get rid of it in the fastest way possible.
I definitely agree with the bolded comments in Daniel's response--most real users frequently have a goal they want to get to, and just want to reach that goal as easily and quickly as possible. Speaking from experience, this goes not only for computer novices or non-techie people but also for fairly tech-savvy users who just might not be well-versed in your particular domain or technology stack.
Too frequently I've seen customers faced with a rich set of technologies, tools, utilities, APIs, etc. but no obvious way to accomplish their high-level tasks. Sometimes this could be addressed simply with better documentation (think comprehensive walk-throughs), sometimes with some high-level wizards built on top of command-line scripts/tools, and sometimes only with a fundamental re-prioritization of the software project.
With that said... to throw another concrete example on the pile, there's the Windows start menu (excerpt from an article on The Old New Thing blog):
Back in the early days, the taskbar didn't have a Start button.
...
But one thing kept getting kicked up by usability tests: People booted up the computer and just sat there, unsure what to do next.
That's when we decided to label the System button "Start".
It says, "You dummy. Click here." And it sent our usability numbers through the roof, because all of a sudden, people knew what to click when they wanted to do something.
As mentioned by others here, we techie folks are used to playing around with an environment, clicking on everything that can be clicked on, poking around in all available menus, etc. Family members of mine who are afraid of their computers, however, are even more afraid that they'll click on something that will "erase" their data, so they'd prefer to be given clear directions on where to click.
Many years ago, in a CMS, I stupidly assumed that no one would ever try to create a directory with a leading space in the name... someone did, and it made many other parts of the system very, very sad.
On another note, trying to explain to my mother to click the Start button to turn the computer off is just a world of pain.
How about the apocryphal tech support call about the user with the broken "cup holder" (CD/ROM)?
Actually, one that bit me was cut/paste -- I always trim my text inputs now since some of my users cut/paste text from emails, etc. and end up selecting extra whitespace. My tests never considered that people would "type" in extra characters.
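For what it's worth, the fix is tiny; here is a sketch of the kind of normalization I mean (the helper name is hypothetical, not from any particular framework):

```typescript
// Normalize a pasted value: trim surrounding whitespace and tame the
// invisible characters that often ride along with copy/paste from email.
function normalizePastedText(value: string): string {
  return value
    .replace(/\u00a0/g, " ") // non-breaking spaces from rich-text sources
    .replace(/\s+/g, " ")    // collapse runs of whitespace and newlines
    .trim();
}

console.log(normalizePastedText("  alice@example.com \n")); // "alice@example.com"
```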
Today's GUIs do a pretty good job of hiding the underlying OS. But the idiosyncrasies still show through.
Why won't the Mac let me create a folder called "Photos: Christmas 08"?
Why do I have to "eject" a mounted disk image?
Can't I convert a JPEG to TIFF just by changing the file extension?
(The last one actually happened to me some years ago. It took forever to figure out why the TIFF wasn't loading correctly! It was at that moment that I understood why Apple used to use embedded file types (as metadata) and to this day I do not understand why they foolishly went back to file extensions. Oh, right; it's because Unix is a superior OS.)
I've seen this plenty of times; it seems to be something that always comes up. I seem to be the kind of person who can pick up on these kinds of assumptions (in some circumstances), but I've been blown away by what the user was doing many other times.
As I said, it's something I'm quite familiar with. Some of the software I've worked on is used by the general public (as opposed to specially trained people) so we had to be ready for this kind of thing. Yet I've seen it not be taken into account.
A good example is a web form that needs to be completed. We need this form completed, it's important to the process. The user is no good to us if they don't complete the form, but the more information we get out of them the better. Obviously these are two conflicting demands. If we just present the user with a screen of 150 fields (random large number), they'll run away scared.
These forms had been revised many times in order to improve things, but users weren't asked what they wanted. Decisions were made based on the assumptions or feelings of various people, but how close those feelings were to actual customers wasn't taken into account.
I'm also going to mention the corollary to Bevan's "The users will read the dialog" assumption. Operating off the "the users don't read anything" assumption makes much more sense. Yet people who argue that users don't read anything will often suggest putting in bits of long, dry explanatory text to help users who are confused by some random poor design decision (like using checkboxes for something that should be radio buttons because you can only select one).
Working any kind of tech support can be very informative on how users do (or do not) think.
Pretty much anything at the OS level in Linux is a good example, from the choice of names ("grep" obviously means "search" to the user!) to the choice of syntax ("rm *" is good for you!).
[I'm not hatin' on Linux, it's just chock full of Unix-legacy un-usability examples]
How about the desktop and wallpaper metaphors? It's getting better, but 5-10 years ago they were the bane of a lot of remote tech support calls.
There's also the backslash vs. slash issue, the myriad names for the various keyboard symbols, and the antiquated print screen button.
Modern operating systems are great because they all support multiple user profiles, so everyone that uses my application on the same workstation can have their own settings and user data. Only, a good portion of the support requests I get are asking how to have multiple data files under the same user account.
Back in my college days, I used to train people on how to use a computer and the internet. I'd go to their house, set up their internet service, show them email and everything. Well, there was this old couple (late 60s). I spent about three hours showing them how to use their computer, made sure they could connect to the internet and everything. I left feeling very happy.
That weekend I get a frantic call about them not being able to check their email. Now I'm in the middle of enjoying my weekend but decide to help them out, and walk through all the steps. 30 minutes later, I ask them if they have two phone lines... "Of course, we only have one." Needless to say, they had forgotten that they needed to connect to the internet first (yes, this was back in the day of modems).
I suppose I should have set up shortcuts like DUN -> Check Email Step 1, Eudora -> Check Email Step 2...
What users don't know, they will make up. They often work with an incorrect theory of how an application works.
Especially for data entry, users tend to type much faster than developers, which can cause a problem if the program is slow to react.
Story: Once upon a time, before the personal computer, there was timesharing. A timesharing company's customer rep told me that once, when he was giving a "how to" class to two or three nice older women, he told them how to stop a program that was running (in case it was started in error or taking too long). He had one of the students type ^K, and the timesharing terminal responded "Killed!". The lady nearly had a heart attack.
One problem that we have at our company is employees who don't trust the computer. If you computerize a function that they do on paper, they will continue to do it on paper, while entering the results in the computer.
