Should all MassTransit messages be correlated in general?

I'm trying to figure out when to use message correlation. It seems to me like it would be best to make a coding standard that all messages should be correlated unless you have some reason not to.
Is there any reason why this would be a bad idea?

You really should have some way to correlate messages, for a number of reasons. Implementing CorrelatedBy<T> isn't strictly a requirement; use whatever makes sense for your system. It could be as simple as a TransactionId or something similar.
However, for tracking, research, idempotence, and sagas, you really do want some way to correlate messages. Like I said, use whatever makes sense in your world.
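As a concrete illustration of "whatever makes sense," here is a minimal sketch in Go (MassTransit itself is a .NET library, and every name below is hypothetical) of a correlation identifier being carried from one message to the next:

    package main

    import (
        "fmt"

        "github.com/google/uuid"
    )

    // OrderSubmitted is a hypothetical message type. Its CorrelationId ties it
    // to every other message produced by the same logical transaction.
    type OrderSubmitted struct {
        CorrelationId uuid.UUID
        OrderId       string
    }

    // PaymentRequested is a hypothetical follow-up message; it copies the
    // incoming CorrelationId so the whole flow can be traced as one unit.
    type PaymentRequested struct {
        CorrelationId uuid.UUID
        Amount        int64
    }

    func main() {
        // A new conversation starts with a fresh correlation id...
        submitted := OrderSubmitted{CorrelationId: uuid.New(), OrderId: "42"}

        // ...and every message caused by it carries the same id forward.
        payment := PaymentRequested{CorrelationId: submitted.CorrelationId, Amount: 995}

        fmt.Println(submitted.CorrelationId == payment.CorrelationId) // true
    }

Whether the field is called CorrelationId, TransactionId, or ConversationId matters far less than the discipline of copying it onto every message the flow produces.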

Related

ElasticSearch as EventStore

I'd like to ask your opinion on the pros and cons of using ElasticSearch as an event store.
I'd also like to hear from anyone who has experience using ElasticSearch as an event store: what were the results, how reliable was it, and were there any issues?
eta: Reminded of this post by a no-explanation downvoter... This "Kafka is not for Event Sourcing" post is pretty useful when thinking through this sort of question.
ElasticSearch is simply not designed to be an authoritative persistent store for actual app state.
Neither is Redis.
Neither is Kafka.
All three may potentially be useful in the context of an app which does employ an event store.
May I suggest reading a book like NoSQL Distilled to get an idea of the selection criteria in this space, so you'll know how to select something appropriate.
Also, for a question like this to be more answerable (in general, questions like this are considered subjective and get closed), it's important to supply context: what sort of data you're going to maintain, what sort of access patterns, what sort of scale, and/or any other constraints. A laundry list of contextless pros and cons is not going to help much.

write only stream

I'm using the joliver/EventStore library and trying to find a way to get a stream without reading any events from it.
The reason is that I just want to write some events to a specific stream without loading all 10k messages from it.
The way you're expected to use the store is that you always do a GetById first. Even if you new up an Aggregate and Save it, you'll see in the CommonDomain EventStoreRepository that it will first correlate it with the existing data.
The key reason why a read is needed first is that the infrastructure needs to work out how many events have gone before to compute the new commit sequence number.
Regarding the example threshold you cite as the reason to optimize this away: if you're really going to have that level of events, you'll already be in snapshotting territory, as you'll need an appropriately efficient way of doing things other than a blind write anyway.
Even if you're not intending to lean on snapshotting, half the benefit of using EventStore is that the facility is built in for when you need it.
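To make the sequence-number point concrete, here is a toy in-memory sketch in Go (this is not the joliver/EventStore API, which is .NET; all names are invented) of why an append has to know what came before it:

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    var ErrConcurrencyConflict = errors.New("stream was modified by another writer")

    // Stream is a toy in-memory event stream; a real store does something
    // conceptually similar against durable storage.
    type Stream struct {
        mu     sync.Mutex
        events []string
    }

    // Append requires the caller to state which version it last saw. That
    // number normally comes from the up-front read -- which is why a read-free,
    // write-only stream is awkward: without it the store cannot compute the
    // next commit sequence number or detect a concurrent writer.
    func (s *Stream) Append(expectedVersion int, newEvents ...string) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        if expectedVersion != len(s.events) {
            return ErrConcurrencyConflict
        }
        s.events = append(s.events, newEvents...)
        return nil
    }

    func main() {
        var s Stream
        fmt.Println(s.Append(0, "OrderPlaced"))  // <nil>
        fmt.Println(s.Append(0, "OrderShipped")) // conflict: stream is now at version 1
    }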

Is there a preferred way to design signal or event APIs in Go?

I am designing a package where I want to provide an API based on the observer pattern: that is, there are points where I'd like to emit a signal that will trigger zero or more interested parties. Those interested parties shouldn't necessarily need to know about each other.
I know I can implement an API like this from scratch (e.g. using a collection of channels or callback functions), but was wondering if there was a preferred way of structuring such APIs.
In many of the languages or frameworks I've played with, there have been standard ways to build these APIs so that they behave the way users expect: e.g. the g_signal_* functions for glib-based applications, events and addEventListener() for JavaScript DOM apps, or multicast delegates for .NET.
Is there anything similar for Go? If not, is there some other way of structuring this type of API that is more idiomatic in Go?
I would say that a goroutine receiving from a channel is, to a certain extent, an analogue of an observer. An idiomatic way to expose events in Go would thus, IMHO, be to return channels from a package (function). Another observation is that callbacks are not used very often in Go programs, partly because of the existence of the powerful select statement.
As a final note: some people (me too) consider GoF patterns as Go antipatterns.
Go gives you a lot of tools for designing a signal API.
First you have to decide a few things:
Do you want a push or a pull model? E.g., does the publisher push events to the subscribers, or do the subscribers pull events from the publisher?
If you want a push system, then having the subscribers give the publisher a channel to send messages on works really well. If you want a pull model, then a message box guarded with a mutex would work. Beyond that, without knowing more about your requirements it's hard to give much more detail.
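A minimal sketch of the push model (my own names, not any particular library's): each subscriber hands the publisher a channel, and the publisher sends every event down all of them:

    package main

    import "fmt"

    // Publisher pushes events to whatever channels subscribers hand it.
    type Publisher struct {
        subs []chan string
    }

    // Subscribe registers a channel supplied by the subscriber.
    func (p *Publisher) Subscribe(ch chan string) {
        p.subs = append(p.subs, ch)
    }

    // Publish pushes the event to every registered channel.
    func (p *Publisher) Publish(event string) {
        for _, ch := range p.subs {
            ch <- event
        }
    }

    func main() {
        var p Publisher
        ch := make(chan string, 1) // buffered so Publish won't block here
        p.Subscribe(ch)
        p.Publish("something happened")
        fmt.Println(<-ch)
    }

An unbuffered or full channel makes Publish block on slow subscribers, so real code usually buffers, drops, or selects with a default case.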
I needed an "observer pattern" type thing in a couple of projects. Here's a reusable example from a recent project.
It's got a corresponding test that shows how to use it.
The basic theory is that an event emitter calls Submit with some piece of data whenever something interesting occurs. Any client that wants to be aware of that event Registers a channel it reads the event data off of. The channel you registered can be used in a select loop, or you can read it directly (or buffer and poll it).
When you're done, you Unregister.
It's not perfect for all cases (e.g. I may want a force-unregister type of event for slow observers), but it works where I use it.
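A condensed, self-contained Go version of that shape (my own sketch of the API described above, not the project's actual code):

    package main

    import (
        "fmt"
        "sync"
    )

    // Event carries whatever data the emitter wants to publish.
    type Event struct{ Data string }

    // Emitter fans events out to registered observer channels.
    type Emitter struct {
        mu        sync.Mutex
        observers map[chan Event]struct{}
    }

    func NewEmitter() *Emitter {
        return &Emitter{observers: make(map[chan Event]struct{})}
    }

    // Register hands back a buffered channel the caller reads events from.
    func (e *Emitter) Register() chan Event {
        ch := make(chan Event, 16)
        e.mu.Lock()
        e.observers[ch] = struct{}{}
        e.mu.Unlock()
        return ch
    }

    // Unregister removes the channel; the caller stops receiving events.
    func (e *Emitter) Unregister(ch chan Event) {
        e.mu.Lock()
        delete(e.observers, ch)
        e.mu.Unlock()
    }

    // Submit delivers the event to every registered observer, dropping it for
    // any observer whose buffer is full (the slow-observer problem above).
    func (e *Emitter) Submit(ev Event) {
        e.mu.Lock()
        defer e.mu.Unlock()
        for ch := range e.observers {
            select {
            case ch <- ev:
            default: // drop rather than block on a slow observer
            }
        }
    }

    func main() {
        em := NewEmitter()
        ch := em.Register()
        em.Submit(Event{Data: "tick"})
        fmt.Println((<-ch).Data)
        em.Unregister(ch)
    }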
I would say there is no standard way of doing this, because channels are built into the language. There is no channel library with standard ways of doing things with channels; there are simply channels. Having channels as built-in, first-class objects frees you from having to work with standard techniques and lets you solve problems in the simplest, most natural way.
There is a basic Go version of Node's EventEmitter at https://github.com/chuckpreslar/emission
See http://itjumpstart.wordpress.com/2014/11/21/eventemitter-in-go/

What is the name of this anti-pattern?

Surely some of you have dealt with this one. It tends to happen when programmers get a bit too taken with OO and forget about performance and the fact that there's a database.
For example, let's say we have an Email table, and emails need to be sent by this program. At start-up, it looks for anything that needs to be sent as follows:
Emails = find_every_damn_email_in_the_database();
FOR Email in Emails
    IF !Email.IsSent() THEN Email.Send()
This is good from a don't-repeat-yourself perspective, but sometimes repeating yourself is unavoidable, and it should be:
Emails = find_unsent_emails();
FOR Email in Emails
    Email.Send()
Is there a name for this one?
I'll have a go at it and coin the name "the lazy filter (anti) pattern".
I saw that once. That programmer wasn't around too long.
We called that the "firehose method".
To me it's Joel Spolsky's leaky abstraction.
It's not exactly an anti-pattern, but whoever wrote this code didn't really understand where the Active Record pattern's abstraction leaks.
I call that "The Shotgun Approach".
I'm not sure this is necessarily database related, since you could have a complex and expensive procedure (e.g., more than checking a flag) for applying a filter to a group.
I don't think there's a name for it, since the first design is simply not good, and it violates the single-responsibility principle. If you search, filter, and print the filtered results, you are doing multiple things, so you need to refactor it into "search filtered" and "print".
The only thing different than a simple refactoring here is that it also affects performance, in the same way that inner loops can be designed in ways that harm performance.
It appears to have derived from the following anti-patterns:
Standing On The Shoulders Of Midgets
If It Is Working Don't Change
The original developer possibly would not have been allowed to write the find_unsent_emails() implementation, and would therefore have reused the midget function. And then, why change it after development and testing?
This is frequently due to it being a lot easier to use an existing query and filter in code than to get a new SQL query added. Maybe because the DBAs control all queries and getting a new query approved takes days, or maybe because the ORM tool you're using makes it very difficult to define your own custom queries.
If I were to name it I'd call it the "Easy Way Out" (anti)pattern. Whether it's an antipattern or not really depends on the individual situation. If it will always be a fairly small number of items you need to retrieve, doing the filtering in code really isn't a big problem. But if the number of items is large and has the potential to continually grow, then obviously the filtering should be done on the server.
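For the large-and-growing case, pushing the filter into the query is straightforward; here is a sketch in Go with database/sql (the table, column, and connection details are invented for illustration):

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // Postgres driver; an arbitrary choice here
    )

    // findUnsentEmails pushes the filter into SQL so only unsent rows ever
    // cross the wire -- the server-side filtering described above.
    func findUnsentEmails(db *sql.DB) ([]string, error) {
        rows, err := db.Query(`SELECT address FROM emails WHERE sent = false`)
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var addresses []string
        for rows.Next() {
            var addr string
            if err := rows.Scan(&addr); err != nil {
                return nil, err
            }
            addresses = append(addresses, addr)
        }
        return addresses, rows.Err()
    }

    func main() {
        db, err := sql.Open("postgres", "dbname=mail sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        unsent, err := findUnsentEmails(db)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("%d emails to send", len(unsent))
    }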
I've seen similar issues elsewhere, where instead of a simple array of things to do, there was a "transaction cluster" based on a "list cluster" based on a "collection cluster" based on a "memory cluster". Needless to say, the simplest thing turned into a great big freakin' deal.
I called it galloping generality.
Stoopid Amateurs.
Seriously, I've only seen this one in people with Computer Science degrees and no professional experience at all. When I was teaching at Duke, my advisor and I ran a "Large Scale Programming" class where we made people look at exactly these sorts of errors.
The performance of the first one can actually be fine, depending on the type of Emails. If it's just an iterator (think of std::vector::begin() in C++), then it's fine, and better than first storing all unsent e-mails in some container.
This antipattern has several possible names.
"Don't-know-SQL" antipattern
"Fascist-DBA" antipattern
"What-does-'latency'-mean?" antipattern
There is a nice example at The Daily WTF.
Inspired partly by 1800's "the lazy filter (anti) pattern", how about "dysfunctional programming" (i.e. the opposite of functional programming)?

AJAX and forms?

I have some forms that communicate with the server using AJAX for good reasons: cascading combos, suggestions, multiple correlated selections (e.g. I have {elementary} knowledge of {French} [add], and {good} knowledge of {German} [add]...). I also have some regular fields that I handle through GET.
The thing is, once I've made a connection to the server side, it would be easier for me to push all the data that way. Is that going too far? What about if I have no reason for AJAX in the first place and I still use it for pushing form data? I would feel obligated to provide a fallback for people with JavaScript off, but most of the underlying logic would be the same, so it doesn't seem like much of an overhead to me. It's the kind of data I would push through POST anyway, so I'm not losing GET parameters that would be useful for something.
Any reason not to do things this way?
User experience is an important part of any software product. If you can improve the experience by providing better interactions, there's no reason not to do it.
Make sure, though, that you write unobtrusive and degradable JavaScript, so users with screen readers or JavaScript-disabled browsers can still complete the interactions.
The only problem with this strategy is that you're in a lot deeper trouble if someone decides they want a non-JavaScript solution as well. I think it's fairly wise to use the "least fancy" mechanism that will get the desired result on the web. If it's just a form post, then keep it a form post unless there's a reason to do otherwise.