I'm getting started with CQRS, and thought it would make the most sense to use the Command object as the Model on my forms. I could take advantage of client-side validation for the commands using DataAnnotations, which makes it pretty clean...
My question: does this raise any problems? If my command does not have a default constructor, does that make this approach impossible? Do I need to create my own CommandModelBinder that can constructor-inject the aggregate ID?
Your thoughts? I can't find this technique described anywhere, and I'm assuming that's because it doesn't work.
I'd recommend that you take a look at Greg Young's article on task-based UIs for how DTOs and Messages interact with your system (both client-side and server-side).
I agree with Sebastian that your Commands won't match exactly what your user interface looks like. As a result, you'll probably need separate DTO/Model classes and Commands. That's not a bad thing: your Model is really the result of the query side of the system and shouldn't be an exact duplicate of the messages you're sending into the system.
Also, by keeping your commands separate from your model, your concern about Command constructors goes away. Your controller just collects information from the client, constructs the command and then submits it.
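To make that concrete, here's a rough, language-agnostic sketch (written in Python purely for brevity; the class names, the validate method and the bus.send call are all hypothetical) of the shape being described: a view model that the framework binds and validates, and a separate immutable command built in the controller.

```python
# Hypothetical sketch: the form model is bound/validated by the framework;
# the command is a separate immutable message built in the controller.
from dataclasses import dataclass
import uuid


@dataclass
class ChangeCustomerNameForm:
    """View model bound to the form; this is what validation runs against."""
    customer_id: str
    new_name: str

    def validate(self):
        errors = []
        if not self.new_name or len(self.new_name) > 100:
            errors.append("new_name must be between 1 and 100 characters")
        return errors


@dataclass(frozen=True)
class ChangeCustomerNameCommand:
    """Command sent into the domain; it never needs a default constructor."""
    command_id: uuid.UUID
    aggregate_id: uuid.UUID
    new_name: str


def change_customer_name_action(form: ChangeCustomerNameForm, bus):
    """Controller action: validate the view model, then build and send the command."""
    errors = form.validate()
    if errors:
        return {"status": 400, "errors": errors}
    command = ChangeCustomerNameCommand(
        command_id=uuid.uuid4(),
        aggregate_id=uuid.UUID(form.customer_id),
        new_name=form.new_name,
    )
    bus.send(command)
    return {"status": 202}
```

The key point is that nothing ever model-binds to the command directly, so its constructor requirements stop mattering to the UI framework.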
If you're getting started with CQRS, Greg's site (cqrsinfo.com) is pretty good, especially his 6 1/2 hour video. Yes, it's 6 1/2 hours, but it really is a great introduction to and overview of what CQRS is all about. It helped me out tremendously.
Hope this helps!
Using a POST to send your command to the domain command handlers seems sensible, but it's unlikely to be the exact object you bind your interface to. Commands in the interface (e.g. mouse clicks) become domain commands (e.g. CreateUser). Your GUI is most likely bound to the results of a query.
For the reasons you mentioned, you would create View Models, which are basically your DTOs sent between client and server. This way you can use all the MVC goodness like model binding, data annotations, etc. In your controller you would then create the command and send it to the service bus.
I think this will help you separate concerns a little better and, in my opinion, it will be easier to test.
In the current plan, incoming commands are handled via Function Apps, which results in Events being sent to an Event Hub, and the views are then materialized from those events.
Someone is arguing that instead of storing events in something like Table Storage, and materializing views based on events and snapshots, we should:
Just stream events to a log in Azure Monitor for auditing
We can make changes to a domain object immediately in response to a command and use the change feed as our source of events for materialized views.
He doesn't see the advantage of even having a materialized view. Why not just use a query? His argument is that we don't expect a lot of traffic.
He wants to fulfill the audit requirement by saving events to the Azure Monitor log, i.e. just an application log. Commands would instead directly modify the representation of an entity in Cosmos DB, and we'd use the Cosmos DB change feed as our domain object events, or create new events from it via subscribers to that stream.
Is this actually an advantageous approach? Can y'all think of any reasons why we wouldn't want to do that? It seems like we'd be losing something here.
He's saying we'd no longer need to be concerned with eventual consistency, as we'd have immediate consistency.
Every reference implementation I've evaluated does NOT do it the way he's suggesting. I'm not deeply versed in the advantages/disadvantages of the event sourcing / CQRS paradigm, so I'm at a loss at the moment. I'm currently researching furiously.
This is a conceptual issue, so there's not really a code example. However, here are some references that seem to back up the approach I'm taking:
https://medium.com/@thomasweiss_io/planet-scale-event-sourcing-with-azure-cosmos-db-48a557757c8d
https://sajeetharan.com/2019/02/03/event-sourcing-with-azure-eventhub-and-cosmosdb/
https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
If your goal is only to have the audit log, state-based persistence could be a good choice. Event sourcing adds some complexity to the implementation side and unless you can identify more advantages of using it, you might not convince your team to bring this complexity to the system. There are numerous questions and answers on SO, as well as in some blog posts, about pros and cons of event sourcing, so I won't get into that discussion here.
I can warn you, though, that the second article in your list is very weak and would most probably lead you into many difficulties. The role of Event Hub there is completely unclear, and it doesn't explain anything about projections and read-models (what you call "materialised views"). Only a very limited number of use-cases can live with only getting one entity by id and without being able to execute a query across multiple entities. That also probably answers your concern about having read-models at all. You will need them very soon, as soon as you first have to figure out how to get a list of entities based on some condition (a query).
Using CosmosDb as the event store is completely feasible, as described in the first article, if you can manage the costs involved. Just remember to set the change feed TTL to -1; otherwise you won't be able to replay your projections when you need to.
To summarise:
Keeping the audit log can be done without event-sourcing, but you need to ensure that events are published reliably, preferably in the same transaction as the entity state update. That is often hard or impossible, but you might accept the risk if your audit requirement is not strict. You can also base your audit log on the CosmosDb change feed, just collecting document changes and logging them somewhere.
Event sourcing is a powerful technique but it has both pros and cons. The most common prejudice against using event sourcing is its implementation complexity. It might not be a big issue if you have a team that is somewhat experienced in building event-sourced systems. If you don't have such a team, you might want to build a small-scale spike to get some experience.
If you don't get full buy-in from the team to use event sourcing, you will later get all the blame if anything goes wrong. And it will go wrong at some point, especially with little experience in this area.
Spend some time reading books and trying out things yourself, before going wild in production.
Don't use Event Hub for anything it was not designed for. Event Hub is a powerful event ingestion transport with a limited TTL, and it should be used for that purpose.
Don't use Table Storage as the event store, unless you only read entities by id. I used it in production for such a scenario and it worked (to some extent) but you can't project read-models from there.
A simple rule of thumb is to not use products for tasks they weren't designed for.
Azure Monitor was not designed to store application domain data. Azure Monitor is designed to store telemetry data from your applications and services and provides features such as alerts and other types of integration into DevOps tools for managing the operation and health of your apps.
There is a simple reason why you were able to find articles on event sourcing using Cosmos DB and why our own docs talk about it: it was designed to be used this way. It is simple to set up Cosmos DB as an append-only event store for your applications and use the Change Feed to fire off messages in other apps or services or, in your case, to maintain a materialized view of the state of domain objects within your app.
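To illustrate what that might look like in practice, here is a minimal sketch in plain Python (document shapes, event types and the in-memory views store are hypothetical, and SDK calls are deliberately omitted to stay generic): an append-only event document and a projection function that a change feed handler could use to keep a materialized view up to date.

```python
# Sketch of the moving parts: an append-only event document shape and a projector
# that a change feed handler would call to maintain a materialized view.
# (Names and event types are hypothetical.)

def make_event(stream_id, sequence, event_type, data):
    """Shape of a document written to the append-only 'events' container."""
    return {
        "id": f"{stream_id}:{sequence}",   # unique per event
        "streamId": stream_id,             # partition key: one stream per aggregate
        "sequence": sequence,
        "type": event_type,
        "data": data,
    }


def project_order_view(view, event):
    """Fold one event into the materialized 'order view' document."""
    if event["type"] == "OrderPlaced":
        view = {"id": event["streamId"], "status": "placed",
                "total": event["data"]["total"]}
    elif event["type"] == "OrderShipped":
        view = {**view, "status": "shipped"}
    return view


def handle_change_feed_batch(events, views):
    """What a change feed processor would do: apply new events to the views store."""
    for event in sorted(events, key=lambda e: e["sequence"]):
        current = views.get(event["streamId"], {})
        views[event["streamId"]] = project_order_view(current, event)
    return views
```

The same projector can be replayed over the whole event container to rebuild a view from scratch, which is exactly what you lose if the "event log" only lives in Azure Monitor.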
There's no doubt about the benefits of API Gateway+Lambda for microservices.
My concern is what would happen if we decide to move off API Gateway+Lambda to ECS/Fargate, or even another Cloud.
There seems to be a consensus on using one Lambda function per route/action.
I have some theories about how to design using this approach such that the code can be unplugged from Lambda and plugged in somewhere else.
I would also like to know what others in the community have done to achieve this. Has anyone attempted to move an API off Lambda and managed to do it successfully using XXXX design? What are the lessons there?
The language should not really matter to this discussion, but we are using Python 3.
What you're facing right now has a name. It's called "vendor lock-in". Pretty much nothing you can do about it.
However, I find it useful to treat the AWS Lambda handler function as a controller in your web server. What would you do in a controller? You'd validate incoming data, pass it to the service layer, then serialize the response from the service and pass it back to API Gateway. Long story short, your handler function should not contain business logic, which makes it easy to migrate even from serverless to servers. It can also be good because it leaves some room for optimization. If you find that your service layer architecture adds significant time to cold starts, just denormalize it into a single file. It'll work faster, but you'll sacrifice code maintainability. There is no silver bullet; software has always been about trade-offs. :)
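For example, a thin handler along these lines (the create_order service and its payload are hypothetical) keeps all the AWS-specific wiring in one place, so the service layer could later be mounted behind Flask/FastAPI on ECS/Fargate without changes.

```python
import json


# Hypothetical service layer: plain Python, no AWS imports, so it can be reused
# behind any web framework or even a CLI.
def create_order(payload: dict) -> dict:
    if "item_id" not in payload:
        raise ValueError("item_id is required")
    # ... business logic would live here ...
    return {"order_id": "abc123", "item_id": payload["item_id"]}


# Thin Lambda handler: it only translates between API Gateway's proxy event
# format and the service layer, like a controller in a traditional web server.
def handler(event, context):
    try:
        payload = json.loads(event.get("body") or "{}")
        result = create_order(payload)
        return {"statusCode": 201, "body": json.dumps(result)}
    except ValueError as err:
        return {"statusCode": 400, "body": json.dumps({"error": str(err)})}
```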
I have an application that lends itself to an event/listener model. Several different kinds of data get published (events), and many different things may or may not need to act on that data (listeners). There's no specific order the listeners need to happen in, and each listener determines whether or not it needs to act on the event.
What tools for Rails apps are there to accomplish this task? I'm hoping to not have to do this myself (although, I can. It's not THAT big a deal.)
Edit: Observer pattern might be a better choice for this
Check out EventMachine. It is a very popular event-processing library for Ruby. It looks quite good, and a lot of other libraries seem to take advantage of it (Cramp).
Here is a good introduction: http://rubylearning.com/blog/2010/10/01/an-introduction-to-eventmachine-and-how-to-avoid-callback-spaghetti/
You'll probably want to hook into ActiveRecord's Observer class.
http://api.rubyonrails.org/v3.2.13/classes/ActiveRecord/Observer.html
With it, your models can execute custom logic for several lifecycle events:
http://api.rubyonrails.org/classes/ActiveRecord/Callbacks.html
If I understand your intent correctly, all you'll need to do is call the methods that represent your listeners' reactions to an event from those callbacks.
You may want to use ActiveSupport::Notifications.instrument.
It is a general-purpose bridge for decoupling event sending from event reacting. It's geared towards executing all listeners during a single web request, unlike EventMachine, which is geared towards having lots of concurrent things happening.
I have created a Ruby gem that responds exactly to this use case: event_dispatcher
This gem provides a simple observer implementation, allowing you to subscribe and listen for events in your application in a simple and effective way.
What are some best practices to keep in mind when developing a script program that could be integrated with a GUI, probably by somebody else, in the future?
Possible scenario:
1. I develop a fancy Python CLI program that scrapes unicorn images from the web
2. I decide to publish it on GitHub
3. A unicorn fan programmer decides to take the sources and build a GUI on them
4. He/she gives up because my code is not reusable
How do I prevent step four and let the unicorn fan programmer build his/her GUI without too much hassle?
You do it by applying a good portion of layering (maybe implementing the MVP pattern) and treating your CLI as a UI in its own right.
UPDATE
This text from the wikipedia article about the Model-View-Presenter pattern explains it quite well.
Model-view-presenter (MVP) is a user interface design pattern engineered to facilitate automated unit testing and improve the separation of concerns in presentation logic.
The model is an interface defining the data to be displayed or otherwise acted upon in the user interface.
The view is an interface that displays data (the model) and routes user commands (events) to the presenter to act upon that data.
The presenter acts upon the model and the view. It retrieves data from repositories (the model), persists it, and formats it for display in the view.
The main point being that you need to work on separation of concerns in your application.
Your CLI would be one implementation of a view, whereas the unicorn fan would implement another view for a rich client. The unicorn fan would base his view on the same presenters as your CLI. If those presenters are not sufficient for his rich client, he could easily add more, because each presenter is based on data from the model. The model, in turn, is where all the core logic of your application lives. Designing a good model is an entire subject in itself. You may be interested in reading, for example, about Domain-Driven Design, even though I don't know how well it applies to your current application. But it's interesting reading anyway.
As you can see, the wikipedia article on MVP also talks about testability, which is also crucial if you want to provide a robust framework for others to build on. To reach a high level of testability in your code-base, it is often a good idea to use some kind of Dependency Injection framework.
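As a rough sketch of that shape (Python, with hypothetical class names), the presenter depends only on the model and on whatever view it is handed, so your CLI view and the unicorn fan's GUI view can share it:

```python
# Rough MVP sketch: the presenter only talks to the model and a view object,
# so a CLI view and a GUI view can reuse it unchanged. (Names are hypothetical.)

class UnicornModel:
    """Core logic: knows how to find unicorn images, nothing about any UI."""
    def find_images(self, limit):
        return [f"https://example.org/unicorn{i}.png" for i in range(limit)]


class CliView:
    """One view implementation: plain text output for the command line."""
    def show_images(self, urls):
        for url in urls:
            print(url)


class Presenter:
    """Mediates between the model and whatever view it is given."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def show_latest_unicorns(self, limit=10):
        self.view.show_images(self.model.find_images(limit))


if __name__ == "__main__":
    Presenter(UnicornModel(), CliView()).show_latest_unicorns(limit=3)
```

The unicorn fan would only need to supply another object with a show_images method (a GuiView) and reuse the same presenter and model.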
I hope this gives you a general idea of the techniques you need to employ, although I understand that it may be a little overwhelming. Don't hesitate to ask if you have any further doubts.
/Klaus
This sounds like a question about how to write reusable code.
When considering reusability of code, generally speaking, one should try to:
separate functionality into modules
have a well-defined interface
Separating functionality into modules
One should try to separate code into parts that have a simple responsibility. For example, a program that goes out to the internet to scrape pictures of unicorns may be separated into sections that a) scrape the web for images, b) determine whether an image is a unicorn, and c) store the unicorn images in some specified location.
Have a well-defined interface
Having a well-designed interface, an API (application programming interface), is going to be crucial to providing a way to reuse or extend an application.
Providing entry points into each functionality will allow other programmers to actually write a new user interface for the provided functionality.
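As an illustration, a hypothetical decomposition of the unicorn scraper might look like this (all functions are stubs, just to show the seams and the single public entry point a GUI could call):

```python
# Hypothetical decomposition of the unicorn scraper into small functions with a
# single public entry point that any front end (CLI or GUI) can call.

def find_image_urls(start_url):        # part a): crawling/scraping
    """Return candidate image URLs found from start_url (stubbed here)."""
    return ["https://example.org/maybe-unicorn.png"]


def is_unicorn(image_url):             # part b): classification
    """Decide whether the image is a unicorn (stubbed heuristic)."""
    return "unicorn" in image_url


def save_image(image_url, destination):  # part c): storage
    """Persist the image to the given destination (stubbed)."""
    print(f"saving {image_url} to {destination}")


def scrape_unicorns(start_url, destination):
    """The public API entry point a GUI or CLI would call."""
    saved = []
    for url in find_image_urls(start_url):
        if is_unicorn(url):
            save_image(url, destination)
            saved.append(url)
    return saved
```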
The solution to this kind of problem is very simple, but in practice a lot of junior programmers have trouble with this pattern. Here it is:
You design a unicorn-scraping API. This is the hard step; good API design is insanely hard, and there aren't many examples to study. One API that I think is worth studying is the one in Dave Hanson's book C Interfaces and Implementations.
Then you design your command-line interface. If the functionality you are exposing is not too complicated, this is not too hard. But if it's complicated, you may want to think seriously about managing your API using an embedded scripting language like Lua or Tcl and designing an interface for scripting rather than for the command line.
Finally you write your command-line processing code and glue everything together.
Your hypothetical successor builds his or her GUI in one of two ways: using the embedded scripting language, or directly on top of your API.
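To make the glue concrete, the command-line layer might be little more than this (a hypothetical, self-contained sketch; scrape_unicorns stands in for your API's real entry point), and your successor's GUI would simply call that same entry point instead of main():

```python
# Hypothetical CLI glue: it only parses arguments and delegates to the API,
# so a GUI or an embedded scripting layer can call the same entry point directly.
import argparse


def scrape_unicorns(start_url, destination):
    """Stand-in for the real API entry point (see the module sketch above)."""
    return []


def main():
    parser = argparse.ArgumentParser(description="Scrape unicorn images.")
    parser.add_argument("start_url")
    parser.add_argument("--dest", default="./unicorns")
    args = parser.parse_args()

    saved = scrape_unicorns(args.start_url, args.dest)
    print(f"saved {len(saved)} unicorn image(s) to {args.dest}")


if __name__ == "__main__":
    main()
```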
As noted in other answers, model/view/controller may be a good pattern to use in designing your API.
You'll be taking input, executing an action, and presenting output. It might be a good idea to use a callback mechanism (such as event handlers, passing a method as a parameter, or passing this/self to the called class) to decouple the input and output methods from the execution of the action.
Aside from this, program to an interface, not to an implementation - the essence of MVC/MVP, as klausbyskov mentioned. For example, don't directly call file.write(); make myModel.saveMyData(), which calls file.write, so someone else can make somebodysModel.saveMyData() that writes to a database instead.
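A tiny, hypothetical illustration of that idea (the names and the file path are made up): the model depends on anything with a write method, so a file-backed store and a database-backed store are interchangeable.

```python
# The model depends on an abstract "store" interface, not on file.write directly.

class FileStore:
    def write(self, data):
        with open("mydata.txt", "w") as f:  # hypothetical output file
            f.write(data)


class DatabaseStore:
    def write(self, data):
        pass  # e.g. INSERT into a table instead of writing a file


class MyModel:
    def __init__(self, store):
        self.store = store  # interface: anything with a write(data) method

    def save_my_data(self, data):
        self.store.write(data)


MyModel(FileStore()).save_my_data("unicorns everywhere")
```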
I have recently designed a web application that I would like to write in Ruby. Coming from an ASP background, I designed it with methods and fields and linked them together (in my diagram and UML) like I would do in C#.
However, now that I've moved from a single app to MVC I have no idea where my code goes or how the pieces are linked.
For example, my application basically collects information from various sources for users, and when they log in, the information is presented to them, with "new" information (information collected since their last login) tagged specially in the interface.
In C# I would have a main loop that waits, say, 5 minutes and does the collection; then when a client tries to connect, it would spawn a new thread that generates the page with the new information. Now that I'm moving to Ruby, I'm not sure how to achieve the same result.
I understand that the controller connects the model to the view, and I thus assume this is where my code goes, yet I haven't seen a tutorial that talks about doing what I've mentioned. If someone could point me to one, or tell me precisely what I need to do to turn my pseudocode into production code, I'd be extremely grateful and probably will still have hair :D
EDIT: Somehow I forgot to mention that I'll be using the Rails framework. I don't really like Ruby, but Ruby and Rails together are so nice that I think I can put up with it.
The part of your application that retrieves the data at a certain interval shouldn't, strictly speaking, be part of the web application. In the Unix world (including Rails), it would be implemented either as a daemon process or a cron job. On Windows, I presume a Windows service is the right tool.
Regarding the C# -> Ruby transition: if that's purely for Rails, I'd listen to George's advice and give ASP.NET MVC a shot, as it resembles Rails' logic pretty closely (some would call it a ripoff, I guess ;)). However, learning a new language, especially one as different from C# as Ruby is, is always a good idea and a way to improve yourself as a developer.
I realize you want to move to Ruby; but you may want to give ASP.NET MVC a shot. It's the MVC framework on the ASP.NET platform.
Coming from ASP, you're going to have to do a lot of conversion to make your code more modular. Much more than any one post on Stack Overflow can do justice to.
MVC is made up of 'tiers':
Model - Your Data
View - What the user sees
Controller - Handles requests and communicates with the View and Model.
Pick up a book on ASP.NET MVC 1.0, and do some research on the MVC pattern. It's worth it.
Whatever Ruby web framework you plan to use (Rails, merb, Sinatra), it sounds like the portion that collects this data would typically be handled by a background task. Your models would be representations of this data, and the rest of your web app would be pretty standard.
There are some good Railscast episodes on performing tasks in the background:
Rake in Background
Starling and Workling
Custom Daemon
Delayed Job
There are other options for performing tasks in the background (such as using a message queue and the ActiveMessaging plugin) but these screen casts will at least give you a feel for how background jobs are generally approached in Rails.
If you perform these tasks on a regular schedule, there are tools for that as well.
I hope this is of some help.
Check out Rails for .NET Developers. I've heard good things about this book and it sounds like it's just what you're looking for.