Recently, I found my way to The Clean Architecture post by Uncle Bob. But when I tried to apply it to a current project, I got stuck when a use case needed to depend on another use case.
For example, my Domain Model is Goal and Task. One Goal can have many Tasks. When I update a Task, it needs to update the information of its parent Goal. In other words, the UpdateTask use case will have the UpdateGoal use case as a dependency. I am not sure if this is acceptable, or if we should avoid use-case-level dependencies.
A use case corresponds to a piece of functionality of your application. Generally, when one use case needs to invoke another, it is a sign that something in the design does not work.
When you update a goal in isolation, it is not the same scenario as when you update it because of a change in a task; in the second case you will almost certainly update only part of the goal's data, not all of it.
You will surely still use the goal repository and the goal entity, but it is a completely different scenario. In your case you are not duplicating logic, only calls to the repository or the entity, and saving a few lines of code now can be expensive in the future.
In short, it is not a good idea to have dependencies between use cases.
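As a rough sketch of what that can look like in Java (all names here, UpdateTaskUseCase, TaskRepository, GoalRepository and the entity methods, are made up for illustration and not taken from your project), the task use case can touch the parent Goal through its repository and entity directly instead of depending on the UpdateGoal use case:

```java
import java.util.List;

// Hypothetical domain types; only Goal and Task come from the question, the rest is illustrative.
class Task {
    private final long id;
    private final long goalId;
    private String title;
    private boolean done;

    Task(long id, long goalId, String title, boolean done) {
        this.id = id; this.goalId = goalId; this.title = title; this.done = done;
    }

    void update(String title, boolean done) { this.title = title; this.done = done; }
    long getGoalId() { return goalId; }
    boolean isDone() { return done; }
}

class Goal {
    private final long id;
    private int progressPercent;

    Goal(long id) { this.id = id; }

    long getId() { return id; }

    // Only the part of the goal affected by task changes is recalculated here;
    // this is a different scenario from the standalone UpdateGoal use case.
    void recalculateProgress(List<Task> tasks) {
        long doneCount = tasks.stream().filter(Task::isDone).count();
        this.progressPercent = tasks.isEmpty() ? 0 : (int) (100 * doneCount / tasks.size());
    }
}

interface TaskRepository {
    Task findById(long taskId);
    List<Task> findByGoalId(long goalId);
    void save(Task task);
}

interface GoalRepository {
    Goal findById(long goalId);
    void save(Goal goal);
}

// The use case depends on repositories and entities, not on the UpdateGoal use case.
public class UpdateTaskUseCase {

    private final TaskRepository taskRepository;
    private final GoalRepository goalRepository;

    public UpdateTaskUseCase(TaskRepository taskRepository, GoalRepository goalRepository) {
        this.taskRepository = taskRepository;
        this.goalRepository = goalRepository;
    }

    public void execute(long taskId, String newTitle, boolean done) {
        Task task = taskRepository.findById(taskId);
        task.update(newTitle, done);
        taskRepository.save(task);

        Goal goal = goalRepository.findById(task.getGoalId());
        goal.recalculateProgress(taskRepository.findByGoalId(goal.getId()));
        goalRepository.save(goal);
    }
}
```

This way UpdateGoal stays focused on updating a goal from user input, while UpdateTask owns the small, specific adjustment its own scenario needs.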
I've read the Axon documentation and looked at all the provided sample projects, especially the AxonBank which I'm referencing here, but one thing is still bothering me and is not explained as far as I can see:
It is my understanding that in Axon you perform queries against a read database which represents the materialized view, e.g. an H2 database that contains the latest BankAccount JPA entity (here). However, if you have a Spring repository, e.g. JpaRepository<BankAccount, Long> (here), you also have the save method, which should only be used for commands. Shouldn't you split the repository into a read-only and a write-only repository?
Could someone also point me to the documentation on how Axon works with this repository? To an uninitiated developer it looks like a "normal" JPA repository, i.e. the entity seems mutable and always up to date.
But from a theoretical perspective I expect an immutable entity starting from a zero state, where a projection is created by applying all events. Does this happen in the background with Axon?
What would happen if I update the entity with JpaRepository#save but not the aggregate? Will they be out of sync?
It seems that we have two sources of truth in this case, which shouldn't be the case theoretically.
Let me try to help you!
What you are describing is the CQRS pattern - especially the Query side!
The Repository you mentioned is usually used in your @EventHandler methods to build your projections, which will store the data the way you need it!
Looking at the AxonBank, it should be clearly visible here.
I don't think there is anything in the Axon documentation specifically about it, but indeed this is a regular JPA repository. Of course, you can use whatever you want as your Query side.
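To make that concrete, here is a minimal sketch of such a projection, assuming Axon 4 with Spring and Spring Data JPA; the event classes, the entity constructor and the accessors are illustrative, not the exact AxonBank types:

```java
import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

// Illustrative projection: the JpaRepository is written to only from @EventHandler
// methods, never directly from command-handling code.
@Component
public class BankAccountProjection {

    private final BankAccountRepository repository; // extends JpaRepository<BankAccount, Long>

    public BankAccountProjection(BankAccountRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(BankAccountCreatedEvent event) {
        repository.save(new BankAccount(event.getAccountId(), 0L));
    }

    @EventHandler
    public void on(MoneyDepositedEvent event) {
        BankAccount account = repository.findById(event.getAccountId())
                .orElseThrow(() -> new IllegalStateException("Projection not yet created"));
        account.setBalance(account.getBalance() + event.getAmount());
        repository.save(account);
    }
}
```

The command side never touches this repository; it only publishes events from the aggregate, and the projection catches up by handling them.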
What would happen if I update the entity with JpaRepository#save but not the aggregate? Will they be out of sync?
In this case, your view model would be updated based on something other than events, which is not what you want. This repository should be updated only based on events, which, most of the time, are published by your Aggregates!
It seems that we have two sources of truth in this case, which shouldn't be the case theoretically.
Regarding your question about the source of truth: your Events should always be the source of truth. In the end, you should not update the repository other than from @EventHandler methods.
My use case is very simple. I want to create a chain where peers can store some public data. What is the best way to accomplish that in Substrate?
I think I should implement a custom runtime for that, but I'm not sure how to create a transaction that sends data. I didn't find anything on that.
You may be looking for something like the system module's remark transaction.
It allows users to submit arbitrary pieces of data and have them attested to by the blockchain. That feature is available in any Substrate-based blockchain, including the node template.
A good place to start learning how to build custom runtimes and explore your idea more is the Proof of Existence tutorial.
I would like to know when I need to use rollback.
I understand that a rollback is something that reverts the DB structure, but (I think) it affects my local DB only.
If something is wrong in the structure, then it is better to create another migration and push it so the whole team can pull it.
It's hard to say when you need to roll back. It depends on individual needs and circumstances. Laravel provides a way to roll back if someone needs to do it.
Sometimes changes to your database may break your existing functionality and you may need to roll back to get back to the previous state of your application, or maybe you just want to remove some functionality from your application; that is why you may need to roll back the changes you have made to your database. But the option being available is not a reason to use it: it is an option, and you are not forced to use it.
Cricket players need to lift weights as part of their training, and even footballers do it, but they never actually lift any weights while playing on the field, so why should they lift weights at all?
Not every option is for everybody, but the framework provides these functionalities to be complete, that's it. If you ever need to use an option, use it; otherwise, leave it.
In a Maven project, I set up the database schemas with sql-maven-plugin during the process-test-resources phase. The project has N database shards, which I set up with N repeated execution blocks that have exactly the same content except for the database name. Everything works as expected.
The problem is that with a growing number of shards the number of similar blocks grows, which is cumbersome and makes maintenance annoying (since, by definition, all of those databases are literally the same). I would like to be able to define a "list" of database names and let sql-maven-plugin run once for each, without having to define the whole block many times.
I'm not looking for changes to the test setup, as I positively want to set up as many shards as needed in the test environment. I just need some "Maven sugar" to conveniently define the values over which the executions should "loop".
I understand that Maven does not support iteration by itself and am looking for alternatives or ideas on how to achieve this better. Things that come to mind are:
Using/writing a "loop" plugin that manages the multiple parameterized executions
Extending sql-maven-plugin to support my use case
???
Does anyone have a better/cleaner solution?
Thanks in advance.
In this case I would recommend using the maven-antrun-plugin to handle the situation, but of course it is also possible to implement a dedicated Maven plugin for this kind of purpose.
I'm not sure where I should implement the caching in my repository pattern.
Should I implement it in the service-logic or in the repository?
GUI -> BusinessLogic (Services) -> DataAccess (Repositories)
It's a good idea not to put the caching logic directly into your repository, as that violates the Single Responsibility Principle (SRP) and Separation of Concerns. SRP essentially states that your classes should only have one reason to change. If you conflate the concerns of data access and caching policy in the same class, then if either of these needs to change you'll need to touch the class. You'll also probably find that you're violating the DRY principle, since it's easy to have caching logic spread out among many different repository methods, and if any of it needs to change, you end up having to change many methods.
The better approach is to use the Proxy or Strategy pattern to apply the caching logic in a separate type, for instance a CachedRepository, which then uses the actual db-centric repository as needed when the cache is empty. I've written two articles which demonstrate how to implement this using .NET/C#, which you will find on my blog, here:
http://ardalis.com/introducing-the-cachedrepository-pattern
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
If you prefer video, I also describe the pattern in the Proxy Design Pattern on Pluralsight, here:
https://app.pluralsight.com/library/courses/patterns-library/table-of-contents
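For readers who want the shape of the idea without leaving this page, here is a rough sketch of the Proxy variant in Java (the articles use .NET/C#; Author, AuthorRepository and the in-memory cache are all illustrative):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative domain type and repository port.
record Author(long id, String name) {}

interface AuthorRepository {
    Optional<Author> findById(long id);
}

// Proxy: same interface, adds the caching policy, and delegates cache misses
// to the real db-centric repository it wraps.
class CachedAuthorRepository implements AuthorRepository {

    private final AuthorRepository inner;
    private final Map<Long, Author> cache = new ConcurrentHashMap<>();

    CachedAuthorRepository(AuthorRepository inner) {
        this.inner = inner;
    }

    @Override
    public Optional<Author> findById(long id) {
        Author cached = cache.get(id);
        if (cached != null) {
            return Optional.of(cached);
        }
        Optional<Author> loaded = inner.findById(id);   // cache miss: hit the real repository
        loaded.ifPresent(author -> cache.put(id, author));
        return loaded;
    }
}
```

The business layer only ever sees the AuthorRepository interface, so whether it gets the cached proxy or the plain repository is decided in one place (the composition root), and each class keeps a single reason to change.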
I would handle it in the repository/data access layer. The reasoning is that it isn't up to the business layer to decide where the data comes from; that is the job of the repository. The repository then decides where to get the data from, the cache (if it's not too old) or the live data source, based on the circumstances of the data access logic.
It's a data access concern more than a business logic issue.
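For contrast with the Proxy sketch above, here is a minimal sketch of this alternative, with the caching (including a simple "not too old" check) owned by the repository itself; the class names and the five-minute TTL are made up for illustration:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative types: here the repository owns its caching policy.
record Product(long id, String name) {}

interface ProductDataSource {
    Product load(long id);
}

class ProductRepository {

    private record CacheEntry(Product value, Instant loadedAt) {}

    private static final Duration MAX_AGE = Duration.ofMinutes(5); // illustrative TTL

    private final Map<Long, CacheEntry> cache = new ConcurrentHashMap<>();
    private final ProductDataSource dataSource;

    ProductRepository(ProductDataSource dataSource) {
        this.dataSource = dataSource;
    }

    Product findById(long id) {
        CacheEntry entry = cache.get(id);
        if (entry != null && Duration.between(entry.loadedAt(), Instant.now()).compareTo(MAX_AGE) < 0) {
            return entry.value();                                  // fresh enough: serve from cache
        }
        Product loaded = dataSource.load(id);                      // stale or missing: hit the live source
        cache.put(id, new CacheEntry(loaded, Instant.now()));
        return loaded;
    }
}
```

The trade-off versus the Proxy approach is that callers never see a separate caching type, but the repository now has two reasons to change: data access and caching policy.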