I have a multi-module application, consisting of an interface in module A and an implementation in module B. Module A is linked to module B using dependency inversion: module B's implementation is injected into module A via @Autowired. The main application includes both modules A and B, so it runs without errors. However, if module A runs on its own, it cannot find the implementation and throws an error.
May I seek your advice on how best to implement dependency inversion so that we follow Clean Architecture and also allow module A to run on its own?
Hope this diagram gives a better illustration of our problem
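Roughly, the setup looks like this (a minimal sketch with made-up names, assuming Spring annotations):

```java
// module-a/src/main/java/com/example/a/PaymentGateway.java
// The abstraction lives in module A.
public interface PaymentGateway {
    void pay(long amountCents);
}

// module-a/src/main/java/com/example/a/CheckoutService.java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class CheckoutService {

    @Autowired
    private PaymentGateway gateway;  // nothing to inject when module B is absent

    public void checkout(long amountCents) {
        gateway.pay(amountCents);
    }
}

// module-b/src/main/java/com/example/b/DefaultPaymentGateway.java
import org.springframework.stereotype.Component;

@Component
public class DefaultPaymentGateway implements PaymentGateway {
    @Override
    public void pay(long amountCents) {
        // talk to the real payment system here
    }
}
```

When module A starts without module B on the classpath, the context fails because no PaymentGateway bean exists. Workarounds like @Autowired(required = false) or a default bean exist, but which approach best fits Clean Architecture is exactly the question.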
I have a question about which solution is better for sharing utilities between Maven modules.
The problem is that we have two Maven modules for two API-type modules, and another "shared" module with logic used by those two modules.
In this "shared" module there are also utility classes for tests, for example an abstract class MockMvcTest.java that is used in the other modules to avoid repeating code. All classes are shared directly: they are not declared within test packages and are not shared as tests. That is, in this "shared" module, test dependencies such as junit-vintage-engine cannot have test scope.
This "shared" module ends up containing packages with test utilities, security, mappers, etc.
Is it okay to mix "normal" and "test" logic in this "shared" module and share it with the other modules, or would it be better to create another module?
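For concreteness, the shared base class looks roughly like this (a sketch assuming Spring Boot's MockMvc support; details simplified):

```java
// shared/src/main/java/com/example/shared/test/MockMvcTest.java
// Lives in main sources, not test sources, so other modules can reuse it --
// which is exactly why its test dependencies cannot have test scope.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

@SpringBootTest
@AutoConfigureMockMvc
public abstract class MockMvcTest {

    @Autowired
    protected MockMvc mockMvc;  // subclasses use this to perform requests
}
```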
Thank you very much in advance
I'm working on a GWT project but am stuck in a situation where I have to create two different modules with separate event buses in the same project. One module should be a child of the other, i.e. there should be some way to invoke or load the child module from the main module's eventBus.
I tried following this link: https://code.google.com/archive/p/mvp4g/wikis/MultiModules.wiki#Before,_After, but I am getting some deferred binding exceptions.
Can someone please elaborate on the detailed procedure to get to a solution?
I've been an AEM developer for almost a year now. I know AEM uses the Declarative Services component framework to manage the life cycle of OSGi components.
Consider a scenario where I export a package from one bundle and import that package from another bundle: I could then create objects of the first bundle's classes inside the second bundle as well. It's an import-export contract in this case.
My question is: when should I use the component framework to manage the life cycle of my objects, and when should I handle it myself by creating them when required?
In an ideal design, you would NOT in fact be able to create objects from the exported package, because that package would contain only interfaces. This makes it a "pure" contract (API) export. If there are classes in there that you can directly instantiate, then they are implementation classes.
In general it is far better to export only pure APIs and to keep implementation classes hidden. There are two main reasons:
Implementation classes tend to have downstream dependencies. If you depend directly from implementation class to implementation class then you get a very large and fragile dependency graph... and eventually that graph will contain a cycle. In fact it's almost inevitable that it will. At that point, your application is not modular because you cannot deploy or change any part of it independently.
Pure interfaces can be analysed for compatibility between versions. As a consumer or a provider of an API, you know exactly which versions of the API you can support because the API does not contain executable code. However, if you have a dependency on an implementation class, then you never really know when it breaks compatibility, because the breakage could happen deep down in executable code that you can't easily analyse.
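To make that concrete, a "pure" API export contains only interfaces, with implementations kept in private packages (a sketch; names are invented):

```java
// Exported via the bundle manifest, e.g.: Export-Package: com.example.greeting.api
// com/example/greeting/api/Greeter.java -- interfaces only, no executable code.
public interface Greeter {
    String greet(String name);
}

// com/example/greeting/impl/FriendlyGreeter.java -- stays private to the bundle.
class FriendlyGreeter implements Greeter {
    @Override
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```

Consumers compile only against com.example.greeting.api, so the implementation can change or be replaced without touching them.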
If your objects are services then there's no question, they have to be OSGi components.
For other things, my first choice is OSGi components, unless they're trivial objects like data holders or something similar.
If an object requires configuration or refers to OSGi services then it's also clearly an OSGi component.
In general, it's best IMO to think in services and define your package exports as the minimum that allows other bundles to use a bundle's services, unless the bundle is clearly a reusable library like commons-io (to take a simple example).
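As a rough illustration with the standard DS annotations (the service interface and names are invented):

```java
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.LogService;

// Hypothetical API interface, exported from an api package.
interface GreetingService {
    String greet(String name);
}

// A DS component: the framework creates the instance, injects its
// dependencies, and publishes it as a GreetingService for other bundles.
@Component(service = GreetingService.class)
public class GreetingServiceImpl implements GreetingService {

    @Reference
    private LogService log;  // wired by the framework, never created by hand

    @Activate
    void activate() {
        log.log(LogService.LOG_INFO, "GreetingService activated");
    }

    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```

Anything that needs configuration or references another service, like the LogService above, is a natural component; trivial data holders can stay as ordinary objects created with new.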
I am using DirectoryModuleCatalog to load the modules.
What I am trying to implement is that all modules need to depend on one specific module. For example, I have a MainModule and several other modules, and I want all of the other modules to depend on MainModule.
We can do this by specifying the ModuleDependency attribute, but my requirement is that the dependency can be set through code even when a module doesn't have this attribute.
I have checked various forums and found that this can be achieved if I populate the ModuleCatalog directly from code. I could implement this by traversing the modules directory myself, but I'm not sure how that would impact performance if the number of modules grows (say 50+ or 100+).
Is it possible to set module dependencies if the catalog is populated using DirectoryModuleCatalog?
We are writing a new set of services and have decided to make them share a common interface... calling it a BaseService. The idea is that whenever anyone wants to develop a new service in our organization, they should just be able to extend and use this BaseService.
We have written a few other classes which also form part of this base jar; it does things like handling transactions and connecting to the database using Hibernate.
Right now all the services that extend the BaseService are part of the same project (Eclipse + Maven), and some of the services are dependent on each other, but because they are in the same project we don't have a problem with dependencies. However, we expect 40-50 services to be written that would extend BaseService and would also be interdependent.
I am worried that the base project would become huge, and that someone who needs just one service would have to depend on my base jar containing all 50 services.
Is there a way that we can make some projects dynamically dependent on others?
Let's say I have a service A which depends on service B. When I build/compile service A, it should be able to realize that it has a dependency on service B and automatically use the service B jar.
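For illustration, the structure looks roughly like this (all names hypothetical):

```java
// base jar: common plumbing every service extends.
public abstract class BaseService {
    // e.g. transaction handling, Hibernate session management, etc.
    protected void beginTransaction() { /* ... */ }
    protected void commitTransaction() { /* ... */ }
}

// service-b jar: an independent service.
class ServiceB extends BaseService {
    String lookup(String key) { return key; }  // stand-in for real logic
}

// service-a jar: depends on service-b, and that dependency should be
// resolved automatically at build time rather than wired up by hand.
class ServiceA extends BaseService {
    private final ServiceB serviceB = new ServiceB();

    String process(String key) {
        return serviceB.lookup(key);
    }
}
```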
I have heard of OSGi. Will it solve my problem, or is there a way I can do it with Maven, or is there a simpler solution?
Sorry about the long post!
Thanks in advance for your suggestions
It doesn't make any sense to "dynamically" manage project dependencies, since build dependencies are by definition not dynamic.
Your question, at least for the moment, seems to be entirely about how to build your code rather than about how to run it. If you are interested in creating a runtime system in which new service implementations can be dynamically added then OSGi may be the solution to look at. An extra advantage here would be that you could enforce the separation of API from implementation, and prevent the implementing services from invalidly depending on parts of your core module that you do not want them to have visibility of.
You could also use OSGi to manage evolution of your core service API through versioning; for example how do you communicate the fact that a non-breaking change has been made to the API versus a breaking change etc.
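For instance, OSGi's semantic versioning expresses exactly this at the package level (a sketch using the standard org.osgi.annotation.versioning annotation; the package name is invented):

```java
// package-info.java of the exported API package.
// A minor bump (1.0.0 -> 1.1.0) signals a non-breaking change to consumers;
// a major bump (1.1.0 -> 2.0.0) signals a breaking one.
@org.osgi.annotation.versioning.Version("1.0.0")
package com.example.service.api;
```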
I would say there are two options, if I understand your question correctly. First: you have already defined an interface (in the Java sense) and now you have different implementations of it. The simple solution for Maven would be to have a module called, for example, service-api, which is released and can then be used by others as a dependency. On their side they simply implement the interface, and there is no problem with the dependencies. If you are talking more about OSGi, then you should take a look at maven-tycho.
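In code terms, the split would look something like this (hypothetical names):

```java
// service-api module: contains only the interface; released once and
// depended on by every implementor.
public interface UserService {
    String findUserName(long id);
}

// an implementing module: depends only on the service-api artifact.
class DatabaseUserService implements UserService {
    @Override
    public String findUserName(long id) {
        return "user-" + id;  // stand-in for a real lookup
    }
}
```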