I'm developing a file system which encapsulates access to an Amazon S3 bucket using the fuse library and jnr-fuse (https://github.com/SerCeMan/jnr-fuse) as a java binding.
I have a first version working and am currently doing some code-cleanup and refactoring work, trying to get everything into a proper multi-layered architecture.
So far I have roughly the following:
Frontend: This is the actual implementation of the FuseFileSystem interface from jnr-fuse. It has some dependencies on jnr (native) types, and its methods are the Java equivalents of fuse's C functions.
Service Layer: One interface that has "non-native-dependent" versions of all the file-system methods from the frontend layer, but no dependencies on jnr or fuse whatsoever. The idea is that this could be used in other contexts as well, e.g. as the core component of an implementation of the java.nio.FileSystem API for S3, or any other scenario where someone needs an API that makes S3 accessible in a "filesystem-ish" fashion but doesn't want to go through fuse and therefore doesn't want all the jnr dependencies.
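To make that a bit more concrete, the service-layer contract could look roughly like this (just a sketch; the interface name S3FileSystemService, the FileMetadata value type and the method signatures are made up for illustration, not taken from my actual code):

```java
import java.util.List;

// Hypothetical metadata DTO, only here so the sketch is self-contained.
interface FileMetadata {
    long size();
    boolean isDirectory();
}

// Hypothetical sketch of the jnr/fuse-free service layer contract.
// Only plain Java types appear in the signatures, so the same interface
// could back a java.nio.FileSystem implementation as well as the fuse frontend.
public interface S3FileSystemService {

    /** Lists the entry names directly below the given path. */
    List<String> readDirectory(String path);

    /** Returns metadata (size, type, ...) for a path. */
    FileMetadata getMetadata(String path);

    /** Reads up to length bytes starting at offset into the buffer; returns the number of bytes read. */
    int read(String path, byte[] buffer, long offset, int length);

    /** Writes the buffer contents at the given offset; returns the number of bytes written. */
    int write(String path, byte[] buffer, long offset, int length);

    void create(String path);

    void delete(String path);
}
```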
Where I'm currently struggling is the persistence layer: as all communication with S3 actually happens via HTTP, I'm doing a fair amount of caching to reduce traffic and increase performance.
The question is where that caching best fits.
Obviously the actual DAOs should not be polluted with any kind of caching/locking logic - they should only handle the actual access to the data (i.e. doing the HTTP calls against S3).
On the other hand, the service layer shouldn't really be concerned with caching either (in case the persistence layer, and with it the caching requirements, changes), so I was thinking of doing one of the following:
Use a "doubled" persistence layer: Each DAO is implemented twice. One version holds a cache and serves data out of it; if an object is not in the cache, it delegates to the second DAO, which actually fetches the object (and then adds it to the cache).
Introduce a separate "cache" layer with slightly different interfaces from the actual persistence layer; it handles all caching requirements and delegates to the persistence layer as necessary.
Version 1 would be the cleaner one from the service-layer point of view - there wouldn't be any difference between using a cache and not using it because all calls to the persistence layer would go against the same interface. On the other hand it would also transfer all the logic concerning the state or "lifecycle" of a file (open -> read/write -> close) to the persistence layer.
Version 2 would manage the lifecycle of the file inside the "cache" layer, which I think would make the whole thing easier to understand for anyone new to the code. On the other hand it also assumes that there will always be a cache layer (which is probably true).
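For reference, option 1 is essentially the decorator pattern applied to the DAO interface. A minimal sketch (the names ObjectDao, S3ObjectDao and CachingObjectDao are invented for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The plain persistence contract the service layer depends on.
interface ObjectDao {
    byte[] getObject(String key);
}

// "Real" DAO: only knows how to talk to S3 over HTTP.
class S3ObjectDao implements ObjectDao {
    @Override
    public byte[] getObject(String key) {
        // ... perform the actual HTTP GET against S3 ...
        return new byte[0];
    }
}

// Caching DAO: same interface, holds a cache, delegates on a miss.
class CachingObjectDao implements ObjectDao {
    private final ObjectDao delegate;
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    CachingObjectDao(ObjectDao delegate) {
        this.delegate = delegate;
    }

    @Override
    public byte[] getObject(String key) {
        return cache.computeIfAbsent(key, delegate::getObject);
    }
}
```

The service layer only ever sees ObjectDao, so wiring the cache in or out is purely a composition decision.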
Are there any other pros and cons to the above approaches from a design point of view or is there any pattern out there which addresses this kind of problem?
Right now I'd rather go with option 2 but it'd be really interesting to hear some opinions.
Why are you opposed to caching in the DAO? This has always been the perfect place for me to cache. It is a data access concern, and thus goes into the data access layer. A couple of times I've used various AOP implementations for convenience, but 90% of the time I'm implementing caching logic inside the DAO.
The cache itself does not live in the DAO; it usually sits behind its own interface, so I can swap between implementations (in-memory, on-disk, etc.).
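As a rough sketch of what I mean (the names here are purely illustrative):

```java
// The cache is its own abstraction, so implementations can be swapped
// (in-memory, on-disk, a distributed cache, ...).
interface ObjectCache {
    byte[] get(String key);
    void put(String key, byte[] value);
}

// The DAO owns the caching logic but not the cache implementation.
class S3ObjectDao {
    private final ObjectCache cache;

    S3ObjectDao(ObjectCache cache) {
        this.cache = cache;
    }

    byte[] getObject(String key) {
        byte[] cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        byte[] fetched = fetchFromS3(key); // the actual HTTP call
        cache.put(key, fetched);
        return fetched;
    }

    private byte[] fetchFromS3(String key) {
        // ... HTTP GET against S3 ...
        return new byte[0];
    }
}
```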
I have also had some luck using the Apache HTTP client's own built-in caching. It allows you to respect HTTP cache semantics, or override them with custom logic.
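For example, with Apache HttpClient 4.x and its httpclient-cache module the caching can be configured on the client itself (the factory class name and the cache limits below are illustrative):

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.cache.CacheConfig;
import org.apache.http.impl.client.cache.CachingHttpClients;

public class CachingHttpClientFactory {

    public static CloseableHttpClient create() {
        CacheConfig cacheConfig = CacheConfig.custom()
                .setMaxCacheEntries(1000)          // illustrative limits
                .setMaxObjectSize(8 * 1024 * 1024) // cache objects up to 8 MB
                .build();
        // The returned client honours HTTP cache semantics (Cache-Control, ETag, ...)
        // transparently for every request made through it.
        return CachingHttpClients.custom()
                .setCacheConfig(cacheConfig)
                .build();
    }
}
```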
Related
I have 3 microservices: one that serves requests from the UI, another that serves requests from public APIs, and a third that does some data processing and stores the data that the UI/public services publish to the Kafka topic.
I have written a common service and DAO jar for the services, as the data comes from a common data source.
If I don't have a common service/DAO, then a lot of code will be duplicated.
I now feel that this is causing coupling between the services.
Is it the right design?
Using a common DAO across microservices is right if it is making development faster and easier to understand for everyone, and wrong if it's not. You are right that this is creating some coupling between the services, but it's coupling that you could easily do away with if the DAOs for the services began to diverge. Since the final shared package will be inside each service's runtime, there would be zero issues introduced if one of the other services decided to stop using the DAO and use a different one.
That being said, you may have a larger coupling issue if all three services are using this DAO to connect to a shared database. If each is dependent on the same tables/schema, it makes it very hard for one service to diverge from the others and make independent schema changes without impacting the others.
I am using Spring Boot, and it's very easy to integrate Spring Cache with another cache component.
To cache data we can use the @Cacheable annotation, but we still need to configure the cache and add its cacheName to the CacheManager; without this step, we get an exception when accessing the method:
java.lang.IllegalArgumentException: Cannot find cache named 'xxxx' for Builder
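A simplified version of the setup that produces this (the service class and cache name are made up):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    // Fails with the IllegalArgumentException above if no cache named
    // "products" has been registered with the configured CacheManager.
    @Cacheable("products")
    public String findNameById(String id) {
        // ... expensive lookup ...
        return "product-" + id;
    }
}
```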
My question is: is it possible to disable the cache instead of raising the error if we do not configure the cacheName? I ask because Spring Cache provides a configuration property, spring.cache.cacheNames, in CacheProperties.
I'm not sure whether the condition attribute in @Cacheable works for this.
Any ideas are appreciated! Thanks in advance!
It really depends on your caching provider, and on the implementation of the CacheManager interface in particular. Since Spring's Cache Abstraction is just that, an "abstraction", it allows you to plug in different providers and backend data stores to support the caches required by your application (i.e. as determined by Spring's caching annotations or, alternatively, the JSR-107 JCache annotations).
For instance, if you were to use the Spring Framework's ConcurrentMapCacheManager implementation (not recommended for production except for really simple use cases) and did not declare your caches at configuration/initialization time (i.e. used the default, no-arg constructor), then the Caches are created lazily. However, if you do declare your Caches at configuration/initialization time (using the constructor that accepts cache names) and your application then uses a cache that was not explicitly declared (e.g. @Cacheable("NonExistingCache")), an exception is thrown: getCache(String) returns null and the CacheInterceptor's resolution logic throws an IllegalArgumentException because no Cache is available for the caching operation (you can trace this down from CacheInterceptor in the Spring source).
Currently there is no way to disable this check (i.e. the thrown exception) for non-existing caches. The best you can do is lazily create Caches, like the ConcurrentMapCacheManager implementation does. However, this heavily depends on your caching provider: some providers are more sophisticated than others, and creating a Cache on the fly (i.e. lazily) may be expensive or costly, so it is either not supported or not recommended.
Still, you can work around this limitation by wrapping the CacheManager implementation of your choice: delegate to the underlying implementation for "existing" Caches and handle "non-existing" Caches safely by treating them as a cache miss, using simple wrapper implementations of the core Spring CacheManager and Cache interfaces.
An integration test against your current configuration demonstrates the problem: assertions involving non-existing Caches fail with the IllegalArgumentException above. The same test run against the wrapped CacheManager described below shows caching effectively disabled for Caches not provided by the caching provider: the non-existing Caches can be accessed safely and simply never hold anything.
This is made possible by a wrapper delegate for CacheManager (which wraps and delegates to an existing caching provider, in this case just the ConcurrentMapCacheManager again, but it would work for any caching provider supported by the Spring Cache Abstraction) along with a NoOpNamedCache implementation of the Spring Cache interface. The no-op Cache instance could be a singleton reused for all non-existing Caches if you did not care about the name; keeping the name, however, gives you a degree of visibility into which "named" Caches are not backed by an actual Cache, since that will most likely have an impact on your services (i.e. service methods running without caching because the "named" cache does not exist).
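A minimal sketch of that wrapper pair, assuming a reasonably recent Spring Framework (4.3+); the class names are just for illustration:

```java
import java.util.Collection;
import java.util.concurrent.Callable;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;

// Delegates to the "real" caching provider and hands out a no-op Cache for any
// name the provider does not know, turning a missing cache into a permanent
// cache miss instead of an IllegalArgumentException.
class MissingCacheTolerantCacheManager implements CacheManager {

    private final CacheManager delegate;

    MissingCacheTolerantCacheManager(CacheManager delegate) {
        this.delegate = delegate;
    }

    @Override
    public Cache getCache(String name) {
        Cache cache = delegate.getCache(name);
        return (cache != null ? cache : new NoOpNamedCache(name));
    }

    @Override
    public Collection<String> getCacheNames() {
        return delegate.getCacheNames();
    }
}

// Keeps the requested name (useful for logging/visibility) but caches nothing.
class NoOpNamedCache implements Cache {

    private final String name;

    NoOpNamedCache(String name) {
        this.name = name;
    }

    @Override public String getName() { return name; }
    @Override public Object getNativeCache() { return null; }
    @Override public ValueWrapper get(Object key) { return null; }
    @Override public <T> T get(Object key, Class<T> type) { return null; }

    @Override
    public <T> T get(Object key, Callable<T> valueLoader) {
        try {
            return valueLoader.call(); // always a miss: just compute the value
        } catch (Exception ex) {
            throw new ValueRetrievalException(key, valueLoader, ex);
        }
    }

    @Override public void put(Object key, Object value) { }
    @Override public ValueWrapper putIfAbsent(Object key, Object value) { return null; }
    @Override public void evict(Object key) { }
    @Override public void clear() { }
}
```

Register the wrapper as your CacheManager bean (wrapping whatever provider you actually use) and the caching annotations keep working for declared caches, while undeclared ones silently no-op.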
Anyway, this may not be exactly what you want, and I would even caution you to take special care if you push this to production since (I'd argue) an application really ought to fail fast for missing Caches, but it does achieve what you asked for.
Clearly, it is configurable, and you could make it conditional based on the cache name or other criteria; if you really don't care about, or don't want, caching on certain service methods in certain contexts, this approach is flexible enough to give you that choice.
Hope this gives you some ideas.
Our project is designed in EJB 2.0.
We are not using any kind of EJB persistence methods in the BMP entity beans. In the session beans we get a reference to the EntityHome object using a getEJBXXXXHome() method and then call home.findByPrimaryKey("") to obtain the EJB reference. We then call the actual methods for CRUD operations, and inside those CRUD methods our people have used plain JDBC API calls.
Now we are migrating to EJB3. As part of the migration from EJB 2.0 to EJB3 I am converting all my BMP entity beans to normal Java classes, i.e. there are no more entity beans. Whatever pool the EJB container maintained for the entity beans before won't be there now. It works normally when I test a single transaction on my local machine.
My concern is: will this affect the performance for multiple threads in production?
After changing the code, every call now creates one of these (formerly entity bean) objects. If 60k calls are made in just one hour, will that affect my server? How was this handled previously in EJB 2.0? Is there any way to handle it in the changed code (i.e. for normal Java classes, as there is no entity bean concept any more)?
Generally speaking, the overhead of object creation/collection is going to be lower than the overhead of whatever the EJB container was doing for your entities previously. I suspect a larger concern than object creation overhead is round-trips to the database. Depending on your EJB container configuration, it's likely the container was optimizing the JDBC SQL and possibly caching the retrieved data (unrelated to object caching). You should likely design your application to minimize calls to the database and ensure you don't execute unnecessary queries.
Ultimately, I suspect only you are going to be able to assess the performance of your application on your application server on your hardware. I recommend following good programming practices to avoid egregious overhead, profile the result, and optimize from there rather than worrying about the performance up-front.
Folks,
Apologies if this has been covered in another thread, but I have searched DDD and MVC articles and have not found a straightforward answer.
I am hoping to apply a DDD approach to the architecture of my MVC projects. Please correct me where I am wrong.
All MVC controller actions that involve hitting the domain model will initially hit an application service layer.
The application service layer here acts as a facade between presentation and the domain.
Any requests from the application service layer that clearly involve discrete domain aggregates will perform fetch or modify operations on aggregate roots using repositories. Each aggregate root will have its own repository.
So the application service layer must be injected with any/all repositories required by the domain.
Where an operation may involve multiple aggregates or requires logic that does not fit neatly into one aggregate, the application service will call a domain service to carry out operations across aggregates.
This does not seem right to me.
My confusion is that, from a DDD perspective, I'm not sure whether, for example, aggregate roots should perform their own persistence (i.e. the aggregate gets injected with a repository and then persists/fetches itself), or whether, as above, the application service layer uses repositories to act on or fetch aggregates.
Also, if the application service layer is injected with all repositories, does the domain service that the application service layer calls also need repositories injected?
I'm keeping CQRS out of this at this point. I want to get the layering and the relationship between services and aggregates sorted out first.
Thanks for any advice.
All MVC controller actions that involve hitting the domain model will initially hit an application service layer. The application service layer here acts as a facade between presentation and the domain.
There's debate over that, but I would consider carefully whether that additional layer is needed or not. It adds a lot of boilerplate code and degrades maintainability - as someone pointed out recently, when you separate things that change for the same reasons (i.e. your service methods and the corresponding domain methods), you have to make changes in many different places in the system.
On the other hand, you might need that service layer to map your domain objects to DTOs; then again, that could be done directly in the controller, and nothing forces you to use DTOs in the presentation layer.
My confusion is that, from a DDD perspective, I'm not sure whether, for example, aggregate roots should perform their own persistence (i.e. the aggregate gets injected with a repository and then persists/fetches itself), or whether, as above, the application service layer uses repositories to act on or fetch aggregates.
It's usually considered bad practice to have aggregate roots manage their own persistence because it breaks persistence ignorance and violates the Single Responsibility Principle. If you do that, your aggregate root class now has two reasons to change and two reasons to break, and it is less maintainable.
You should instead delegate the responsibility of saving the aggregate root in its repository to an object that will be aware of the application execution context (for instance, a Controller or an object in the Application layer).
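For instance, the flow in an application service might look like this (a sketch; the Shipment aggregate and its repository are invented names):

```java
// Application service: orchestration only, no business rules of its own.
public class ShipmentApplicationService {

    private final ShipmentRepository shipmentRepository;

    public ShipmentApplicationService(ShipmentRepository shipmentRepository) {
        this.shipmentRepository = shipmentRepository;
    }

    public void dispatchShipment(String shipmentId) {
        // fetch the aggregate through its repository
        Shipment shipment = shipmentRepository.findById(shipmentId);

        // invoke domain behaviour on the aggregate root
        shipment.dispatch();

        // hand it back to the repository for persistence
        shipmentRepository.save(shipment);
    }
}

interface ShipmentRepository {
    Shipment findById(String shipmentId);
    void save(Shipment shipment);
}

// Aggregate root: pure domain logic, no persistence code.
class Shipment {
    private boolean dispatched;

    void dispatch() {
        if (dispatched) {
            throw new IllegalStateException("Shipment already dispatched");
        }
        dispatched = true;
    }
}
```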
Also, if the application service layer is injected with all repositories, does the domain service that the application service layer calls also need repositories injected?
Yes, I think that pretty much makes sense, especially if the domain service relies heavily on the repository.
What are the advantages and disadvantages of the Session Façade Core J2EE Pattern?
What are the assumptions behind it?
Are these assumptions valid in a particular environment?
Session Facade is a fantastic pattern - it is really a specific version of the Business Facade pattern. The idea is to tie up business functionality into discrete bundles - such as TransferMoney(), Withdraw(), Deposit() - so that your UI code accesses things in terms of business operations instead of low-level data access or other details it shouldn't have to be concerned with.
Specifically with the Session Facade, you use a session EJB to act as the business facade - which is nice because then you can take advantage of all the J2EE services (authentication/authorization, transactions, etc.).
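In EJB3-style syntax such a facade might look roughly like this (a sketch; the bean name, method names and the money-as-cents representation are invented):

```java
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// Stateless session bean acting as the business facade: each method is a
// complete business operation running in a container-managed transaction.
@Stateless
public class AccountFacade {

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void transferMoney(String fromAccount, String toAccount, long amountInCents) {
        withdraw(fromAccount, amountInCents);
        deposit(toAccount, amountInCents);
    }

    public void withdraw(String account, long amountInCents) {
        // ... delegate to DAOs / entities ...
    }

    public void deposit(String account, long amountInCents) {
        // ... delegate to DAOs / entities ...
    }
}
```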
Hope that helps...
The main advantage of the Session Facade pattern is that you can divide up a J2EE application into logical groups by business functionality. A Session Facade will be called by a POJO from the UI (i.e. a Business Delegate), and have references to appropriate Data Access Objects. E.g. a PersonSessionFacade would be called by the PersonBusinessDelegate and then it could call the PersonDAO. The methods on the PersonSessionFacade will, at the very least, follow the CRUD pattern (Create, Retrieve, Update and Delete).
Typically, most Session Facades are implemented as stateless session EJBs. Or, if you're in Spring land using AOP for transactions, you can create a service POJO whose methods are the join points for your transaction manager.
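The Spring variant of the PersonSessionFacade above might be sketched like this (the Person and PersonDao types are placeholders):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Placeholder domain type and DAO, just to make the facade's role visible.
class Person { /* id, name, ... */ }

interface PersonDao {
    Person insert(Person person);
    Person findById(long id);
    Person update(Person person);
    void delete(long id);
}

// Plain service POJO playing the SessionFacade role; Spring's AOP-based
// transaction management makes every public method a transaction boundary,
// so no session EJB is required.
@Service
@Transactional
public class PersonService {

    private final PersonDao personDao;

    public PersonService(PersonDao personDao) {
        this.personDao = personDao;
    }

    public Person create(Person person) { return personDao.insert(person); }
    public Person retrieve(long id)     { return personDao.findById(id); }
    public Person update(Person person) { return personDao.update(person); }
    public void delete(long id)         { personDao.delete(id); }
}
```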
Another advantage of the SessionFacade pattern is that any J2EE developer with a modicum of experience will immediately understand you.
Disadvantages of the SessionFacade pattern: it assumes a specific enterprise architecture that is constrained by the limits of the J2EE 1.4 specification (see Rod Johnson's books for these criticisms). The most damaging disadvantage is that it is more complicated than necessary. In most enterprise web applications, you'll need a servlet container, and most of the stress in a web application will be at the tier that handles HttpRequests or database access. Consequently, it doesn't seem worthwhile to deploy the servlet container in a separate process space from the EJB container. I.e. remote calls to EJBs create more pain than gain.
Rod Johnson claims that the main reason you'd want to use a Session Facade is if you're doing container-managed transactions - which aren't necessary with more modern frameworks (like Spring).
He says that if you have business logic, put it in the POJO. (Which I agree with - I think it's a more object-oriented approach than implementing a session EJB.)
http://forum.springframework.org/showthread.php?t=18155
Happy to hear contrasting arguments.
It seems that whenever you talk about anything J2EE-related, there is always a whole bunch of assumptions behind the scenes, which people make one way or the other, and which then leads to confusion. (I probably could have made the question clearer too.)
Assuming (a) that we want to use container-managed transactions in a strict sense through the EJB specification, then:
Session facades are a good idea, because they abstract away the low-level database transactions and provide higher-level application transaction management.
Assuming (b) that you mean the general architectural concept of the session façade, then:
Decoupling services and consumers and providing a friendly interface over the top of this is a good idea. Computer science has solved lots of problems by 'adding an additional layer of indirection'.
Rod Johnson writes "SLSBs with remote interfaces provide a very good solution for distributed applications built over RMI. However, this is a minority requirement. Experience has shown that we don't want to use distributed architecture unless forced to by requirements. We can still service remote clients if necessary by implementing a remoting façade on top of a good co-located object model." (Johnson, R "J2EE Development without EJB" p119.)
Assuming (c) that you consider the EJB specification (and in particular the session façade component) to be a blight on the landscape of good design, then:
Rod Johnson writes
"In general, there are not many reasons you would use a local SLSB at all in a Spring application, as Spring provides more capable declarative transaction management than EJB, and CMT is normally the main motivation for using local SLSBs. So you might not need th EJB layer at all. " http://forum.springframework.org/showthread.php?t=18155
In an environment where performance and scalability of the web server are the primary concerns, and cost is an issue, the session facade architecture looks less attractive - it can be simpler to talk directly to the database (although this is more about tiering).