Not seeing people use interfaces and @Transactional in their Spring applications

I have been seeing this trend recently with some Spring Framework developers: when they create their service classes, they don't define interfaces for them to implement, and they don't use @Transactional in services that clearly need transactions.
Is there a good reason for this?
And one more question: Spring Boot has Open Session in View enabled by default. Why? I always thought that was really bad design, letting developers be lazy, allowing N+1 queries to happen all the time, and slowing down performance. If you know the UI is going to need the data, why not fetch it in the service/repository layer with the fewest possible queries instead?

It's true that building services as an interface plus an Impl class has been questioned lately. There is a classic post about this: https://octoperf.com/blog/2016/10/27/impl-classes-are-evil/ .
About transactions: for read operations, the readOnly = true attribute should be included in the @Transactional annotation to make sure the read is handled properly. For write operations, transaction management should of course be done with care, and the @Transactional annotation is there to make sure it is. Not using it is certainly a code smell: a transaction will happen nonetheless, you just won't have any control over it.
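As an illustration, a minimal sketch of that split in a Spring service (the OrderService, OrderRepository, and Order names are invented, and OrderRepository is assumed to be a Spring Data repository):

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderRepository orderRepository; // assumed Spring Data repository

    public OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    // Read-only transaction: lets the persistence provider skip dirty checking and flushing.
    @Transactional(readOnly = true)
    public List<Order> findAll() {
        return orderRepository.findAll();
    }

    // Read-write transaction: commits on success, rolls back on a runtime exception.
    @Transactional
    public Order place(Order order) {
        return orderRepository.save(order);
    }
}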
Open Session in View is a code smell too, and it should only be used in learning environments or short-lived experiments. Batching, among other options, can and should be used to avoid the N+1 problem.
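One hedged sketch of avoiding N+1 without Open Session in View is to fetch the association in the repository query itself (the Author and Book entities and the repository are invented for the example; Spring Data JPA is assumed):

import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

// Loads each author together with its books in a single query,
// instead of one query for the authors plus one extra query per author.
public interface AuthorRepository extends JpaRepository<Author, Long> {

    @Query("select distinct a from Author a left join fetch a.books")
    List<Author> findAllWithBooks();
}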

Related

How expensive are transactions in Grails?

I'm looking at performance issues with a Grails application, and the suggestion is to remove the transactions from the services.
Is there a way that I can measure the change in the service?
Is there a place that has data on how expensive transactions are? [Time and resource-wise]
If someone told you that removing transactions from your services was a good way to help performance, you should not listen to any future advice from that person. You should look at the time spent in transactions and determine what the real overhead is, and find methods and entire services that are run in transactions but don't need to be and fix those to be nontransactional. But removing all transactions would be irresponsible.
You would be intentionally adding sporadic errors in method return values and making your data inconsistent, and this will get worse when you have a lot of traffic. A somewhat faster but buggy app or web site is not going to be popular, and if this doesn't help performance (or not much) then you still have to do the real work of finding the bottlenecks, missing indexes, and other things that are genuinely causing problems.
I would remove all @Transactional annotations and database writes from all controllers, though; not for performance reasons, but to keep the application tiers sensible and not polluted with unrelated code and logic.
If you find one or more service methods that don't require transactions, switch to annotating each transactional method as needed but omit the annotation at class scope so un-annotated methods inherit nothing and aren't transactional. You could also move those methods to non-transactional services.
Note that services are only non-transactional if there are no @Transactional annotations and there is a transactional property disabling the feature:
static transactional = false
If you don't have that property and have no annotations, the service may look non-transactional, but transactional defaults to true when it isn't specified.
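In Spring terms, annotating only the methods that need it looks like the sketch below (class and method names are invented; in a Grails service the same idea applies with the service's own annotations):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// No class-level @Transactional: only the annotated method runs in a transaction.
@Service
public class ReportService {

    @Transactional
    public void archiveReports() {
        // transactional write work goes here
    }

    public String describeReports() {
        // plain method: no transaction is started for this call
        return "summary";
    }
}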
There's also something else that can help a lot (and already does). The dataSource bean is actually a proxy of a proxy. One proxy returns the connection from the pool that's being used by an open Hibernate session or transaction, so you can see uncommitted data and run your queries and updates on the same connection. The other is more relevant to your question: org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy, which has been in Spring for years but has only been used in Grails since 2.3. It helps with methods that start or participate in a transaction but do no database work.
For a single method call that unnecessarily starts and commits an 'empty' transaction, the overhead includes getting the pooled connection, setting autocommit to false, setting the transaction isolation level, and so on. Each of these is a small cost, but they add up. The class works by giving you a proxied connection that caches these method calls and only gets a real connection, invoking those methods on it, when a query is actually run. If there are no queries and the only calls are those transaction-setup methods, there's essentially no cost at all. You shouldn't rely on this, and you should be intentional with your use of @Transactional annotations, but if you miss one, this proxy will help avoid unnecessary work.
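For reference, a minimal sketch of wiring that proxy yourself in plain Spring; Grails 2.3+ does this for you, and the HikariCP pool and connection settings here are only placeholders:

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy;

@Configuration
public class DataSourceConfig {

    // The real connection pool (HikariCP here, purely as an example).
    @Bean
    public DataSource targetDataSource() {
        HikariDataSource ds = new HikariDataSource();
        ds.setJdbcUrl("jdbc:postgresql://localhost:5432/app"); // placeholder URL
        ds.setUsername("app");
        ds.setPassword("secret");
        return ds;
    }

    // Wrap the pool so a physical connection is only fetched (and autocommit/isolation
    // only set) when a statement actually runs, not when a transaction merely starts.
    @Bean
    @Primary
    public DataSource dataSource() {
        return new LazyConnectionDataSourceProxy(targetDataSource());
    }
}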

Usecase of Spring DAO

I'm wondering what the typical use case of Spring DAO is, where we can easily switch between different persistence frameworks.
Apart from abstracting away the boilerplate code (for JDBC, Hibernate, and the like), why would any application want to change its ORM framework so frequently?
Using the DAO pattern with a distinct DAO interface enables you to mock the DAO implementation. This improves the testability of your code, as you can write tests that do not need database access.
It is not only about being able to switch ORM frameworks frequently, but also about reducing the effort if you are ever forced to change the ORM.
Another reason is that you might have different data sources, such as a database, a web service, or the file system. In that case you are not abstracting the ORM but the persistence mechanism in general.
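A minimal sketch of that idea (the PersonDao names and the Person entity are invented): callers only see the interface, so the JPA implementation can be mocked in tests or swapped for another persistence mechanism.

import java.util.Optional;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;

// Callers depend only on this interface.
public interface PersonDao {
    Optional<Person> findById(long id);
    void save(Person person);
}

// One possible implementation; a mock or an in-memory version can replace it in tests.
@Repository
class JpaPersonDao implements PersonDao {

    @PersistenceContext
    private EntityManager entityManager;

    @Override
    public Optional<Person> findById(long id) {
        return Optional.ofNullable(entityManager.find(Person.class, id));
    }

    @Override
    public void save(Person person) {
        entityManager.persist(person);
    }
}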
I think the really important idea behind DAOs is that you have just one spot where all the data access code for a particular entity is located. That makes testing and refactoring your persistence layer easier and your code more readable.
It also helps newcomers: think of a new developer on your team who has to implement a feature. If she needs to access the database, she knows to look in the DAO for the data access methods.
If you scatter data access code across different services, the risk is pretty high that someone will produce duplicated code.

Example use-cases for using Dependency Injection with the Play Framework

I am a big fan of Dependency Injection and the Play Framework, but am having trouble seeing how the two could be exploited together.
There are modules for Spring and Guice, but the way that Play works makes it hard for me to see how DI could be beneficial beyond some quite simple cases.
A good example of this is that Play expects JPA work to be done by static methods associated with the entity in question:
@Entity
public class Person extends Model {

    public static void delete(long id) {
        Person person = em().find(Person.class, id);
        em().remove(person);
    }

    // etc.
}
So there is no need for a PersonManager to be injected into controllers in the way it might for a Spring J2EE application. Instead a controller just calls Person.delete(x).
Obviously, DI is beneficial when there are interfaces with external systems, as the concrete implementation can be mocked for testing etc., but I don't see much benefit for a self-contained Play application.
Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?
I believe from this sentence you wrote:
"Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?"
that before answering the DI question I should note something: transactions are managed automatically by Play. If you check the model documentation you will see that a transaction is automatically created at the beginning of a request, and committed at the end. You can roll it back via JPA or it will be rolled back if an exception is raised.
I mention this because from the wording of your sentence I'm not sure if you are aware of this.
Now, on DI itself, in my (not-so-extensive) experience with DI, I've seen it used mainly to:
Load the ORM (Hibernate) factory/manager
Load Service classes/DAOs into another class to work with them
Sure, there are more scenarios, but these probably cover most of the real usage. Now:
The first one is irrelevant to Play as you get access to your JPA object and transaction automatically
The second one is quite irrelevant too, as you mainly work with static methods in controllers. You may have some helper classes that need to be instantiated, and some may even belong to a hierarchy (a common interface), so DI would be beneficial there. But you could just as well create your own factory class and drop the DI jars entirely.
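For what it's worth, the hand-rolled factory mentioned above could be as small as this sketch (the Notifier interface and implementations are made up):

// A tiny hand-rolled factory instead of a DI container: callers code against the
// interface, and the factory decides which implementation to hand out.
public class NotifierFactory {

    public interface Notifier {
        void send(String message);
    }

    public static Notifier create(boolean useEmail) {
        return useEmail ? new EmailNotifier() : new LogNotifier();
    }

    static class EmailNotifier implements Notifier {
        public void send(String message) { /* send an email here */ }
    }

    static class LogNotifier implements Notifier {
        public void send(String message) { System.out.println(message); }
    }
}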
There is another matter to consider here: I'm not so sure about Guice, but Spring is not only DI; it also provides a lot of extra functionality that depends on the DI module. So maybe you don't want DI in Play as such, but you do want to take advantage of the Spring tools, and they will use DI, albeit indirectly (via XML configuration).
The problem, in my humble opinion, with Play!'s static approach is that it makes testing harder. Once you solve the HTTP-versus-object-orientation mismatch with static members and objects that carry the HTTP message data (request and response), you trade the loose coupling of your objects with the rest of your project classes for not having to create new instances on each request.
One good example of a different design is servlets: a servlet also extends a base class, but it approaches the problem with a single instance (by default; there are configurations that enable more instances).
I believe a mix of the two approaches would be better. Having a singleton of each controller would give the same characteristics as a fully static class and would allow dependency injection of some kinds of objects, but not objects with request or session scope, since then the controller would need to be created for every request. Moreover, it would improve testability by inverting control, allowing arbitrary injection points.
Dependencies would be injected by the container or by a test, probably using mocks for the heavy pieces that most likely have already been tested elsewhere.
In my view, this static model pushes developers away from testing controllers, because extending FunctionalTest starts the application server, so you pay the price of heavy objects like repositories, services, crawlers, HTTP clients, and so on. I don't want to wait for a lot of objects to be bootstrapped just to check whether some code was executed in the controller; tests should be quick and clear, so that developers love them as their programming assistant and guide.
DI is not the ultimate solution to use everywhere, and you shouldn't use DI just because you have it at hand. In Play you don't need DI to develop controllers, models, and so on, but sometimes it can be a nice design: IMO, you could use it if you have a service with a well-known interface that you would like to develop and test outside Play, and even test your Play project against just a dummy service so that it does NOT depend on the full service implementation. In that case DI is interesting: you plug the service loosely into Play. In fact, as far as I know, this is the original use case for DI.
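As a sketch of that use case with the Guice module (the ReportService, PdfReportService, and DummyReportService types are invented): bind the real implementation in normal wiring and a dummy one for tests, so the Play project never depends on the full implementation.

import com.google.inject.AbstractModule;

// Normal wiring: controllers only know the ReportService interface.
public class ProductionModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(ReportService.class).to(PdfReportService.class);
    }
}

// Test wiring: a dummy implementation keeps tests fast and self-contained.
class TestModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(ReportService.class).to(DummyReportService.class);
    }
}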
I just wrote a blog post about setting up a Play Framework application with Google Guice. http://geeks.aretotally.in/dependency-injection-with-play-framework-and-google-guice
I see some benefits, especially when a component of your application requires different behavior based on a certain context or something like that. But I definitely believe people should be selective about what goes into a DI context.
It shows again that you should only use dependency injection if you really get a benefit from it. If you have complex services it's useful, but in many cases it's not. Read the chapter about models in the Play documentation.
To give you an example of where you can use DI with Play: perhaps you have to run a complex calculation, or you generate a PDF with a report engine. There I think DI can be useful, especially for testing, and the Guice and Spring modules can help you.
Niels
As of a year and some change later, Play 2.1 now has support for dependency injection in controllers. Here's their demo project using Spring 3, which lays it out pretty clearly.
Edit: here's another example using Guice and Scala, if that's your poison.

AOP x IoC for caching

Do you prefer the clean approach of an AOP cache layer on top of your methods (any DAO or service method) OR do you prefer the total control approach of injecting a cache instance wherever you need?
I understand AOP gives you loose coupling and separation of concerns, but not so much flexibility, unless you are coding the method interceptors yourself.
I tend to like the IoC approach, because a cache instance can easily be mocked if you need to, and with an instance of the cache in hand you have total control and flexibility.
It is like logging: who actually uses AOP for application-wide logging?
This is really a question about whether we should use "real" AOP at all (by "real" I mean quantification and obliviousness, so that you can enable aspects globally and 100% transparently to the system). My opinion is to avoid such 100% transparent solutions whenever you can. If you're still at the development stage, it's better to design your system in a way that doesn't force you into tricks like AOP weaving at the intermediate-code level.
On the other hand, it is quite tedious to write the caching concern for every component/interface that needs it, no matter how you do it. The most obvious way that comes to mind is to have a caching Decorator class for each cached class: the same thing done many times.
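For concreteness, one such decorator might look like this sketch (the PriceService interface is invented):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface PriceService {
    double priceOf(String productId);
}

// The hand-written decorator: one of these is needed for every cached interface.
public class CachingPriceService implements PriceService {

    private final PriceService delegate;
    private final Map<String, Double> cache = new ConcurrentHashMap<>();

    public CachingPriceService(PriceService delegate) {
        this.delegate = delegate;
    }

    @Override
    public double priceOf(String productId) {
        // Only call the delegate on a cache miss.
        return cache.computeIfAbsent(productId, delegate::priceOf);
    }
}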
So the path I would try to follow is a limited form of AOP done with an IoC extension based on dynamic proxying. When an object is fetched from the IoC container, the container checks its configuration to see whether caching should be applied for the given type and, if so, creates an interface-based dynamic proxy with caching. This approach doesn't force you to write a lot of similar code, and it is less confusing than "real" AOP because it stays visible in the source and configuration. There are implementations like this, but you haven't specified a language, so I can't point to one.
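A rough Java sketch of that proxy-based variant using plain JDK dynamic proxies (in Spring this would more likely live in a BeanPostProcessor or an interceptor; the wrap helper here is hypothetical). An IoC extension could call wrap for every type its configuration marks as cacheable.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic caching proxy: wraps any interface, so no per-class decorator is needed.
public final class CachingProxy {

    @SuppressWarnings("unchecked")
    public static <T> T wrap(Class<T> iface, T target) {
        Map<List<Object>, Object> cache = new ConcurrentHashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            // Cache key: method name plus argument values.
            List<Object> key = Arrays.asList(method.getName(),
                    args == null ? List.of() : Arrays.asList(args));
            return cache.computeIfAbsent(key, k -> call(method, target, args));
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] {iface}, handler);
    }

    private static Object call(Method method, Object target, Object[] args) {
        try {
            return method.invoke(target, args);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}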

Inversion of Control, Spring Framework - system of global instances

Is inversion of control essentially just retrieving a set of already-instantiated objects? In theory, at least. I guess IoC frameworks like Spring add a lot more functionality in the details, but in theory it seems like an IoC container operates as a collection of instantiated beans (in the Java world) that you then get access to. Almost like you would with a collection of singleton objects?
It's partly getting hold of singletons in practice, yes. Some beans will be instantiated multiple times, whenever they're needed (depending on the configuration), but often you can make do with single instances - particularly if they're stateless once configured. I like the idea of data flowing "through" an application's plumbing after it's been properly hooked up.
The benefit is that the "singleton-ness" is only present in the configuration, not in the code, which makes the system more testable and flexible. The difference in terms of how you view (and expose) the dependencies within your app is huge.
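A small illustration of what "only in the configuration" means (the Greeter class is invented): the class itself carries no singleton plumbing; the scope is decided where the bean is declared.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
public class AppConfig {

    // A plain class with no singleton plumbing of its own.
    public static class Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // Default scope: the container hands out one shared instance.
    @Bean
    public Greeter greeter() {
        return new Greeter();
    }

    // Same class, different scope: a fresh instance on every injection or lookup.
    @Bean
    @Scope("prototype")
    public Greeter throwawayGreeter() {
        return new Greeter();
    }
}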
Although the answer has already been accepted, I will elaborate a little more:
Initially, Spring was mostly about singleton management. With the introduction of custom scopes came the web-specific scopes and the ability to create your own. Leaning on AOP features, this also allows you to "stay singleton" for as long as possible, because it uses a technique known as scope proxying. That lets you introduce a scoped object right in the middle of a chain of singletons, a feature you would otherwise often reach for thread-locals to get.
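As a sketch of that scope proxying (bean and field names are invented): a request-scoped bean can be injected into a singleton, and each call goes through a proxy to the instance belonging to the current request.

import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.stereotype.Component;
import org.springframework.web.context.WebApplicationContext;

// Request-scoped bean, exposed to singletons through a scoped proxy.
@Component
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class RequestContext {

    private String userId;

    public String getUserId() { return userId; }

    public void setUserId(String userId) { this.userId = userId; }
}

// A singleton can hold a reference to the proxy; each method call is routed to the
// RequestContext instance that belongs to the current HTTP request.
@Component
class AuditService {

    private final RequestContext requestContext;

    AuditService(RequestContext requestContext) {
        this.requestContext = requestContext;
    }

    void record(String action) {
        System.out.println(requestContext.getUserId() + " did " + action);
    }
}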
So I'd say it's about tight control of instance creation, to make sure everything is done only the required number of times, and preferably only the construction that is necessary is done for each request. Singleton management was the old days.
