Testability in AOP - spring

I'm totally new to the AOP world.
After reading some books (e.g., Spring in Action), I have a basic question about AOP: how can we tell whether AOP is still working in an existing system, especially when that system keeps changing?
In the real world, especially while a program is still under implementation, the design changes all the time. It might be a large structural change, or it might be a refactoring; in short, the program is always changing.
However, the way we apply AOP to our program depends on implementation details (e.g., public/private method names). If an implementation detail changes, the AOP may silently stop applying. What's worse, on a big team someone may change an implementation detail that the AOP depends on without being aware of it at all. So the big question is: how can we know that the AOP has stopped working in such a case? Is there any compile-time guarantee, or at least a JUnit test guarantee?
Thanks a lot for any answers.

This can be tested with JUnit and spring-test: write a JUnit test class that gets injected with an object from the layer at which your aspect is applied, and mock the layer beneath it.
Then call the object to which the aspect is applied and assert that the results are as expected.
Here is an example of how to test whether method security (applied via aspects) is working.
The test confirms that an AccessDeniedException is thrown as expected, proving that the aspect is working.
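A minimal sketch of what such a test can look like, assuming a hypothetical test-context.xml that defines a SecureService bean whose methods are protected by a method-security aspect (the names are made up for this example):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.access.AccessDeniedException;
import org.springframework.security.authentication.TestingAuthenticationToken;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
public class MethodSecurityAspectTest {

    @Autowired
    private SecureService secureService; // the bean the security aspect is applied to

    @Test(expected = AccessDeniedException.class)
    public void deniesCallerWithoutRequiredRole() {
        // Authenticate as a user that lacks the role the secured method requires,
        // so the security aspect should reject the call.
        SecurityContextHolder.getContext().setAuthentication(
                new TestingAuthenticationToken("bob", "secret", "ROLE_USER"));
        secureService.deleteAllRecords(); // assumed to require ROLE_ADMIN
    }
}

// Hypothetical service interface; the secured implementation is a bean in test-context.xml.
interface SecureService {
    void deleteAllRecords();
}

If someone later changes the method so the pointcut or security rule no longer matches, this test fails, which is exactly the kind of guarantee the question asks about.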
Other aspects could be tested in a similar way using spring-test and mocks.

The only way to really test this is with some automated testing; otherwise, as you have pointed out, people could change implementation details and break the existing AOP. Spring provides integration testing support that will load the beans from your container (and also apply your AOP). http://docs.spring.io/spring/docs/3.0.0.M3/reference/html/ch10s03.html

Related

Not seeing people use interfaces and @Transactional in their Spring applications

I have been seeing this trend recently with some Spring Framework developers: when they create their service classes, they are not creating interfaces for them to implement, and they are not using @Transactional in their service classes even when they clearly need transactions.
Is there a good reason for this?
And one more question: Spring Boot has Session In View set to true by default. Why? I always thought that was really bad design, allowing developers to be lazy, letting N+1 queries happen all the time, and slowing down performance. If you know you are going to need the data for the UI, why not query for it in the service/repository classes with the smallest number of queries instead?
It's true that building services with interfaces plus implementation classes has been questioned lately. There's a classic post about this: https://octoperf.com/blog/2016/10/27/impl-classes-are-evil/ .
About transactions: for read operations, the readOnly = true parameter should be included in the @Transactional annotation to make sure the read is done properly. As for write operations, transaction management should of course be done with care, and the @Transactional annotation is there to make sure it is. Not using it is certainly a code smell, since a transaction will happen nonetheless; you just won't have any control over it.
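A small sketch of that convention (OrderService and its methods are made-up names for this example):

import java.util.Arrays;
import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    // Read operation: mark the transaction as read-only.
    @Transactional(readOnly = true)
    public List<String> findOrderNumbers() {
        // ... normally delegates to a repository ...
        return Arrays.asList("A-1", "A-2");
    }

    // Write operation: an explicit @Transactional gives you control over
    // commit and rollback instead of an implicit, unmanaged transaction.
    @Transactional
    public void placeOrder(String orderNumber) {
        // ... persist via a repository ...
    }
}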
Session In View set to true is a code smell too, and it should be used in learning environments or short-term experiments only. Batching, among other possibilities, can and should be used to avoid the N+1 problem.
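Batching is one option; another, matching the "query for it in the repository layer" idea from the question, is to fetch the association eagerly in a single query. A hedged sketch with Spring Data JPA, assuming a hypothetical Order entity with a lazy items collection:

import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

// JOIN FETCH loads the items together with their orders in one query,
// so no lazy loading (and no N+1 selects) happens later in the view.
// This pairs naturally with spring.jpa.open-in-view=false in application.properties.
public interface OrderRepository extends JpaRepository<Order, Long> {

    @Query("select distinct o from Order o join fetch o.items")
    List<Order> findAllWithItems();
}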

Is Java Spring really better than straight up Java programming

I have read that dependency injection is good for testing, in that a class can be tested without its dependencies, but the question comes to my mind: if Class A depends on Class B or C or any other class, then testing Class A independently of those classes yields a test result of zero, not a failed or passed test.
Class A was created to do something, and if it is not fed anything, whether via the new keyword or by setting up the extra files in Spring, Class A won't do any work.
About the idea of making code modular, readable and maintainable: the business classes became cleaner, but all we did was shift the confusion from dirty Java business classes to convoluted XML files, plus the interfaces used to inject our loosely coupled objects.
In short, it seems we have to make edits and changes to a file somewhere, right?
Please feel free to put me in my place if my understanding is lacking; I'm just a little irritated with learning Spring because I see the same amount of work, just rearranged.
Dependency injection is good for unit testing because you can individually test each method without that method depending on anything else. That way each unit test can test exactly one method.
I would say that if the XML is what's annoying you, check out Spring Boot. It's based on Java configuration, so there's no XML, and it simplifies a lot of configuration for you based on your classpath. When I first started with Spring I found the XML very daunting coming from a plain Java background, but the annotation-based configuration and the auto-configuration done by Spring Boot are extremely helpful for getting applications working quickly.
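As a rough sketch of what the annotation-only style looks like (the class and service names here are made up):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.stereotype.Service;

// Component scanning picks up GreetingService; no XML bean definitions are needed.
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

@Service
class GreetingService {
    String greet(String name) {
        return "Hello, " + name;
    }
}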
IMO the biggest advantage of using Spring is dependency injection, which makes your life easier. For example, if you would like to create a new service with three dependencies, you can create that class very easily using Spring. Without Spring, you end up writing various factory methods that return the instances you are looking for, which makes your code very verbose with static method calls. You may want to take a look at code repositories from before the Spring era.
Again, whether or not you use Spring is your personal call based on project complexity, but its other features and advantages can't be overlooked.
And XML files or Java configs are just the two ways of expressing Spring configuration; which one you prefer is a matter of personal flavour. The only thing is that you should be consistent all across your project.
I would suggest that you read Martin Fowler's great article on Inversion of Control and Dependency Injection to gain a better understanding of why frameworks like Spring can be really useful to solve a well known set of common dependency injection problems when writing software.
As others have mentioned, there is no obligation to use Spring; and whatever you can do with Spring, you can probably do it by other means like abstract factories, factory methods, or service locators.
If your project is small enough, then you probably wouldn't mind solving the dependency injection issues on your own using some design patterns like those mentioned above. However, depending on the size of your project, many would prefer to use a framework or a library that already packs a bunch of solutions to these recurrent head scratchers.
Regarding the advantages of dependency injection frameworks for unit testing, the key idea is that you don't need to test the dependencies of your class, only the class itself.
For example, most likely your application has a layered design. It is very common to have a data access class or a repository that you use to retrieve data from a datasource. Logically, you also have a class where you use that DAO.
Evidently, you have already written unit tests for your DAO, and therefore, when you're testing your business class (where the DAO is being used), you don't care about testing the DAO again.
Fortunately, since Spring requires some form of dependency injection for your DAO, this means your class must provide a constructor or a setter method through which we can inject that DAO into our business class, right?
Well, then during unit testing of your business class, you can conveniently use those injection points to inject your own fake DAO (i.e. a mock object). That way, you can focus on the testing of your business class and forget about retesting the DAO again.
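A minimal sketch of that idea, using made-up PersonDao and PersonService types and Mockito for the fake DAO:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PersonServiceTest {

    // Hypothetical dependency: in a real project this would be your DAO interface.
    interface PersonDao {
        String findNameById(long id);
    }

    // Hypothetical business class: constructor injection works for Spring and for tests.
    static class PersonService {
        private final PersonDao dao;

        PersonService(PersonDao dao) {
            this.dao = dao;
        }

        String greet(long id) {
            return "Hello, " + dao.findNameById(id);
        }
    }

    @Test
    public void greetsUsingWhateverDaoWasInjected() {
        // Inject a fake DAO through the same injection point Spring would use.
        PersonDao fakeDao = mock(PersonDao.class);
        when(fakeDao.findNameById(42L)).thenReturn("Alice");

        PersonService service = new PersonService(fakeDao);

        assertEquals("Hello, Alice", service.greet(42L));
    }
}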
Now compare this idea with other solutions you may have done on your own:
You satisfy the dependency directly by instantiating the DAO within your business class.
You use a static factory method within your code to gain access to the DAO.
You use a static method from a service locator within your code to gain access to the DAO.
None of these solutions makes your code easy to test, because there is no simple way to control exactly which dependency gets injected into your business class while testing it (e.g., how do you make the static factory method return a fake DAO for testing purposes?).
So, in Spring, using XML configuration or annotations, you can easily have different dependencies being injected into your service object based on a number of conditions. For example, you may have some configurations for testing that evidently would be different than those used in production. And if you have a staging environment, you may even have different XML configurations of dependencies for your application depending on whether it is running in production or integration environments.
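For instance, as a sketch (the DAO types are hypothetical), Spring profiles let the container pick a different implementation per environment without touching the business class:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Hypothetical types used only for this illustration.
interface PersonDao { String findNameById(long id); }
class JdbcPersonDao implements PersonDao {
    public String findNameById(long id) { return "from the real database"; }
}
class InMemoryPersonDao implements PersonDao {
    public String findNameById(long id) { return "from an in-memory stub"; }
}

@Configuration
public class DaoConfig {

    @Bean
    @Profile("production")
    public PersonDao personDao() {
        return new JdbcPersonDao(); // used when the "production" profile is active
    }

    @Bean
    @Profile("test")
    public PersonDao testPersonDao() {
        return new InMemoryPersonDao(); // swapped in for tests, no code changes needed
    }
}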
This pluggability of dependencies is the key winning factor here in my opinion.
So, as I was saying, my suggestion is that you first expand your understanding of what problems Spring core (and, in general, every dependency injection framework) is trying to solve and why that matters; that will give you a broader perspective on these problems and help you determine when it is a good idea to use Spring and when it is not.

Good strategy for Spring Framework end-to-end testing

So this is a rather "big" question, but what I'm trying to accomplish is the following:
I have a Spring application, MVC, JDBC (MySQL) and JSP running on tomcat.
My objective is to test the entire "stack" using a proper method.
What I have so far is JUnit using Selenium to simulate an actual user interacting with the application (this requires a dummy account), performing different validations such as checking whether an element is present on the page, whether the database has a specific value, or whether a value matches the database.
My first concern is that this actually uses the real database, so it's hard to test certain scenarios. I would really like to be able to mock the database and have it emulate specific account configurations, data states, etc.
My second concern is that, because I use whatever is in the database and the data is continuously changing, it is hard to predict behavior and therefore to assert properly.
I looked at Spring Test, but it only supports testing outside a servlet container, so no JSP and no JavaScript testing is possible.
I saw the DBUtils documentation, but I'm not sure it will help me in this case.
So, to my fellow developers, I would like to ask for tips to:
Run selenium tests on top of a mocked database
Allow different configs per test
Keep compatibility with Maven/Gradle
I have started with an ordered autowire feature to support this kind of stubbing.
It's basically an idea that I took over from the Seam framework, which I worked with in the past, but I couldn't find a similar thing in Spring yet.
The idea is to have a precedence annotation (framework, app, mock, ...) that is used to resolve the current implementation of an autowired bean. This is already easy in XML, but not with Java config.
So we have our normal repository beans with app precedence, and a test package stubbing these classes with mock precedence.
If both are on the classpath, Spring would normally fail with a duplicate-bean exception. In our case the extended BeanFactory simply takes the bean with the highest precedence.
I'm not sure whether Spring's @Order annotation could be used directly, but I preferred to have "well defined" precedence scopes anyway, so it is clear to our developers what this is about.
Note: while this is a nice approach to stub some beans for testing, I would not use it to replace a database definition; rather, go with an in-memory database like HSQL, as some previous answers have already mentioned.
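A minimal sketch of the in-memory database suggestion, using Spring's embedded-database support (the schema.sql script name is made up for this example):

import javax.sql.DataSource;

import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

public class TestDataSourceFactory {

    // Builds an in-memory HSQL DataSource that tests can use in place of MySQL.
    public static DataSource createTestDataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.HSQL)
                .addScript("schema.sql")   // hypothetical classpath script that creates the tables
                .build();
    }
}

In a test configuration this DataSource would simply be wired in where the production MySQL DataSource normally goes, which also makes it easy to load different data sets per test.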

Example use-cases for using Dependency Injection with the Play Framework

I am a big fan of Dependency Injection and the Play Framework, but am having trouble seeing how the two could be exploited together.
There are modules for Spring and Guice, but the way that Play works makes it hard for me to see how DI could be beneficial beyond some quite simple cases.
A good example of this is that Play expects JPA work to be done by static methods associated with the entity in question:
@Entity
public class Person extends Model {
    public static void delete(long id) {
        // look up the entity by primary key and remove it
        Person person = em().find(Person.class, id);
        em().remove(person);
    }
    // etc.
}
So there is no need for a PersonManager to be injected into controllers in the way it might for a Spring J2EE application. Instead a controller just calls Person.delete(x).
Obviously, DI is beneficial when there are interfaces with external systems, as the concrete implementation can be mocked for testing etc., but I don't see much benefit for a self-contained Play application.
Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?
I believe from this sentence you wrote:
"Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?"
that before answering the DI question I should note something: transactions are managed automatically by Play. If you check the model documentation you will see that a transaction is automatically created at the beginning of a request, and committed at the end. You can roll it back via JPA or it will be rolled back if an exception is raised.
I mention this because from the wording of your sentence I'm not sure if you are aware of this.
Now, on DI itself, in my (not-so-extensive) experience with DI, I've seen it used mainly to:
Load the ORM (Hibernate) factory/manager
Load Service classes/DAOs into another class to work with them
Sure, there are more scenarios, but these probably cover most of the real usage. Now:
The first one is irrelevant to Play as you get access to your JPA object and transaction automatically
The second one is fairly irrelevant too, as you mainly work with static methods in controllers. You may have some helper classes that need to be instantiated, and some may even belong to a hierarchy (common interface), so DI would be beneficial there. But you could just as well create your own factory class and get rid of the DI jars.
There is another matter to consider here: I'm not so sure about Guice, but Spring is not only DI, it also provides a lot of extra functionalities which depend on the DI module. So maybe you don't want to use DI in Play, but you want to take advantage of the Spring tools and they will use DI, albeit indirectly (via xml configuration).
The problem, in my humble opinion, with the static approach of Play! is that it makes testing harder. Once you approach the HTTP-versus-object-orientation problem with static members and objects that carry the HTTP message data (request and response), you trade the ability to keep your objects loosely coupled with the rest of your project's classes for not having to create new instances on each request.
One good example of a different design is servlets: a servlet also extends a base class, but it approaches the problem with a single instance (by default; there are configurations that enable more instances).
I believe a mix of the two approaches might be better. Having a singleton of each controller would give the same characteristics as a fully static class and would allow dependency injection of some kinds of objects, though not objects with request or session scope, since then the controller would need to be created for every new request. Moreover, it would improve testability by inverting control of the dependencies, thus allowing arbitrary injection points.
Dependencies would be injected by the container or by a test, probably using mocks for the heavy stuff that most likely has already been tested before.
From my point of view, this static model pushes developers away from testing controllers, because extending FunctionalTest starts the application server, so you pay the price of heavy objects like repositories, services, crawlers, HTTP clients, etc. I don't want to wait for a lot of objects to bootstrap just to check whether some code was executed in the controller; tests should be quick and clear, so that developers love them as their programming assistant and guide.
DI is not the ultimate solution to use everywhere... Don't use DI just because you have it at hand. In Play, you don't need DI to develop controllers, models, etc., but sometimes it can be a nice design: IMO, you could use it if you have a service with a well-known interface but you would like to develop and test this service outside Play, and even test your Play project with just a dummy service so that it does NOT depend on the full service implementation. In that case DI can be interesting: you plug the service loosely into Play. In fact, this is the original use case for DI, AFAIK...
I just wrote a blog post about setting up a Play Framework application with Google Guice. http://geeks.aretotally.in/dependency-injection-with-play-framework-and-google-guice
I see some benefits, especially when a component of your application requires different behavior based on a certain context or something like that. But I definitely believe people should be selective about what goes into a DI context.
It shows again that you should only use dependency injection if you really gain a benefit from it. If you have complex services it's useful, but in many cases it's not. Read the chapter about models in the Play documentation.
To give you an example of where you can use DI with Play: perhaps you must perform a complex calculation, or you create a PDF with a reporting engine. There I think DI can be useful, especially for testing, and the Guice module and Spring module can help you.
Niels
As of a year and some change later, Play 2.1 now has support for dependency injection in controllers. Here's their demo project using Spring 3, which lays it out pretty clearly.
Edit: here's another example using Guice and Scala, if that's your poison.

Performance impact of using aop

We have started to use Spring AOP for cross-cutting aspects of our application (security and caching at the moment).
My manager worries about the performance impact of this technology although he fully understands the benefits.
My question, did you encounter performance problems introduced by the use of aop (specifically spring aop)?
As long as you have control of your AOP, I think it's efficient. We had performance problems anyway, so by our own reasoning we were not fully in control ;) This was mostly because it's important that anyone who writes aspects has a full understanding of all the other aspects in the system and how they interrelate. If you start doing "smart" things you can outsmart yourself in a jiffy. Doing smart things in a large project with lots of people who only see small parts of the system can be very dangerous performance-wise. This advice probably applies without AOP too, but AOP lets you shoot yourself in the foot in some really elegant ways.
Spring also uses proxying for scope manipulations, and that's an area where it's easy to incur undesired performance losses.
But given that you have control, the only real pain point with AOP is the effect on debugging.
Where performance has been a concern, we have used AspectJ to great effect.
Because it uses bytecode weaving (compile time vs. runtime makes quite the difference) it's one of the fastest AOP frameworks out there. See: AOP Benchmarks
When I used it, I didn't - but then my application isn't your application.
If you use it for calls which are used in a very tight loop, there's the opportunity for a significant performance hit. If it's just used to check security once per request and cache various things, I can't see how it's likely to be significant - but that's why you should profile and benchmark your app.
I realise that "measure with your app" probably isn't the answer you were looking for, but it may well be the one you guessed you'd get :)
If you are using proxy-based AOP, you are talking about 1 additional Java method invocation per aspect applied. The performance impact there is pretty negligible. The only real concern is the creation of the proxies but this usually happens just once on application startup. The SpringSource blog has a great post on this:
http://blog.springsource.com/2007/07/19/debunking-myths-proxies-impact-performance/
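To make that concrete, here is a minimal Spring AOP sketch (the aspect name and the pointcut's package are made up); the @Around advice is the single extra invocation wrapped around each advised method:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class TimingAspect {

    // Advises every public method of beans in a hypothetical service package.
    @Around("execution(public * com.example.service..*.*(..))")
    public Object time(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed(); // the one extra call before the target method runs
        } finally {
            System.out.println(pjp.getSignature() + " took "
                    + (System.nanoTime() - start) + " ns");
        }
    }
}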
In theory, if you use AOP to do what you could otherwise do with hard coupling, there is no performance issue, no overhead and no extra method calls, unless you weave for nothing. An AOP framework offers you a way to remove the hard coupling and factor out your cross-cutting concerns.
In practice, an AOP framework can introduce three types of overhead:
fire-time
interception mechanism
consumer integration (the way an advice is developed)
For more details you can refer to when-is-aop-code-executed.
Just be careful how you implement an advice, because cross-cutting code is a temptation for boxing/unboxing and reflection (both expensive in terms of performance).
Without an AOP framework (hard-coupling your cross-cutting concerns), you can more easily develop the equivalent of your advices (dedicated to each case) without boxing/unboxing and reflection.
You have to know that most AOP frameworks don't offer a way to completely avoid boxing/unboxing and reflection.
I developed one to address most of these missing needs, concentrating on three things:
user friendly (lightweight, easy to learn)
transparent (no breaking code to include)
efficient (no boxing/unboxing, no reflection in nominal user code, and a good interception mechanism)
You can find my open-source project here: Puresharp API .NET 4.5.2+, previously NConcern .NET AOP Framework.
Eleven years after the question, look how degenerate this situation has become.
Example: the vast majority think it is OK and normal to put a simple @Transactional Spring Java annotation on some method and let Spring bridge the caller and the callee through proxied components. Now they have 20+ stack frames of undebuggable 'magic' code. The JIT compiler is rapidly overwhelmed and can no longer attempt inlining, or it ends up bloating memory with tons of generated classes.
There is no limit to laziness in this era of 'framework users'. No wonder end-to-end times for trivial HTTP calls went from 100 ms to 10 seconds. No wonder you need 2 GB to run a lousy servlet container that used to run in 128 MB. And don't get me started on the cost of logging exception stack traces...
Have you ever thought about an AOP tool that adds aspects to objects at runtime, when you need them? There is one for .NET, "Add Aspects to Object Using Dynamic Decorator" (http://www.codeproject.com/KB/architecture/aspectddecorator.aspx). I believe you could write a similar one for Java.
If you are using some framework for aspects, there can be performance issues. And if you build an abstraction on top of a framework and aspect handling is done by that framework, it becomes very difficult to find the cause of performance problems. If you are really concerned about performance and every small time slice matters, I suggest writing your own aspects. No one wants to reinvent the wheel, but sometimes that can be the best option: you can write your own implementation of the AOP Alliance abstractions.
I have used Spring AOP in a batch process in my current project to manage database transactions.
At first we figured there wouldn't be a performance problem, but we didn't factor in that we called the database thousands of times. One aspect call in AOP doesn't affect performance much, but multiply that by thousands, and it turned out the new system was worse than the old one, due to these extra method calls.
I'd say that AOP is a great system to use, but try to take note of how many method calls it adds to your application.

Resources