Client libraries in Spring Boot microservices

Three years ago I joined my first microservices project as a developer. I didn't know anything about microservice concepts at the time. The project was built as Spring Boot microservices. In general, nothing special, except that the project applied a quite controversial way of integrating microservices based on client libraries. I think those client libraries were made in a naive way. I'll try to describe their main idea.
There are three modules in the project: *-api, *-client and *-impl. The *-impl is a full-fledged REST service and *-client is a client library for this REST service. The *-impl and *-client modules depend on the *-api (they import *-api as a Maven dependency). The *-api in turn contains Java interfaces which must be implemented both by the @RestController classes from the *-impl module and by the classes that implement the client library for this REST service (via RestTemplate or FeignClient). The *-api also usually contains DTOs, which may be covered by Bean Validation and Swagger annotations. In some cases those interfaces may carry @RequestMapping annotations from Spring MVC. Thus the @RestController implementation and a FeignClient inherit the same @RequestMapping at the same time.
*-api

@ApiModel
class DTO {
    @NotNull
    private String field;
    // getters & setters
}

interface Api {
    @RequestMapping("/api")
    void method(DTO dto);
}

*-client

@FeignClient("api")
interface Client extends Api {
    // void method(DTO) is inherited and implemented at runtime by Spring Cloud Feign
}

*-impl

@RestController
class ApiImpl implements Api {
    public void method(@Validated DTO dto) {
        // implementation
    }
}
It's not hard to guess that if some other microservice pulls in the *-client dependency, it may get unpredictable transitive dependencies on its classpath. Tight coupling between microservices also appears.
I decided to dedicate some time to researching this issue and discovered some concepts. First of all I got acquainted with widespread opinions like this one, or the one from Sam Newman's famous Building Microservices book (chapter "Client Libraries"). I also learned about Consumer-Driven Contracts and their implementations, Pact and Spring Cloud Contract. I decided that if I ever started a new project with Spring Boot microservices, I would try not to make client libraries and to couple microservices by Consumer-Driven Contracts only. That way I hoped to reach a minimum of coupling.
After that project I participated in another one, and it was built in nearly the same way as the first one regarding client libraries. I tried to share my research with the team, but I didn't get any feedback and the whole team continued to make client libraries. After several months I left the project.
Recently I became a developer on my third microservices project, where Spring Boot is used too. And I found that it also uses the same approach with client libraries as the previous two projects. There I also couldn't get any feedback about using Consumer-Driven Contracts.
I would like to know the opinion of the community. Which way do you use on your projects? Is the above-mentioned way with client libraries reasonable?
Appendix 1.
@JRichardsz's questions:
What do you mean by client? A client of a REST API is a kind of SDK provided by the API owner to allow clients to consume it in an easy way instead of low-level HTTP implementations.
What do you mean by integrations? Are integration tests what you need?
I think your requirement is related to how to organize source code between several APIs. Is that correct?
Answers:
Here I consider only Spring/Spring Cloud. If I build a microservice with Spring Boot and I want to interact/integrate (this is what I mean by "integrations") with another (micro)service, I can use RestTemplate (it's a kind of client library, isn't it?). If I built a microservice with Spring Boot + Spring Cloud, I could use Spring Cloud OpenFeign for interactions (or integration) with another (micro)service. I think Spring Cloud OpenFeign is also a kind of client library, isn't it?
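For illustration, here is a minimal sketch of the kind of integration I mean, using plain RestTemplate and locally defined DTOs instead of a shared *-client library (the URL and field names are hypothetical):

import org.springframework.web.client.RestTemplate;

public class HelloIntegration {

    // DTOs are duplicated locally instead of being imported from a shared *-api jar
    public static class HelloRequest {
        public String name;
    }

    public static class HelloResponse {
        public String greeting;
    }

    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        HelloRequest request = new HelloRequest();
        request.name = "StackOverflow";
        // POST to the other microservice; only the HTTP contract is shared
        HelloResponse response = restTemplate.postForObject(
                "http://localhost:8181/hello", request, HelloResponse.class);
        System.out.println(response.greeting);
    }
}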
In my general question I talk about custom client libraries which were created by the teams where I worked. For example, there are two projects: microserviceA and microserviceB. Each of these projects contains three Maven modules: *-api, *-client and *-impl. It's implied that the *-client Maven module includes the *-api Maven module. The *-api Maven module is also used as a dependency in the *-impl Maven module. When microserviceA (the microserviceA-impl Maven module) wants to interact with microserviceB, it imports the microserviceB-client Maven module. Thus microserviceA and microserviceB are tightly coupled.
By integrations I mean interactions between microservices. For example, microserviceA interacts/integrates with microserviceB.
My point is that microserviceA and microserviceB must not share common source code (via a client library). And that's why I ask these questions:
Which way do you use on your projects? Is the above-mentioned way with client libraries reasonable?
Appendix 2.
I'll try to explain in detail and with examples.
Introduction.
When I participated in projects which were built as microservices, they used the same way of implementing interactions between microservices, namely "client libraries". These are not client libraries which encapsulate low-level HTTP interactions, serialization/deserialization of the HTTP body and so on, like RestTemplate or FeignClient. They are custom client libraries whose only purpose is to make interactions (request/response) with one particular microservice. For example, there is some microservice-b which offers a microservice-b-client.jar (a custom client library), and microservice-a should use this jar to interact with microservice-b. It's very similar to an RPC implementation.
Example.
microservice-b project
microservice-b-api maven module
pom.xml:
<artifactId>microservice-b-api</artifactId>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>io.springfox</groupId>
        <artifactId>springfox-swagger2</artifactId>
        <version>2.9.2</version>
    </dependency>
    <dependency>
        <groupId>javax.validation</groupId>
        <artifactId>validation-api</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>
HelloController interface:
@Api("Hello API")
@RequestMapping("/hello")
public interface HelloController {

    @PostMapping
    HelloResponse hello(@RequestBody HelloRequest request);
}
HelloRequest dto:
@Getter
@Setter
@ApiModel("request model")
public class HelloRequest {

    @NotNull
    @ApiModelProperty("name property")
    private String name;
}
HelloResponse dto:
@Getter
@Setter
@ApiModel("response model")
public class HelloResponse {

    @ApiModelProperty("greeting property")
    private String greeting;
}
microservice-b-client maven module
pom.xml:
<artifactId>microservice-b-client</artifactId>
<dependencies>
    <dependency>
        <groupId>my.rinat</groupId>
        <artifactId>microservice-b-api</artifactId>
        <version>0.0</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>
</dependencies>
HelloClient interface:
@FeignClient(value = "hello", url = "http://localhost:8181")
public interface HelloClient extends HelloController {
}
microservice-b-impl maven module
pom.xml:
<artifactId>microservice-b-impl</artifactId>
<dependencies>
    <dependency>
        <groupId>my.rinat</groupId>
        <artifactId>microservice-b-client</artifactId>
        <version>0.0</version>
    </dependency>
</dependencies>
MicroserviceB class:
@EnableFeignClients
@EnableSwagger2
@SpringBootApplication
public class MicroserviceB {

    public static void main(String[] args) {
        SpringApplication.run(MicroserviceB.class, args);
    }
}
HelloControllerImpl class:
@RestController
public class HelloControllerImpl implements HelloController {

    @Override
    public HelloResponse hello(HelloRequest request) {
        var hello = new HelloResponse();
        hello.setGreeting("Hello " + request.getName());
        return hello;
    }
}
application.yml:
server:
  port: 8181
microservice-a project
pom.xml:
<artifactId>microservice-a</artifactId>
<dependencies>
    <dependency>
        <groupId>my.rinat</groupId>
        <artifactId>microservice-b-client</artifactId>
        <version>0.0</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
    </dependency>
</dependencies>
MicroserviceA class:
@Slf4j
@EnableFeignClients(basePackageClasses = HelloClient.class)
@SpringBootApplication
public class MicroserviceA {

    public static void main(String[] args) {
        SpringApplication.run(MicroserviceA.class, args);
    }

    @Bean
    CommandLineRunner hello(HelloClient client) {
        return args -> {
            var request = new HelloRequest();
            request.setName("StackOverflow");
            var response = client.hello(request);
            log.info(response.getGreeting());
        };
    }
}
Result of MicroserviceA run:
2020-01-02 10:06:20.623 INFO 22288 --- [ main] com.example.microservicea.MicroserviceA : Hello StackOverflow
Here you can see the full example.
Question.
I think this way of integration between microservices (via custom client libraries) is the wrong way. First of all, the microservices become tightly coupled. Second, the client library brings undesirable dependencies. Despite these circumstances, the teams where I worked used this odd way of integrating microservices. I would like to know: is this way of integrating microservices reasonable (correct)? What is the best practice for integrations between microservices?
P.S. In my opinion, Spring Boot microservices should be coupled by Consumer-Driven Contracts (Spring Cloud Contract or Pact) and nothing else. Do you think this is the right way?

Here is a strategy to build dozens of APIs and test them. It works, and I have used it in my jobs.
Let's assume I work for acme.org and I need to develop two APIs: employee-api and customer-api. You could use the -microservice suffix instead of -api.
Parents and Libraries
If I am going to develop several APIs and apps with my team, we need to reuse code across our developments, so the first task before starting development is to create our common libraries and the relationships between them.
For this task I recommend:
use Maven parents
constantly review the Java code of world-class libraries like Spring, Mule ESB, Pentaho, Apache, and the Google/Amazon SDKs. They have good ways of naming their classes, libraries and relationships. See for example this strategy: the starters of Spring Boot
Here are some of my libraries, which are Maven projects (Maven parents (.pom) and plain libraries (.jar)):
acme-base
parent project that pins the Java version (via the maven-compiler-plugin) for all applications, plus other general properties like project.build.sourceEncoding, etc.
any Java project in acme.org must use this parent. It would be a pain to manage several APIs with several Java versions.
acme-base-spring
parent project with the main Spring artifacts and versions: spring-web, spring-core, spring-context
not every development in acme.org will be a REST API. It could be a library, a scheduled job, a demo, etc. If they need some Spring libraries, they must use this parent
acme-base-spring-boot-api
parent project with general Spring Boot configurations and starters.
Spring Boot already reduces configuration a lot, but if you have several apps, you can reduce it even more.
this project must use spring-boot-starter-parent as parent and acme-base-spring as super pom.
this project has these build configs and dependencies: spring-boot-maven-plugin, spring-boot-starter-actuator, spring-boot-starter-test, spring-boot-devtools, spring-boot-starter-web and spring-boot-starter-tomcat
acme-base-api
this parent project must use acme-base-spring-boot-api as parent (see the sketch below).
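A hypothetical sketch of what this acme-base-api parent pom could look like, given the chain of parents described above (coordinates are illustrative):
<!-- hypothetical sketch of the acme-base-api parent pom -->
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.acme.base</groupId>
    <artifactId>acme-base-api</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>
    <parent>
        <groupId>org.acme.base</groupId>
        <artifactId>acme-base-spring-boot-api</artifactId>
        <version>1.0.0</version>
    </parent>
</project>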
With these parents, your employee-api could have a minimal pom like:
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.acme.api</groupId>
    <artifactId>employees-api</artifactId>
    <version>1.0.0</version>
    <packaging>jar</packaging>
    <parent>
        <groupId>org.acme.base</groupId>
        <artifactId>acme-base-api</artifactId>
        <version>1.0.0</version>
    </parent>
</project>
employee-model
this project is not a parent; it is just a library (.jar)
the goal of this project is to store all the entities of acme.org
it must use acme-base as parent.
if you are going to use JPA annotations, add the JPA libraries here.
employee-persistent
a Java library which uses employee-model as a dependency.
it could store the DAOs or JPA repositories
With these parents and dependencies, your employee-api could have a minimal pom like:
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.acme.api</groupId>
    <artifactId>employees-api</artifactId>
    <version>1.0.0</version>
    <packaging>jar</packaging>
    <parent>
        <groupId>org.acme.base</groupId>
        <artifactId>acme-base-api</artifactId>
        <version>1.0.0</version>
    </parent>
    <dependencies>
        <dependency>
            <groupId>org.acme.api.employee</groupId>
            <artifactId>employee-model</artifactId>
        </dependency>
        <dependency>
            <groupId>org.acme.api.employee</groupId>
            <artifactId>employee-persistent</artifactId>
        </dependency>
    </dependencies>
</project>
Then customer-api and employee-api will share code that both of them need, so a new library is required:
acme-common
this project must be used as a dependency in customer-api, employee-api and any other API in acme.org
Here is a list of some common libraries shared across multiple REST APIs:
acme-api-log
usage of a logger and MDC to add metadata to each log entry
externalizing logs with Google Stackdriver, Graylog, etc.
acme-api-audit
adds a unique identifier to each request for easy traceability and error support.
acme-api-errors
exceptions, controller advice, exception handlers, etc. (see the sketch after this list)
acme-api-health
endpoints like /health and /status which must be published by every API.
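To make the acme-api-errors idea concrete, here is a minimal sketch of a shared exception handler such a library might contain (the class name and error payload are hypothetical):

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

// Packaged once in acme-api-errors, picked up by component scanning in every API
@RestControllerAdvice
public class AcmeApiExceptionHandler {

    @ExceptionHandler(IllegalArgumentException.class)
    public ResponseEntity<String> badRequest(IllegalArgumentException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
    }

    @ExceptionHandler(Exception.class)
    public ResponseEntity<String> serverError(Exception ex) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body("Unexpected error");
    }
}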
Test with Spring Cloud Contract
Nowadays, applications are thoroughly tested - whether it be unit tests, integration tests, or end-to-end tests. It's very common in a microservice architecture that a service (consumer) communicates with another service (producer) to complete a request.
To test them, we have two options:
#1 Deploy all microservices and perform end-to-end tests using a library like Selenium: requires more infrastructure
#2 Write integration tests by mocking the calls to other services: the mocks won't reflect changes in the production API
With approach #2, our integration test cases will still work just fine, because the other APIs are mocked. The issue will likely be noticed in a staging or production environment instead of in the elaborate test cases.
Spring Cloud Contract provides us with the Spring Cloud Contract Verifier exactly for these cases. It creates a stub (.jar) from the producer (API) which can be used by the consumer service to mock the calls.
So, instead of writing our own local mocks, we can download the stubs from the producer (API) to create more realistic mocks.
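For illustration, a minimal consumer-side test sketch of this idea, reusing the HelloClient and HelloRequest types from the question's example and assuming hypothetical stub coordinates (it also assumes the consumer app enables Feign clients):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;

import static org.assertj.core.api.Assertions.assertThat;

// Stub Runner downloads the producer's stub jar and starts a WireMock server
// on port 8181, so no shared client library is needed to test the integration
@SpringBootTest
@AutoConfigureStubRunner(
        ids = "my.rinat:microservice-b-impl:+:stubs:8181", // hypothetical coordinates
        stubsMode = StubRunnerProperties.StubsMode.LOCAL)
class HelloClientContractTest {

    @Autowired
    private HelloClient client;

    @Test
    void returnsGreetingDefinedByTheContract() {
        HelloRequest request = new HelloRequest();
        request.setName("StackOverflow");
        assertThat(client.hello(request).getGreeting()).isEqualTo("Hello StackOverflow");
    }
}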
Recommended reading : https://stackabuse.com/spring-cloud-contract/
sdk or rest-client
As the previous point said, it's very common in a microservice architecture that a service (consumer) communicates with another service (producer) to complete a request.
This is commonly implemented with RestTemplate.
I have another strategy: develop a kind of SDK provided by any REST API. This SDK contains the low-level or complex HTTP invocation with RestTemplate: HTTP methods, JSON binding, errors, etc.
Example: if employee-api needs to consume some endpoint (/verify-existence) of customer-api, we need:
customer-api-sdk or customer-api-rest-client. This library has the source code required to consume customer-api.
employee-api adds customer-api-sdk as a dependency. Instead of a low-level HTTP implementation to consume customer-api, it will only need:
CustomerApiPassport passport = new CustomerApiPassport();
passport.setBaseUrl("http://customer-api.com");
passport.etc();

CustomerApiSecurity security = new CustomerApiSecurity();
security.setToken("");
security.setBasicAuthentication("user", "password");
security.etc();

CustomerApiSdk customerSdk = new CustomerApiSdk();
customerSdk.setPassport(passport);
customerSdk.setSecurity(security);

VerifyCustomerExistenceRequest request = new VerifyCustomerExistenceRequest();
request.setCustomerPersonId("215456");

// consume the /verify-existence endpoint
VerifyCustomerExistenceResponse response = customerSdk.verifyCustomerExistence(request);
response.exist(); // true if the customer exists
customer-api-sdk is not only for employee-api. It could be used by any API which needs to consume some endpoint of customer-api.
This customer-api-sdk could also be used for mocks, like the stub jars generated by Spring Cloud Contract.
The idea of the sdk and passport was taken from the Google and Amazon SDKs (which perform HTTP requests to their platforms), the SOAP clients generated by Axis, JAX-WS, etc.

Disclaimer:
I don't think there is a single definitive answer to your questions. More specifically, I believe that the best solution changes over time based on the evolution of a project so the following can be considered in large part "personal opinions". My hope is that people will express their opinion like I did rather than expressing a vote on my answer, whether it's an upward or downward vote.
One of the purposes of microservices is indeed to "simplify" integration and evolution of a software product, which does in fact raise questions as to the benefit of locking the client to a common API library.
However, as there are even stories of companies migrating back from microservices to monoliths, I would never dare to state that an approach is definitely wrong or definitely right.
In some cases it may not be such a bad idea to use client libraries, as the additional burden can force undisciplined developers into coordinating and guaranteeing that an updated client will always be developed alongside the actual service. Still, it wouldn't be my first choice unless I had specific needs, probably tied more to the variance in the level of skills and methodologies used by different development teams within a company.
I personally believe that the simplest possible approach (customer contracts) works well for applications with a small number of clients/customers (another microservice IS a client/customer) and allows you to be immediately productive, which helps reduce the time to market/release while supporting the startup phase of a company.
As the company grows, the need for more structure kicks in, and choices need to be reviewed due to the increased maintenance costs and frustration related to customer contracts. At that point, the available information about the business and related needs greatly helps in selecting the "next" way to go, which is probably customer-driven contracts, by virtue of them being closed and complete. That is desirable for a number of reasons and is also something you can achieve only after you have learned what matters to your customers.
Painful experiences may lead to the choice of relying on client libraries, but I believe that's uncommon and more likely to happen on "brand new" projects, by virtue of the technical lead being left with "unprocessed trauma" from a previous project that stayed with the customer contracts pattern longer than it should have, instead of migrating to customer-driven contracts.
The key to me is making one critical choice at the beginning that leaves room for the possibility to almost completely change your mind in the future, thus allowing you to support future growth without having to "pull the plug" on older clients immediately, as business continuity just doesn't allow for such a move. One way to do it is to give the API a codename included in the URL, allowing future versions to be neatly separated and giving consumers a grace period to upgrade. The codename is, effectively, the name of the product your company is selling.
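For instance, a minimal sketch of the codename-in-the-URL idea (the codename "purple" and the endpoint are made up):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// "purple" is the product codename baked into every URL; a future "orange" API
// can be mounted side by side while old consumers get a grace period to migrate
@RestController
@RequestMapping("/purple/v1")
public class PurpleHelloController {

    @GetMapping("/hello")
    public String hello() {
        return "Hello from the purple API";
    }
}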
To explicitly try and answer your questions:
Which way do you use on your projects? It really depends on who's going to use my microservice. If it's a specific actor within my company who is also using Java, I prefer to provide a full-blown client (with related source code) when I'm exposing a microservice to them, and I ask that they do the same when they are exposing a microservice to me. This allows me to avoid at least some problems, like them blaming me because "it doesn't work" when the problem is actually in their client, as well as preventing misunderstandings about the inner workings of a given microservice, and it pushes actors to develop microservices that are reasonably stable (that is: people will refrain from constantly changing the "signatures" in order to avoid having to rebuild the corresponding client... basically, I exploit our innate laziness). They are clearly not required to use my client and I am not required to use theirs either: it's more of a "statement" that the microservice works, meant to allow each party to figure out where the real problem is by inspecting each other's client code. This also lets each party see how the other codes, which gives an insight into the quality that can be expected from the microservice's code and, thus, how robust and predictable it is hopefully going to be.
Is the above-mentioned way with client libraries reasonable? Is this way of integrating microservices reasonable (correct)? Sometimes it is, sometimes it's not. I wouldn't use client libraries, to be honest, but for simple architectures it could work and could ensure that everybody is on the same page, so it's not necessarily a bad thing.
What is the best practice for integrations between microservices? I believe it changes over time, based on the project. I would start by letting consumers fend for themselves in favour of a quicker time-to-market, well aware that I will necessarily be starting with consumer contracts (even though I will try to future-proof the architecture to some extent), and let experience and growth solidify the architecture into a consumer-driven-contracts one.
Do you think it is the right way? The right way is the one that allows you to get a product to the market well before your competitors, but not so fast that it hampers future growth. That's not much of an answer, to be honest, but the truth is your question is very difficult to answer and depends highly on the scope of a project. The point is that the right way will most likely change over time, so you should aim at choosing a solution that you believe will allow growth for 3-5 years, while at the same time providing a contingency that will allow you to gracefully migrate to one that will support growth for the subsequent 8-10 years. This also means that "the right way" is not just a technical matter but also a management approach to the business, specifically one that allows methodical planning for the future.

Related

Spring Beans Dependency Injection with different configurations

I have the following doubt, probably a very basic one, that I have already managed to work out, but I would like to hear whether there is a different approach or whether I am actually getting something wrong.
Background
I have an implementation with Spring Boot with a classic layered approach using Spring stereotypes and wiring it all up using field DI (yes... I am aware it is not the best approach)
Service -> Repository -> (Something else)
In my case, (something else) is a third-party REST API which I am calling using a RestTemplate with a specific configuration. The current solution has many services and repositories to deal with each of the third party's domain entities. All of them use the same RestTemplate bean. The bean is injected at the repository level.
Problem
So now I have been told by the third-party system that, depending on which business scenario my local services are executing, the repositories need to use one of two different users; therefore, I assume that a different RestTemplate config needs to be added. At first glance this drives me to move the decision of which RestTemplate to use even higher: to the service level, not the repository level. So I would need to have, let's say, a service A under a specific context whose dependency (the repository) needs a specific template, and the same service A under another context with a different dependency config.
Approach
The approach that I took is to have a configuration class where I generate different versions of the same service with different dependencies; in particular, their repositories each use a specific template. Github Example
This approach seems odd to me because up till now I have never had to do something like this ...and it leaves me with the doubt whether something different can be done to achieve the same.
Another approach would be to inject both RestTemplates into the base repository, with an extra parameter in each method used at the service and repository level to decide which one to use. Which I dislike.
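A minimal sketch of the configuration-class approach with qualified beans (all class names and credentials here are hypothetical; RestTemplateBuilder.basicAuthentication requires Spring Boot 2.1+):

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class ThirdPartyClientConfig {

    // One RestTemplate per third-party user
    @Bean("userARestTemplate")
    RestTemplate userARestTemplate(RestTemplateBuilder builder) {
        return builder.basicAuthentication("userA", "secretA").build();
    }

    @Bean("userBRestTemplate")
    RestTemplate userBRestTemplate(RestTemplateBuilder builder) {
        return builder.basicAuthentication("userB", "secretB").build();
    }

    // Two instances of the same repository class, each wired to a different template;
    // services pick the variant matching their business scenario by qualifier
    @Bean
    ThirdPartyRepository scenarioARepository(@Qualifier("userARestTemplate") RestTemplate restTemplate) {
        return new ThirdPartyRepository(restTemplate);
    }

    @Bean
    ThirdPartyRepository scenarioBRepository(@Qualifier("userBRestTemplate") RestTemplate restTemplate) {
        return new ThirdPartyRepository(restTemplate);
    }

    // hypothetical repository that performs the actual third-party calls
    public static class ThirdPartyRepository {
        private final RestTemplate restTemplate;

        public ThirdPartyRepository(RestTemplate restTemplate) {
            this.restTemplate = restTemplate;
        }
    }
}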

How to capture metrics for REST endpoints via the Micrometer library

I am working on a component that is based on the Spring Framework. We have not yet moved to Spring Boot.
My requirement is to capture metrics (JVM/HTTP/disk space) for my component, which runs on an application server.
I came across the Micrometer library, which can be utilized to capture such metrics and can be integrated very well with Prometheus.
What I did was add the below dependency:
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
    <version>1.7.5</version>
</dependency>
After adding the dependency I exposed a REST endpoint and added some simple logic to pull the metrics. Doing that, I was able to fetch some basic JVM metrics. I referred to the link below, which explains how to capture metrics (see the sketch after the link).
(https://micrometer.io/docs/ref/jvm)
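For context, the kind of "simple logic" involved looks roughly like this (a sketch based on the Micrometer JVM binders; the REST endpoint wiring is omitted):

import io.micrometer.core.instrument.binder.jvm.JvmGcMetrics;
import io.micrometer.core.instrument.binder.jvm.JvmMemoryMetrics;
import io.micrometer.core.instrument.binder.jvm.JvmThreadMetrics;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class MetricsBootstrap {

    public static PrometheusMeterRegistry createRegistry() {
        PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);
        // bind the standard JVM meters described in the Micrometer docs
        new JvmMemoryMetrics().bindTo(registry);
        new JvmGcMetrics().bindTo(registry);
        new JvmThreadMetrics().bindTo(registry);
        return registry;
    }

    // the REST endpoint then simply returns registry.scrape() as text for Prometheus
}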
However, in addition to JVM metrics I also want to capture HTTP request metrics (e.g. the number of requests and the time taken by HTTP calls to the REST services).
In my application there are quite a few REST endpoints. Is there any way to do that? I was not able to find any good documentation on it.
Any help would be highly appreciated.
Thanks
Sachin
As you said, Spring Boot does this out of the box, so if you can move there, you don't need to do anything.
In order to instrument your web endpoints you can do a few things:
You can create a Filter and instrument all of your calls there (see the sketch after this list).
This is what Spring Boot does; you can take a look at/copy WebMvcMetricsFilter.
You can add @Timed to your controllers and set up a TimedAspect.
You can manually instrument your controllers; see the docs.
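A rough sketch of the Filter approach under plain (non-Boot) Spring MVC; the filter name and tag set are my own, and a real implementation should use a bounded set of URI tags (e.g. the matched route pattern) to avoid cardinality explosion:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

// Times every request passing through the servlet pipeline;
// register it in web.xml or via a WebApplicationInitializer
public class HttpMetricsFilter implements Filter {

    private final MeterRegistry registry;

    public HttpMetricsFilter(MeterRegistry registry) {
        this.registry = registry;
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        Timer.Sample sample = Timer.start(registry);
        try {
            chain.doFilter(request, response);
        } finally {
            // warning: tagging by raw URI can explode metric cardinality
            sample.stop(registry.timer("http.server.requests",
                    "method", httpRequest.getMethod(),
                    "uri", httpRequest.getRequestURI()));
        }
    }
}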
After following the above suggestions I was actually able to see my HTTP metrics.
I simply created a configuration class annotated with @EnableAspectJAutoProxy and defined a bean inside the class as below:
@Bean
public TimedAspect timedAspect(MeterRegistry registry) {
    // the MeterRegistry (e.g. PrometheusMeterRegistry) is injected by Spring
    return new TimedAspect(registry);
}
I then added the @Timed annotation to my REST API POST methods, and after that I was able to see the statistics in the Prometheus dashboard.
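For reference, a minimal hypothetical controller showing where the @Timed annotation goes (the metric name and path are made up):

import io.micrometer.core.annotation.Timed;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    // recorded as the "orders.create" timer once a TimedAspect bean is registered
    @Timed("orders.create")
    @PostMapping("/orders")
    public String create(@RequestBody String order) {
        return "created";
    }
}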
This really works!

Spring design pattern for a common update service or module

I have a use case where I would like to build a common interface or service which can update the entities of an application. An example case is shown below:
Now every application has to handle the update functionality of its entities. Rather than implementing update functionality in n application modules, I would like to build a common interface or service in Spring Boot.
The service would look like below:
My question is how to design a service/interface which can be used for the above scenario. Is there any API or tool which can help me achieve this? I don't want to write update code in every application module.
Thanks in advance.
Last year I was thinking about a concept similar to yours, but in the context of the Apache Camel framework. I didn't have enough time and motivation to do it, but your post encouraged me to give it a try - perhaps mostly because I've found your concept very similar to mine.
This is how I see it:
So basically I considered an environment with an application that might use N modules/plugins that enrich the application's features, e.g. a processing feature, etc. The application uses a module/plugin when it is available on the classpath - considering a Java background. When the module is not available, the application works without its functionality as if it was never there. Moreover, I wanted to implement it purely using framework capabilities - in this case Spring - without ugly hacks/ifs in the source code.
Three solutions come to my mind:
- using request/response interceptors and modifying them (@ControllerAdvice)
- using Spring AOP to intercept method invocations in *Service proxy classes
- using the Apache Camel framework to create routes for processing entities
Here's a brief overview of the POC that I implemented:
I've chosen Spring AOP because I've never used it before on my own.
a simple EmployeeService that simulates saving an employee - EmployeeEntity
3 processors that simulate Processing Modules that could be located outside the application. These three modules change the properties of EmployeeEntity in some way.
one Aspect that intercepts the "save" method in EmployeeService and handles the invocation of the available processors (see the sketch after this list)
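To make the shape of the POC concrete, here is a rough sketch of such an aspect under my own assumptions (EntityProcessor, EmployeeService and the package name are hypothetical; AOP must be enabled, e.g. via @EnableAspectJAutoProxy, and at least one processor bean is assumed to be present):

import java.util.List;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

// hypothetical plugin contract implemented by each processing module
interface EntityProcessor {
    void process(Object entity);
}

@Aspect
@Component
public class ProcessingAspect {

    // Spring injects every EntityProcessor found on the classpath;
    // if a module jar is absent, its processor simply isn't in the list
    private final List<EntityProcessor> processors;

    public ProcessingAspect(List<EntityProcessor> processors) {
        this.processors = processors;
    }

    @Around("execution(* com.example.EmployeeService.save(..))")
    public Object applyProcessors(ProceedingJoinPoint joinPoint) throws Throwable {
        Object entity = joinPoint.getArgs()[0];
        for (EntityProcessor processor : processors) {
            processor.process(entity); // each available module enriches the entity
        }
        return joinPoint.proceed();
    }
}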
In the next steps I'd like to externalize these processors so they become some kind of pluggable jar files.
I'm wondering if this is something that you wanted to achieve?
link to Spring AOP introduction here: https://docs.spring.io/spring/docs/5.0.5.RELEASE/spring-framework-reference/core.html#aop
link to repository of mentioned POC: https://github.com/bkpawlowski/spring-aop

Is there any reason not to co-locate small spring-boot starters with autoconfigures?

spring-boot uses "starters" to define a set of libraries dependencies a project may include. This maven module, a starter for Jersey, for example:
https://github.com/spring-projects/spring-boot/tree/master/spring-boot-starters/spring-boot-starter-jersey
spring-boot uses autoconfigures to instantiate and configure the classes in the starter module. The autoconfigure for Jersey:
https://github.com/spring-projects/spring-boot/blob/master/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/jersey/JerseyAutoConfiguration.java
Is there any reason not to co-locate new 3rd-party/private starters in the same module as the autoconfigures, instead of splitting them across separate modules like the example above?
It's a good question, and one that we have debated inconclusively in the Spring Boot engineering team, so there is no correct answer. Relevant points for discussion:
spring-boot-autoconfigure started out as quite a small library (pre 1.0). It has grown now to the extent that its pom is kind of ridiculously long (and nearly everything is optional).
Starters perform more than one function, in the sense that they can be used to activate bits of autoconfiguration, but also (and more importantly), they are an opinionated, curated set of dependencies that might (and usually does) go beyond the minimum needed to activate some autoconfiguration. This is basically Marten's point.
Spring Boot .NEXT (1.3 at this point) will more than likely unbundle spring-boot-autoconfigure into multiple modules (e.g. there might be one for Jetty, or maybe one for servlet containers, or something else).
When the unbundling happens we might see some starters becoming obsolete (who knows at this point), but I suspect they will still exist. Spring Cloud has a lot of unbundled autoconfiguration, but still has starters, for instance.
For small third party libraries I see no reason why starters and autoconfiguration shouldn't be co-located. I think the only justification for allowing this thread on stackoverflow is if it can be re-worded to make the question more obviously about that.

Maven 3 modules

I am about to start a project. We'll be using Spring MVC, RESTEasy, Spring Batch and Spring Security.
Does it make sense to have a module for each of these, e.g.:
Main_Project
---pom.xml
---Module_Project
---pom.xml
---Module_MVC
---pom.xml
---Module_Rest
---pom.xml
---Module_Batch
---pom.xml
---Module_Security
---pom.xml
I'm not sure what the best practice is.
Or, should I be using one module?
Thanks,
adi
At first sight it doesn't make sense.
Since you already know what technologies you need, I guess you already have an idea of how to organize your own code. And it is your own code organization that must drive your modules (not the frameworks you are using).
A general approach that can work (at least as a starting point to elaborate an architecture for a traditional web-based application):
one module with your model (i.e. database layer, DAOs, persistent beans, ...) - packaging jar
one module with your controllers (i.e. access to the database layer, transaction management, business logic, ...) - packaging jar
one module (front layer) with your view files, if any (JSPs, ...) - packaging war
one module (front layer) with your web service definitions (if any) - packaging war
Ignore the frameworks. Split your modules until you can answer "no" to these 2 questions for each module:
"Am I mixing view/controller logic with business logic?"
"Am I mixing features?"
Remember to declare the frameworks in the parent pom.xml so the modules can share exactly the same dependencies (see the parent pom sketch after the example below).
Do not order your modules by framework. Frameworks are dependencies that you add in the modules where you need them, maybe like this:
<project>
    <groupId>com.ourproject</groupId>
    <artifactId>myfeature</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    ...
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>3.2.2.RELEASE</version>
        </dependency>
    </dependencies>
</project>
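A hedged sketch of that parent pom, using dependencyManagement so each module can omit the version tag (coordinates are illustrative):
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.ourproject</groupId>
    <artifactId>parent</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>pom</packaging>
    <!-- child modules inherit these versions and can declare the dependency without one -->
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
                <version>3.2.2.RELEASE</version>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>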
There are many different approaches to organizing your project.
The approach I am currently using organizes the software along features. Each feature is then "subdivided" via good old Java packages. This way I pack the business logic, data access objects and specific resources, all belonging to a specific feature, into one module.
The upside is that you don't have to stretch yourself very hard looking for everything that belongs to a feature, allowing you to introduce or remove features as you wish.
The downside is that you'll have to pack objects that are used by all feature-modules (cross-cutting concerns or even common parent classes) into separate modules and add them as yet another dependency in each of your modules.
