Java 8 application layer and specific output transformation

I have a Gradle multi-project build with 2 subprojects, trying to emulate a hexagonal architecture:
rest-adapter
application layer
I don't want the application services to expose the domain models, and I don't want to force a specific output representation. So I would like the application services to consume 2 args (a command and something) and return a T; the client configures the service.
The rest adapter doesn't have access to the domain model, so I can't return the domain models and let the adapter create its own representation.
What about the something? I tried:
a signature <T> List<T> myUseCase(Command c, Function<MyDomainModel, T> fn). The application layer is the owner of the transformation functions (because the signature uses MyDomainModel) and exposes a dictionary of them, so the rest controller references one of the functions. It works, but I'm searching for a better, more elegant way if one exists (see the sketch after this list).
a signature <T> List<T> myUseCase(Command c, FnEnum fn), where each enum constant has an associated Function. I found this signature more elegant: the consumer picks the transformation it wants via an enum. But it doesn't work, because the generic method doesn't compile; the T cannot be resolved. So far I haven't found a way around this.
something with a Java 8 Consumer or Supplier or something else, but I failed to wrap my head around it.
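A minimal sketch of the first variant (names are illustrative, not my real types):

import java.util.List;
import java.util.function.Function;

public interface MyUseCase {
    <T> List<T> myUseCase(Command c, Function<MyDomainModel, T> fn);
}

// The application layer owns and exposes the transformations.
final class Transformations {
    // Placeholder transformation; a real one would map to a dedicated output type.
    public static final Function<MyDomainModel, String> AS_LABEL = model -> model.toString();

    private Transformations() {
    }
}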
I feel there's a more elegant solution for this kind of problem: a service which accepts a function that transforms the result and builds an output that the client provides.

I think what you need to implement is the so-called "Data Transformer" pattern.
Imagine that you have a use case that returns a certain domain object (for example, "User"), but you shouldn't expose the domain to clients, and you want every client to choose the format of the returned data.
So you define a data transformer interface for the domain object:
public interface UserDataTransformer {
    public void write(User user);
    public String read();
}
For every output format your clients need, you define a class implementing the interface. For example, to represent the User in XML format:
public class UserXMLDataTransformer implements UserDataTransformer {

    private String xmlUser;

    @Override
    public void write(User user) {
        this.xmlUser = xmlEncode(user);
    }

    private String xmlEncode(User user) {
        String xml = << transform user to xml format >>;
        return xml;
    }

    @Override
    public String read() {
        return this.xmlUser;
    }
}
Then you make your application service depend on the data transformer interface, injecting it through the constructor:
public class UserApplicationService {

    private UserDataTransformer userDataTransformer;

    public UserApplicationService(UserDataTransformer userDataTransformer) {
        this.userDataTransformer = userDataTransformer;
    }

    public void myUseCase(Command c) {
        User user = << call the business logic of the domain and construct the user object you wanna return >>;
        this.userDataTransformer.write(user);
    }
}
And finally, the client could look something like this:
public class XMLClient {

    public static void main(String[] args) {
        UserDataTransformer userDataTransformer = new UserXMLDataTransformer();
        UserApplicationService userService = new UserApplicationService(userDataTransformer);
        Command c = << data input needed by the use case >>;
        userService.myUseCase(c);
        String xmlUser = userDataTransformer.read();
        System.out.println(xmlUser);
    }
}
I've considered that the output is a String, but you could use generics to return any type you want.
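For instance, a sketch of a generic variant (my own extrapolation of the pattern, not code from the question):

public interface UserDataTransformer<T> {
    void write(User user);
    T read();
}

public class UserXmlDataTransformer implements UserDataTransformer<String> {

    private String xmlUser;

    @Override
    public void write(User user) {
        this.xmlUser = xmlEncode(user);
    }

    @Override
    public String read() {
        return this.xmlUser;
    }

    private String xmlEncode(User user) {
        return "<user>...</user>"; // placeholder for the real encoding
    }
}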
I haven't mentioned it, but this approach of injecting the transformer into the application service follows the "ports and adapters" pattern. The transformer interface would be the port, and every class implementing it would be an adapter for the desired format.
Also, this was just an example. You could use a dependency injection framework like Spring to create the component instances and wire them all together, ideally applying the composition root pattern.
Hope this example helped.

I feel there's a more elegant solution for this kind of problem: a service which accepts a function that transforms the result and builds an output that the client provides.
You are sending data across the boundary between the application and the REST layer (and presumably between the application and the REST consumer); it may be useful to think about messaging patterns.
For example, the application can define a service provider interface that defines a contract/protocol for accepting data from the application.
interface ResponseBuilder {...}
void myUseCase(Command c, ResponseBuilder builder)
The REST adapter provides an implementation of the ResponseBuilder that can take the inputs and generate some useful data structure from them.
The response builder semantics (the names of the functions in the interface) might be drawn from the domain model, but the arguments will normally be either primitives or other message types.
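For illustration, such a contract might look like this (the method names are invented for the example):

interface ResponseBuilder {
    void userId(String id);     // semantics drawn from the domain model...
    void userName(String name); // ...but the arguments are primitives or message types
}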
CQS would imply that a query should return a value; so in that case you might prefer something like
interface ResponseBuilder<T> {
    ...
    T build();
}
<T> T myUseCase(Command c, ResponseBuilder<T> builder)
If you look carefully, you'll see that there's no magic here; we've simply switched from having a direct coupling between the application and the adapter to having an indirect coupling with the contract.
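As an illustration, a REST-side implementation of the contract might look like this sketch (UserRepresentation and the builder methods continue the invented names from above):

class UserRepresentationBuilder implements ResponseBuilder<UserRepresentation> {

    private String id;
    private String name;

    public void userId(String id) {
        this.id = id;
    }

    public void userName(String name) {
        this.name = name;
    }

    @Override
    public UserRepresentation build() {
        return new UserRepresentation(id, name);
    }
}

The application walks its domain objects, feeds the builder primitive data through the contract's methods, and the use case finally returns builder.build() to the adapter.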
EDIT
My first solution is using a Function<MyDomainModel, T>, which is a bit different from your ResponseBuilder, but in the same vein.
It's almost dual to it. You'd probably be a little bit better off with a less restrictive signature on myUseCase
<T> List<T> myUseCase(Command c, Function<? super MyDomainModel, T> fn)
The dependency structure is essentially the same -- the only real difference is what the REST adapter is coupled to. If you think the domain model is stable, and the output representations are going to change a lot, then the function approach gives you the stable API.
I suspect that you will find, however, that the output representations stabilize long before the domain model does, in which case the ResponseBuilder approach will be the more stable choice.
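For example, from the REST adapter's side, the function variant might be invoked like this (the accessors on the domain model are stand-ins for whatever it actually exposes):

List<UserRepresentation> result = useCase.myUseCase(
        command,
        model -> new UserRepresentation(model.getId(), model.getName()));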


Normalize response body in spring boot

I have an entity class (code without annotations, for a simplified example):
class User {
    public String id;
    public String name;
}
Now I want to output this via an API, but I want to structure my response in a special format, like
{
    "data": {
        "id": 1,
        "name": "mars3142"
    }, // user object or another entity or list...
    "meta": ...,
    "error": ...
}
The meta and/or error data should only be visible in special situations (like RuntimeExceptions). Where is the best place to transform my entity results into this normalized response? Do I need to write a filter for that? Does anybody have sample code for that?
I would suggest implementing something like this:
public abstract class BaseResponse {
    // Meta data
    // Consider defining fields here needed for happy-path and error responses
    // Contains common tracking fields, e.g. correlationId, requestId
}

public class ErrorResponse extends BaseResponse {
    // Error fields
}

public class Response extends ErrorResponse {
    // Entity object in your case
}
You can build your response in the controller layer by mapping the result from the DAO into the structure suggested above. Error responses (in case of RuntimeExceptions) are typically built and returned in a @ControllerAdvice or similar.
Some patterns of exception handling are explained in Error Handling for REST with Spring | Baeldung.
Regarding your 2 questions:
Design: the proper place for this response mapping depends on the scope (all responses or just some) and on the existing components in your application's response layer.
Patterns and web-framework concepts: I would not use the response filters or interceptors of your web framework. Those should be used for cross-cutting concerns or for chained processes (e.g. security, authorization, enrichment, sanitation).
Instead, I would use the web framework's concepts and components that are responsible for response representations, like ResponseEntity (HTTP response representation), @ControllerAdvice (error handling), and HttpMessageConverter.
There are 3 ways you could "wrap" your objects into uniform JSON-response models:
Annotate the class with the custom @JsonRootName as data and, in special cases, add meta and/or error attributes (e.g. through embedding into a wrapper or using a mixin)
A custom JSON serializer, possibly extending BeanSerializer, that wraps this (and any other) class uniformly in your given outer structure
Modify Spring's MappingJackson2HttpMessageConverter to wrap any returned response object into the predefined JSON structure
You could iterate from the simplest (1.) to the most complex (3.). Some iteration code (like 2.) can be reused in the next (3.).
1. Use a Wrapper Class
The first is a rather simple start where you can implement the "normalization" within controller methods. You could, for example, put the object (serialized as data) into an otherwise empty wrapper class whose meta and error properties are only populated when needed.
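A minimal sketch of such a wrapper (the field names mirror the JSON structure above; this is an assumption, not the poster's code):

public class ResponseWrapper<T> {
    public T data;
    public Object meta;  // left null on the happy path
    public Object error; // only populated for error cases
}

A controller method would then return a ResponseWrapper<User> with only data set.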
2. Define a Custom Serializer
The second is pretty flexible and can be tested well in isolation (not even depending on Spring). It would allow to implement the complete object-wrapping in one place.
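A simplified sketch of the idea; note it uses StdSerializer on a wrapper type rather than the more involved BeanSerializer subclassing mentioned above, and all names are illustrative:

import java.io.IOException;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.fasterxml.jackson.databind.ser.std.StdSerializer;

@JsonSerialize(using = WrappingSerializer.class)
class Wrapped {
    final Object payload;

    Wrapped(Object payload) {
        this.payload = payload;
    }
}

public class WrappingSerializer extends StdSerializer<Wrapped> {

    public WrappingSerializer() {
        super(Wrapped.class);
    }

    @Override
    public void serialize(Wrapped value, JsonGenerator gen, SerializerProvider provider) throws IOException {
        gen.writeStartObject();
        gen.writeObjectField("data", value.payload); // delegates payload serialization to Jackson
        gen.writeEndObject();
    }
}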
3. Customize Spring's HTTP Message Converter
The third is similar to the second but requires some knowledge about Spring's message-converters and allows you to transform each response-object to a specific JSON-response using Jackson's ObjectMapper.
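If subclassing the converter feels too invasive, Spring also offers ResponseBodyAdvice, which hooks into the same message-converter pipeline; a minimal sketch (reusing the hypothetical ResponseWrapper from option 1):

import org.springframework.core.MethodParameter;
import org.springframework.http.MediaType;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.server.ServerHttpRequest;
import org.springframework.http.server.ServerHttpResponse;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.servlet.mvc.method.annotation.ResponseBodyAdvice;

@ControllerAdvice
public class WrappingResponseAdvice implements ResponseBodyAdvice<Object> {

    @Override
    public boolean supports(MethodParameter returnType, Class<? extends HttpMessageConverter<?>> converterType) {
        return true; // wrap every response; narrow this check as needed
    }

    @Override
    public Object beforeBodyWrite(Object body, MethodParameter returnType, MediaType selectedContentType,
                                  Class<? extends HttpMessageConverter<?>> selectedConverterType,
                                  ServerHttpRequest request, ServerHttpResponse response) {
        ResponseWrapper<Object> wrapper = new ResponseWrapper<>();
        wrapper.data = body;
        return wrapper;
    }
}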
Sample code can be found online, e.g. in Baeldung's Jackson and Spring tutorials or in Spring Framework Guru articles.
I used the solution from https://stackoverflow.com/a/72355056/708157 and transformed it a little bit.
Now my classes look like this:
public class BaseResponse<T> {
    boolean success;
    T data;
    Error error;
}

public class Error {
    ...
}
Every API response is now a ResponseEntity<BaseResponse<XYZ>>. This way, I can set up my default structure, and my classes stay loosely coupled, because I can use any class for T within my BaseResponse.
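For illustration, a controller method using this wrapper might look like the following sketch (UserService and findById are hypothetical, and the fields are assumed accessible or reachable via setters):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    private final UserService userService; // hypothetical collaborator

    public UserController(UserService userService) {
        this.userService = userService;
    }

    @GetMapping("/users/{id}")
    public ResponseEntity<BaseResponse<User>> getUser(@PathVariable String id) {
        BaseResponse<User> body = new BaseResponse<>();
        body.success = true;
        body.data = userService.findById(id); // hypothetical service call
        return ResponseEntity.ok(body);
    }
}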

Multiple writers for different types in the same Spring Batch step

I am writing a Spring Batch application with the following workflow:
Read some items of type A (using a FlatFileItemReader<A>).
Process an item, transforming it from A to B.
Write the processed items of type B (using a JdbcBatchItemWriter<B>).
Finally, I should call an external service (a RESTful API, but it could be a SimpleMailMessageItemWriter<A>) using data from the source type A.
How can I configure such a workflow?
So far, I have found the following workaround:
Configuring a CompositeItemWriter<B> which delegates to:
The actual ItemWriter<B>
A custom ItemWriter<B> implementation which converts B back to A and then writes an A
But this is a cumbersome solution because it forces me to either:
Duplicate processing logic: from A to B and back again.
Sneakily hide some attributes from the source object A inside B, polluting the domain model.
Note: since my custom item writer for A needs to invoke an external service, I would like to perform this operation after B has been successfully written.
Here are the relevant parts of the batch configuration code.
@Bean
public Step step(StepBuilderFactory steps, ItemReader<A> reader, ItemProcessor<A, B> processor, CompositeItemWriter<B> writer) {
    return steps.get("step")
            .<A, B>chunk(10)
            .reader(reader)
            .processor(processor)
            .writer(writer)
            .build();
}

@Bean
public CompositeItemWriter<B> writer(JdbcBatchItemWriter<B> jdbcBatchItemWriter, CustomItemWriter<B, A> customItemWriter) {
    return new CompositeItemWriterBuilder<B>()
            .delegates(jdbcBatchItemWriter, customItemWriter)
            .build();
}
For your use case, I would encapsulate A and B in a wrapper type, such as AB:
class AB {
    private A originalItem;
    private B transformedItem;
}
With that, you would have: ItemReader<A>, ItemProcessor<A, AB> and ItemWriter<AB>. The processor creates instances of AB in which it keeps a reference to the original item. The writer can then get access to both types and delegate to the JdbcBatchItemWriter<B> and SimpleMailMessageItemWriter<A> as needed, something like:
class ABItemWriter implements ItemWriter<AB> {

    private JdbcBatchItemWriter<B> jdbcBatchItemWriter;
    private SimpleMailMessageItemWriter mailMessageItemWriter;

    // constructor with delegates

    @Override
    public void write(List<? extends AB> items) throws Exception {
        jdbcBatchItemWriter.write(getBs(items));
        mailMessageItemWriter.write(getAs(items)); // this would not be called if the jdbc writer fails
    }
}
The methods getAs and getBs would extract items of type A/B from AB. Encapsulation for the win! BTW, a Java record is a good option for type AB.
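For completeness, the extraction helpers inside ABItemWriter might look like this (assuming AB exposes getters for both items):

private List<A> getAs(List<? extends AB> items) {
    return items.stream().map(AB::getOriginalItem).collect(Collectors.toList());
}

private List<B> getBs(List<? extends AB> items) {
    return items.stream().map(AB::getTransformedItem).collect(Collectors.toList());
}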

When do I use org.springframework.format.Printer interface?

When I use Spring MVC, I always use a Spring formatter to convert a String to a Java Bean, so I implement the org.springframework.format.Formatter interface.
Although I implement its two methods (print() and parse()), the print() method has never been used, because I only use parse() to turn a String into a Java Bean and never format a Java Bean as a String.
So, my question is: in what situation will print() be called? Or, when do I need to format a Java Bean to a String?
Formatters can be used for parsing and printing Dates, Timestamps, and general numeric data.
This means, as you already pointed out, that we can customize the way parsing is handled for a specific type by overriding the parse() method.
It also means that we can provide custom print behaviour by overriding the print() method.
So, what's print()'s use case? Let's look at an example.
Suppose you have a serial number type composed of 4 segments, e.g. 1111-2222-3333-4444.
public class SerialNumber {
    private int segment1;
    private int segment2;
    private int segment3;
    private int segment4;
    // Some getters and setters
}
Now, we'll implement a Formatter class that can parse an input like the one shown above. For brevity's sake I'll avoid try-catch blocks and validation logic:
public class SerialNumberFormatter implements Formatter<SerialNumber> {

    public SerialNumber parse(String input, Locale locale) {
        // Some code here to validate input
        // Split the input into segments
        String[] result = input.split("-");
        return new SerialNumber(Integer.parseInt(result[0]),
                                Integer.parseInt(result[1]),
                                Integer.parseInt(result[2]),
                                Integer.parseInt(result[3]));
    }

    // print() is added below when we complete the class
}
So, this way we managed to parse the Serial Number. But we already knew this. Now, let's suppose we'd like to show a SerialNumber stored in our DB to the end-user in the following format "SN: 1111-2222-3333-4444". We need to print the object somehow. That logic can be implemented inside the print() method.
Completing our Formatter class:
public class SerialNumberFormatter implements Formatter<SerialNumber> {

    public SerialNumber parse(String input, Locale locale) {
        // Some code here to validate input
        // Split the input into segments
        String[] result = input.split("-");
        return new SerialNumber(Integer.parseInt(result[0]),
                                Integer.parseInt(result[1]),
                                Integer.parseInt(result[2]),
                                Integer.parseInt(result[3]));
    }

    public String print(SerialNumber sn, Locale locale) {
        // Some code here to validate sn
        return String.format("SN: %d-%d-%d-%d", sn.getSegment1(),
                                                sn.getSegment2(),
                                                sn.getSegment3(),
                                                sn.getSegment4());
    }
}
To sum up, Formatter allows us to encapsulate logic related to parsing and printing that otherwise would need to be coded into a business or service layer.
Remember that a Formatter can also be composed of Parser and Printer classes, so a print() method may contain printing logic much more complicated than just building a String: it could encapsulate logic to format a POJO and write it to a file, or return JSON or XML as output.
Take it mainly as an abstraction that lets us decouple formatting logic from the rest of our code.
The Formatter interface is also a core piece of the Spring Framework. You can implement custom Formatters and then register them with the framework for different tasks. Take a look at this Baeldung article: https://www.baeldung.com/thymeleaf-in-spring-mvc
In Section 8, you'll see how a Formatter is implemented and then registered with Spring MVC to convert and show data in the front end.
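For instance, registering the custom formatter with Spring MVC via Java config might look like this sketch:

import org.springframework.context.annotation.Configuration;
import org.springframework.format.FormatterRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebConfig implements WebMvcConfigurer {

    @Override
    public void addFormatters(FormatterRegistry registry) {
        registry.addFormatter(new SerialNumberFormatter());
    }
}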

Multi-Column Search with Spring JPA Specifications

I want to create a multi-field search in a Spring Boot back-end. How do I do this with a Specification<T>?
Environment
Spring Boot
Hibernate
Gradle
IntelliJ
The UI in the front end is a jQuery DataTable. Each column allows a single string search term to be applied. Search terms across more than one column are joined by an AND.
I have the filters coming from the front end already getting populated into a Java object.
Step 1
Extend JpaSpecificationExecutor
public interface SomeRepository extends JpaRepository<Some, Long>, PagingAndSortingRepository<Some, Long>, JpaSpecificationExecutor {
Step 2
Create a new class SomeSpec
This is where I am lost as to what the code looks like and how it works.
Do I need a method for each column?
What are Root and CriteriaBuilder?
What else is required?
I am rather new to JPA, so while I don't need anyone to write the code for me, a detailed explanation would be good.
UPDATE
It appears QueryDSL is the easier and better way to approach this. I am using Gradle. Do I need to change my build.gradle from this?
If you don't want to use QueryDSL, you'll have to write your own specifications. First of all, you need to extend your repository from JpaSpecificationExecutor like you did. Make sure to add the generic though (JpaSpecificationExecutor<Some>).
After that, you'll have to create three specifications (one for each column); the Spring docs define these specifications as static methods in a class. Basically, creating a specification means subclassing Specification<Some>, which has only one method to implement: toPredicate(Root<Some>, CriteriaQuery<?>, CriteriaBuilder).
If you're using Java 8, you can use lambdas rather than anonymous inner classes, e.g.:
public class SomeSpecs {
    public static Specification<Some> withAddress(String address) {
        return (root, query, builder) -> {
            // ...
        };
    }
}
For the actual implementation, you can use Root to get to a specific node, e.g. root.get("address"). The CriteriaBuilder, on the other hand, is used to define the where clause, e.g. builder.equal(..., ...).
In your case you want something like this:
public class SomeSpecs {
    public static Specification<Some> withAddress(String address) {
        return (root, query, builder) -> builder.equal(root.get("address"), address);
    }
}
Or alternatively if you want to use a LIKE query, you could use:
public class SomeSpecs {
    public static Specification<Some> withAddress(String address) {
        return (root, query, builder) -> builder.like(root.get("address"), "%" + address + "%");
    }
}
Now you have to repeat this for the other fields you want to filter on. After that you'll have to use all specifications together (using and(), or(), ...). Then you can use the repository.findAll(Specification) method to query based on that specification, for example:
public List<Some> getSome(String address, String name, Date date) {
    return repository.findAll(where(withAddress(address))
            .and(withName(name))
            .and(withDate(date)));
}
You can use static imports to import withAddress(), withName() and withDate() to make it easier to read. The where() method can also be statically imported (comes from Specification.where()).
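For example (the package for SomeSpecs is illustrative):

import static org.springframework.data.jpa.domain.Specification.where;

import static com.example.specs.SomeSpecs.withAddress;
import static com.example.specs.SomeSpecs.withDate;
import static com.example.specs.SomeSpecs.withName;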
Be aware though that the method above may have to be tweaked, since you don't want to filter on the address field if it's null. You could handle this by returning null for that specification, for example:
public List<Some> getSome(String address, String name, Date date) {
    return repository.findAll(where(address == null ? null : withAddress(address))
            .and(name == null ? null : withName(name))
            .and(date == null ? null : withDate(date)));
}
You could consider using Spring Data's support for QueryDSL, as you would get quite a lot without having to write very much code, i.e. you would not actually have to write the specifications.
See here for an overview:
https://spring.io/blog/2011/04/26/advanced-spring-data-jpa-specifications-and-querydsl/
Although this approach is really convenient (you don't even have to write a single line of implementation code to get the queries executed) it has two drawbacks: first, the number of query methods might grow for larger applications because of - and that's the second point - the queries define a fixed set of criterias. To avoid these two drawbacks, wouldn't it be cool if you could come up with a set of atomic predicates that you could combine dynamically to build your query?
So essentially your repository becomes:
public interface SomeRepository extends JpaRepository<Some, Long>,
        PagingAndSortingRepository<Some, Long>, QueryDslPredicateExecutor<Some> {
}
You can also get request parameters automatically bound to a predicate in your Controller:
See here:
https://spring.io/blog/2015/09/04/what-s-new-in-spring-data-release-gosling#querydsl-web-support
So your controller would look like:
@Controller
class SomeController {

    private final SomeRepository repository;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    String index(Model model,
                 @QuerydslPredicate(root = Some.class) Predicate predicate,
                 Pageable pageable) {
        model.addAttribute("data", repository.findAll(predicate, pageable));
        return "index";
    }
}
So with the above in place, it is simply a case of enabling QueryDSL on your project, and the UI should then be able to filter, sort and page data by various combinations of criteria.

Using DI to cache a query for application lifetime

Using a DI container (in this case, Ninject), is it possible, or rather, wise, to cache a frequently used object for the entire application lifetime (or at least until it is refreshed)?
To cite an example, say I have a Template. There are many Template objects, but each user will inherit at least the lowest-level one. It is immutable and will never change without updating everything that connects to it (so it will only change on an administrator's demand, never based on user input). It seems foolish to keep querying the database over and over for information I know has not changed.
Would caching this be best done in my IoC container, or should I outsource it to something else?
I already store the ISessionFactory (NHibernate) as a singleton. But that's a little bit different, because it doesn't involve a query to the database, just the back end for opening and closing ISession objects.
So basically I would do something like this:

static class Immutable
{
    [Inject]
    public static IRepository<Template> TemplateRepository { get; set; }

    public static ITemplate Template { get; set; }

    public static ITemplate Initialize()
    {
        if (Template == null)
        {
            Template = TemplateRepository.Retrieve(1); // obviously better logic here.
        }
        return Template;
    }
}

class TemplateModule : Module
{
    public override void Load()
    {
        Bind<ITemplate>().ToMethod(ctx => Immutable.Initialize()).InSingletonScope();
    }
}
Is this a poor approach? And if so, can anyone recommend a more intelligent one?
I'd generally avoid staticness and null-checking in your code: create normal classes without singleton wiring by default and layer that aspect on top via the container. Ditto, remove the reliance on property injection; ctor injection is always better unless you have no choice.
i.e.:
class TemplateManager
{
    readonly IRepository<Template> _templateRepository;

    public TemplateManager(IRepository<Template> templateRepository)
    {
        _templateRepository = templateRepository;
    }

    public ITemplate LoadRoot()
    {
        return _templateRepository.Retrieve(1); // obviously better logic here.
    }
}
class TemplateModule : Module
{
    public override void Load()
    {
        Bind<ITemplate>().ToMethod(ctx => ctx.Kernel.Get<TemplateManager>().LoadRoot()).InSingletonScope();
    }
}
And then I'd question whether TemplateManager should become a Ninject provider or be inlined.
As for the actual question... The big question is: how and when do you want to control clearing the cache to force a reload, if you decided that the caching should be at session level rather than app level due to authorization influences on the template tree? In general, I'd say that should be the concern of an actual class rather than bound into your DI wiring, or hardwired into whether a class is a static class or a Singleton (as in the design pattern, not the Ninject scope).
My tendency would be to have a TemplateManager class with no static methods, and make that a singleton class in the container. However, to get the root template, consumers should get the TemplateManager injected (via ctor injection) but then say _templateManager.GetRootTemplate() to get the template.
That way, you can:
not have a reliance on fancy Ninject providers and/or tie yourself to your container
have no singleton cruft or static methods
have simple caching logic in the TemplateManager
vary the scoping of the manager without changing all the client code
have it clear that getting the template may or may not be a simple get operation
i.e., I'd manage it like so:

class TemplateManager
{
    readonly IRepository<Template> _templateRepository;

    public TemplateManager(IRepository<Template> templateRepository)
    {
        _templateRepository = templateRepository;
    }

    ITemplate _cachedRootTemplate;

    public ITemplate FetchRootTemplate()
    {
        if (_cachedRootTemplate == null)
            _cachedRootTemplate = LoadRoot();
        return _cachedRootTemplate;
    }

    ITemplate LoadRoot()
    {
        return _templateRepository.Retrieve(1); // obviously better logic here.
    }
}
register it like so:
class TemplateModule : Module
{
    public override void Load()
    {
        Bind<TemplateManager>().ToSelf().InSingletonScope();
    }
}
and then consume it like so:
class TemplateConsumer
{
    readonly TemplateManager _templateManager;

    public TemplateConsumer(TemplateManager templateManager)
    {
        _templateManager = templateManager;
    }

    void DoStuff()
    {
        var rootTemplate = _templateManager.FetchRootTemplate();
        // ...
    }
}

Wild speculation: I'd also consider not having a separate IRepository be resolvable in the container (and presumably having all sorts of ties into units of work). Instead, I'd have the TemplateRepository be a longer-lived thing not coupled to an ORM layer and unit of work. IOW, having a repository and a manager, neither of which does anything well defined on its own, isn't a good sign: the repository should not just be a Table Data Gateway; it should be able to be the place where an aggregate root such as Templates gets cached and collated together. But I'd have to know lots more about your code base before slinging out stuff like that without context!
