Is it possible to register two converters (@ReadingConverter-annotated classes) in Spring Boot/R2DBC so that I can have:
A service that uses a repository for an object and reads it with one converter, for example via R2dbcEntityTemplate
Another service that uses the other converter, for example via a ReactiveCrudRepository
The reason is that I encrypt/decrypt a field in the read/write converter, and that field is not needed by one of the services. Encryption/decryption seems to be expensive.
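For illustration, a rough sketch of the two read converters I have in mind (Person and its fields are made-up names):

```java
import io.r2dbc.spi.Row;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;

// Hypothetical entity; "ssn" is the field encrypted at rest.
record Person(Long id, String name, String ssn) {}

@ReadingConverter
class DecryptingPersonReadConverter implements Converter<Row, Person> {
    @Override
    public Person convert(Row row) {
        // decrypt(...) stands in for the expensive decryption call
        return new Person(row.get("id", Long.class),
                          row.get("name", String.class),
                          decrypt(row.get("ssn", String.class)));
    }

    private String decrypt(String value) {
        return value; // placeholder
    }
}

@ReadingConverter
class PlainPersonReadConverter implements Converter<Row, Person> {
    @Override
    public Person convert(Row row) {
        // Leaves the field as stored, skipping the decryption cost
        return new Person(row.get("id", Long.class),
                          row.get("name", String.class),
                          row.get("ssn", String.class));
    }
}
```

Both target Row -> Person, which is exactly why I can't simply register them both in the same converter registry.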
Thanks
Related
I was wondering how a Spring Converter implementation gets a transactional scope.
We have a Converter that converts a path placeholder (an entity id) in a REST controller to the entity itself. The entity is thus loaded from the database via Hibernate.
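A sketch of the kind of converter involved (Account and AccountRepository are illustrative names):

```java
import org.springframework.core.convert.converter.Converter;
import org.springframework.stereotype.Component;

@Component
class AccountIdToAccountConverter implements Converter<String, Account> {

    private final AccountRepository repository; // assumed Spring Data JPA repository

    AccountIdToAccountConverter(AccountRepository repository) {
        this.repository = repository;
    }

    @Override
    public Account convert(String source) {
        // The entity is loaded via Hibernate here, with no visible @Transactional anywhere
        return repository.findById(Long.valueOf(source)).orElse(null);
    }
}
```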
For the details:
Spring Boot 2.3.2
A REST controller endpoint that triggers the converter to expand a path placeholder (prior to executing the endpoint method body)
In case it matters, the controller method is not annotated @Transactional either
Question:
How (since we disabled OSIV) does this Converter get its transactional scope if neither the convert method nor the converter class is annotated @Transactional, and the convert method does no custom transaction handling?
If you mean how Spring Data repositories get a transaction, the answer is that all repository methods are transactional by default.
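A sketch of what that default looks like and how you could override it per method (Employee and EmployeeRepository are assumed examples):

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.transaction.annotation.Transactional;

// Employee is an assumed JPA entity.
interface EmployeeRepository extends JpaRepository<Employee, Long> {

    // The default implementation (SimpleJpaRepository) is annotated
    // @Transactional(readOnly = true) at class level, with modifying methods
    // such as save/delete overridden as @Transactional, so every call runs
    // in a transaction even without any annotations of your own.

    // Redeclaring a method lets you override the default attributes:
    @Transactional(timeout = 10)
    @Override
    <S extends Employee> S save(S entity);
}
```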
I am having trouble reading and writing data from/to MongoDB using Spring's MongoTemplate.
For starters, I have a data model that represents the object I am trying to retrieve from Mongo. It is annotated with @JsonSerialize and @JsonDeserialize to specify custom converters.
However, when I invoke mongoTemplate.findById() to fetch this object, my custom deserializer is not invoked at all and I get an HttpMessageNotWritableException.
Is there any other configuration that must be put in place to let mongo know that it needs to use my custom Jackson deserializer?
You can use this for reference: https://gist.github.com/letalvoj/978e6c975398693fc6843c5fe648416d
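The underlying issue is that MongoTemplate does not go through Jackson at all; it maps documents with Spring Data's MappingMongoConverter, so @JsonSerialize/@JsonDeserialize are ignored. A minimal sketch of registering a Spring Data converter instead (MyType and its field are placeholders):

```java
import java.util.List;
import org.bson.Document;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;
import org.springframework.data.mongodb.core.convert.MongoCustomConversions;

@Configuration
class MongoConversionConfig {

    // Placeholder for the domain object in question
    record MyType(String someField) {}

    @ReadingConverter
    static class MyTypeReadConverter implements Converter<Document, MyType> {
        @Override
        public MyType convert(Document source) {
            // Map straight from the raw BSON document, not through Jackson
            return new MyType(source.getString("someField"));
        }
    }

    @Bean
    MongoCustomConversions mongoCustomConversions() {
        // Spring Boot's auto-configuration feeds this bean into the
        // MappingMongoConverter that MongoTemplate uses for findById() etc.
        return new MongoCustomConversions(List.of(new MyTypeReadConverter()));
    }
}
```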
Last year, when I started learning Spring, I remember an article that explained that not all POJOs should be defined as beans. I'm planning to create a web app, say EmployeeMaintenance, with CRUD functionality. I will surely end up creating POJOs like Employee, EmployeeSupplementaryDetails, Address, etc. Should I make these beans and configure them in XML or with annotations? I'm sure my understanding of beans is not enough. I know that DB connections, services, etc. should definitely be declared as beans.
All I need to know is what parameters I should consider before making a POJO a Spring bean.
The general rule for making a Java class a Spring bean is to ask yourself whether you need to inject the object into some other object, or whether you want the object to be managed by Spring. The classes you listed as examples don't seem to be good candidates for being Spring beans: they represent model data and will be passed around as parameters of service methods.
Examples of typical CRUD application beans:
EmployeeService (you would want to inject it into a controller or another service)
EmployeeRepository / EmployeeDao
EmployeeController (it will be a bean managed by Spring's MVC framework)
etc.
Spring beans are the building blocks of your application. Let's suppose you need to create a web application for storing and managing employee records (creating, retrieving, updating, deleting). You will need a web controller that handles incoming requests for the various operations (EmployeeController). Good practice is that a controller doesn't implement any business logic; all it should do is delegate work to service beans. So you would need a bean like EmployeeService. The controller then asks the service to do some work (give me the employee list / remove John Smith from the database / change the salary for Ann Jackson / etc.). The service is therefore the controller's dependency (the service is injected into the controller). The service can have dependencies of its own (such as a repository object responsible for communicating with the data storage), and these are injected into the service in turn. Dependency management is Spring's core feature.
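A minimal sketch of that wiring (Employee and EmployeeRepository are assumed to exist):

```java
import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@Service
class EmployeeService {
    private final EmployeeRepository repository; // injected by Spring

    EmployeeService(EmployeeRepository repository) {
        this.repository = repository;
    }

    List<Employee> findAll() {
        return repository.findAll();
    }
}

@RestController
class EmployeeController {
    private final EmployeeService service; // injected by Spring

    EmployeeController(EmployeeService service) {
        this.service = service;
    }

    @GetMapping("/employees")
    List<Employee> list() {
        return service.findAll(); // the controller only delegates
    }
}
```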
Good practice in object-oriented programming is to have small classes, each responsible for one kind of action. Such objects are much easier to test and understand. The more classes you have, the harder it becomes to assemble the application from its blocks, so it's worth delegating that job to a framework like Spring. Without Spring you would need to create the controller class, then create the service inside it, then that service's dependencies, and so on. Spring does this for you: all you need to do is declare the dependencies, and they will be injected automatically. If you want to replace the implementation of your service with another (for example, swapping a repository that stores data in an XML file for one that uses a relational database), you just have to change the bean definition.
Regarding beans managed by Spring, a typical example is a database transaction manager (e.g. org.springframework.orm.jpa.JpaTransactionManager). If you define such a bean and declare which methods should be transactional, Spring will take care of transaction management (opening, committing, or rolling back transactions automatically).
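For example, with a transaction manager in place, marking a service method @Transactional is enough; a sketch (SalaryService is a made-up name, Employee and the repository are assumed from above):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class SalaryService {
    private final EmployeeRepository repository;

    SalaryService(EmployeeRepository repository) {
        this.repository = repository;
    }

    @Transactional // Spring opens, commits, or rolls back the transaction for us
    public void changeSalary(long employeeId, int newSalary) {
        Employee e = repository.findById(employeeId).orElseThrow();
        e.setSalary(newSalary); // change is flushed when the transaction commits
    }
}
```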
I have been looking at various examples of how to use Spring with REST. Our end target is a Spring HATEOAS/HAL setup.
I have seen two distinct methods for rendering REST within Spring:
Via @RestController within a controller
Via @RepositoryRestResource within a repository
What I am struggling to find is why you would use one over the other. When trying to implement HAL, which is best?
Our database backend is Neo4j.
OK, so the short story is that you want to use @RepositoryRestResource, since this creates a HATEOAS service with Spring Data JPA.
As you can see here, by adding this annotation and linking it to your POJO you get a fully functional HATEOAS service without having to implement the repository methods or the REST endpoints yourself.
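For example, something like this is all that is needed (Item is an illustrative entity):

```java
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

// Exposes GET/POST/PUT/DELETE under /items, with HAL links plus paging
// and sorting, without a single controller or service class of your own.
@RepositoryRestResource(collectionResourceRel = "items", path = "items")
interface ItemRepository extends PagingAndSortingRepository<Item, Long> {
}
```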
If you add @RestController instead, you have to implement each method that you want to expose on your own, and it does not export the result in a HATEOAS format.
There is a third (and fourth) option that you have not outlined, which is to use either @BasePathAwareController or @RepositoryRestController, depending on whether you are performing entity-specific actions or not.
@RepositoryRestResource is used to set options on the public repository interface; it will automatically create endpoints as appropriate based on the type of repository being extended (e.g. CrudRepository, PagingAndSortingRepository, etc.).
@BasePathAwareController and @RepositoryRestController are used when you want to create endpoints manually but still want to use the Spring Data REST configuration that you have set up.
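A sketch of the manual-endpoint case (Order and OrderRepository are illustrative names):

```java
import org.springframework.data.rest.webmvc.RepositoryRestController;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@RepositoryRestController // picks up Spring Data REST's base path and converters
class OrderTotalsController {

    private final OrderRepository repository;

    OrderTotalsController(OrderRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/orders/totals") // served under spring.data.rest.basePath
    @ResponseBody // required: @RepositoryRestController does not imply it
    ResponseEntity<Long> totals() {
        return ResponseEntity.ok(repository.count());
    }
}
```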
If you use @RestController, you will create a parallel set of endpoints with different configuration options - a different message converter, different error handlers, etc. - but they will happily coexist (and probably cause confusion).
Specific documentation can be found here.
Well, the above answers are correct in their context, but I'll give you a practical example.
In many scenarios, as part of an API, we need to provide endpoints for searching an entity based on certain criteria. With Spring Data JPA you don't even have to write the queries; you just create an interface with methods following Spring Data's naming conventions. To expose such APIs you would traditionally write a service layer that simply calls these repository methods, and finally controllers that expose the endpoints by calling the service layer.
What Spring Data REST does here is let you expose endpoints directly from such interfaces (repositories), typically GET calls for searching an entity, generating everything needed for the final endpoints behind the scenes. So if you are using @RepositoryRestResource, there is no need to write a service/controller layer at all, as the sketch below shows.
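For instance, a finder following the naming convention is exposed automatically under /employees/search (Employee is an illustrative entity):

```java
import java.util.List;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@RepositoryRestResource(path = "employees")
interface EmployeeSearchRepository extends CrudRepository<Employee, Long> {

    // No query, no service, no controller: Spring Data derives the query
    // from the method name and Spring Data REST exposes it as
    // GET /employees/search/findByLastName?name=Smith
    List<Employee> findByLastName(@Param("name") String lastName);
}
```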
On the other hand, @RestController is a controller that deals specifically with JSON data and REST semantics. In short: @Controller + @ResponseBody = @RestController.
Hope this helps.
See my working example and blog for the same:
http://sv-technical.blogspot.com/2015/11/spring-boot-and-repositoryrestresource.html
https://github.com/svermaji/Spring-boot-with-hibernate-no-controller
@RepositoryRestController overrides the default controllers that Spring Data REST generates for an exposed repository.
To take advantage of Spring Data REST's settings, message converters, exception handling, and more, use the @RepositoryRestController annotation instead of a standard Spring MVC @Controller or @RestController.
E.g. such controllers use the spring.data.rest.basePath Spring Boot setting as the base path for routing.
See Overriding Spring Data REST Response Handlers.
Be aware that you must add @ResponseBody yourself, as it is not implied by @RepositoryRestController.
If the repository is not exposed (marked as @RepositoryRestResource(exported = false)), use the @BasePathAwareController annotation instead, as in the sketch below.
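A sketch of that non-exported case (AuditController is an illustrative name):

```java
import org.springframework.data.rest.webmvc.BasePathAwareController;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@BasePathAwareController // still routed under spring.data.rest.basePath
class AuditController {

    @GetMapping("/audit/status")
    @ResponseBody // again, not implied by the class-level annotation
    String status() {
        return "OK";
    }
}
```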
Also be aware of bugs:
ControllerLinkBuilder does not take Spring Data REST's base path into account, and @RequestMapping shouldn't be used at the class/type level
and
Base path doesn't show up in HAL
Workaround to fix link: https://stackoverflow.com/a/51736503/548473
UPDATE: in the end I prefer not to use @RepositoryRestController due to the number of workarounds required.
The crux of my problem is how to type a property to expose the methods from two different interfaces, when the property will be injected with a Spring Batch proxy.
My design problem concerns how to inject a dependency that Spring has proxied for use in Spring Batch. The dependency is proxied by Spring because it has been scoped to the Spring Batch step. This is required because the dependency has values that must be injected from the Spring Batch job parameters. Scoping the bean to the step is the best way to make the dependency available in the ExecutionContext so that the job parameters can be injected. Any input on how to resolve this would be very much appreciated.
My specific situation is in implementing an AggregateItemReader in Spring Batch, as explained in the Spring Batch samples. You can find the source for the example AggregateItemReader here, along with its javadoc.
The difference is that my implementation intends to aggregate from a JdbcCursorItemReader. This means my AggregateItemReader must implement both the ItemReader and ItemStream interfaces, so that Spring Batch can use it properly via the open(ExecutionContext), update(ExecutionContext), and close() methods it needs to manage the ItemStream. In addition, the property in my AggregateItemReader that stores the reference to the ItemReader doing the actual data loading must provide access to these same methods, so that property is typed to JdbcCursorItemReader. This introduces the problem: the proxy Spring Batch creates is based on the dependency's interfaces, ItemReader and ItemStream, and a proxy of those types can't be injected into a property typed to JdbcCursorItemReader.
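For what it's worth, the direction I'm considering is typing the property to ItemStreamReader, the Spring Batch interface that extends both ItemReader and ItemStream (which JdbcCursorItemReader implements, so an interface-based proxy should satisfy it). A sketch with the aggregation logic elided:

```java
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.ItemStreamReader;

// Property typed to the combined interface rather than the concrete class.
class AggregateItemReader<T> implements ItemStreamReader<T> {

    private ItemStreamReader<T> itemReader; // e.g. a step-scoped JdbcCursorItemReader proxy

    public void setItemReader(ItemStreamReader<T> itemReader) {
        this.itemReader = itemReader;
    }

    @Override
    public void open(ExecutionContext executionContext) {
        itemReader.open(executionContext); // delegate stream management
    }

    @Override
    public void update(ExecutionContext executionContext) {
        itemReader.update(executionContext);
    }

    @Override
    public void close() {
        itemReader.close();
    }

    @Override
    public T read() throws Exception {
        // Aggregation logic elided; simply delegates here.
        return itemReader.read();
    }
}
```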