Disable Write operation in ReactiveCrudRepository - spring

In my case, for certain reasons, I need to read data from tables owned by a different service. In my application I want to disable writes on those tables. One solution I can think of is to extend ReactiveCrudRepository and create an interface that throws an exception on write/update operations.
Is there any other clean and maintainable approach for such use cases?

I see two options here:
First option
Create a read-only user (one that has only read permissions) in the DB and connect to the DB from your application as this user.
Second option
If the first option is not suitable in your case, then:
Create a generic repository that exposes only read methods, like this (extend it from Repository, as that interface does not declare any methods):
@NoRepositoryBean
public interface ReadOnlyReactiveRepository<T, ID> extends Repository<T, ID> {
    Flux<T> findAll();
    Mono<T> findById(ID id);
    // ... any other read methods you want
}
Note: Don't forget to annotate this generic repository with @NoRepositoryBean, which prevents Spring from creating repository proxies for interfaces that match the criteria of a repository interface but are not intended to be one.
Then extend your domain repository from that repository instead of from ReactiveCrudRepository, like this:
public interface YourDomainRepository extends ReadOnlyReactiveRepository<YourDomain, String> {
}
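For completeness, the asker's original idea (making write operations throw) can also be done generically with a JDK dynamic proxy, instead of hand-writing an exception-throwing subclass per repository. A minimal, framework-free sketch; the `UserRepository` interface and the method-name heuristic here are hypothetical, not part of Spring Data:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReadOnlyProxyDemo {

    // A hypothetical repository interface with read and write methods.
    interface UserRepository {
        String findById(long id);
        void save(long id, String name);
    }

    // Wrap any repository so that write-looking methods throw instead of executing.
    @SuppressWarnings("unchecked")
    static <T> T readOnly(Class<T> iface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            String name = method.getName();
            // Heuristic: treat save/delete/update/insert as writes.
            if (name.startsWith("save") || name.startsWith("delete")
                    || name.startsWith("update") || name.startsWith("insert")) {
                throw new UnsupportedOperationException("read-only repository: " + name);
            }
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        Map<Long, String> store = new ConcurrentHashMap<>();
        store.put(1L, "alice");
        UserRepository real = new UserRepository() {
            public String findById(long id) { return store.get(id); }
            public void save(long id, String name) { store.put(id, name); }
        };
        UserRepository ro = readOnly(UserRepository.class, real);
        System.out.println(ro.findById(1L));   // reads pass through
        try {
            ro.save(2L, "bob");                // writes are blocked
        } catch (UnsupportedOperationException e) {
            System.out.println("blocked: " + e.getMessage());
        }
    }
}
```

The read-only-interface option above is still cleaner, because it fails at compile time rather than at runtime.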

Related

Spring Data REST: Disallow create POST method but allow update methods PUT/PATCH

I have a rest repository
@RepositoryRestResource(..)
public interface RestEntityRepository extends MongoRepository<Entity, String> {
}
There I want to disable the create method. I know it's possible with the @RestResource(exported = false) annotation.
There are two kinds of write methods, save and insert, and as I understand it insert internally uses save. But if I put the annotation on all save/insert methods, update stops working; if I put it only on the insert methods, create still works.
Is there some way to do this?
We had a similar discussion within the team and ended up extending SimpleMongoRepository:
public class MyGenericMongoRepositoryImpl<T, ID extends Serializable> extends SimpleMongoRepository<T, ID> implements MyGenericMongoRepository<T, ID> {}
We had plenty of reasons to do so: custom batch sizing, etc.
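The same idea, framework-free: extend the stock implementation and override only the operations you want to forbid. The `BaseRepository` class below is a hypothetical stand-in for SimpleMongoRepository, just to show the override pattern:

```java
import java.util.HashMap;
import java.util.Map;

public class DisableInsertDemo {

    // Hypothetical stand-in for a stock repository implementation such as SimpleMongoRepository.
    static class BaseRepository {
        protected final Map<String, String> store = new HashMap<>();
        public String insert(String id, String value) { store.put(id, value); return value; }
        public String save(String id, String value) { store.put(id, value); return value; }
        public String findById(String id) { return store.get(id); }
    }

    // Override only insert(); save() and the read methods keep working.
    static class NoInsertRepository extends BaseRepository {
        @Override
        public String insert(String id, String value) {
            throw new UnsupportedOperationException("create is disabled; use save for updates");
        }
    }

    public static void main(String[] args) {
        NoInsertRepository repo = new NoInsertRepository();
        repo.save("1", "updated");        // updates still work
        System.out.println(repo.findById("1"));
        try {
            repo.insert("2", "new");      // creates are blocked
        } catch (UnsupportedOperationException e) {
            System.out.println("blocked");
        }
    }
}
```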

#PreAuthorize on JpaRepository

I am looking to implement role based security for my REST service. I am using spring-data-rest and have configured a JpaRepository as such:
@Repository
@RestResource(path = "changesets", rel = "changesets")
public interface ChangesetRepository extends JpaRepository<Changeset, Long> { }
I would like to attach a @PreAuthorize annotation to the inherited Page<T> findAll(Pageable pageable) method so that a GET requires a specific role.
Is there a way to do that? Do I need to provide a custom implementation or am I missing something obvious?
You can add your own parent class for all repositories (see how to do that in the documentation), then just add all the necessary annotations there, and your security restrictions will be applied to all child beans.
From an architecture point of view, most of the time a repository is not the right place to apply security restrictions. The service layer is a much more appropriate place, because security restrictions depend on your business actions, not on your data-loading logic. Consider the following example: you want to reuse the same repository in many services, and the security rules are not the same for those services. What do you do then?
If you are using Spring Data REST, your idea is reasonable. In your interface, just redeclare the same method and add the @PreAuthorize annotation to it. The code should look like this, though I didn't test it:
@Repository
@RestResource(path = "changesets", rel = "changesets")
public interface ChangesetRepository extends JpaRepository<Changeset, Long> {

    @PreAuthorize("hasRole('ROLE_ADMIN')")
    Page<Changeset> findAll(Pageable pageable);
}

Spring with Neo4j, GraphRepository<?> vs handmade interface

I found out that there is an interface called GraphRepository. I have a repository for users implementing a homemade interface that does its job, but I was wondering, shouldn't I implement GraphRepository instead? Even if it would be quite long to implement and some methods would be useless, I think it is a standard, and I have already re-coded a lot of the methods defined in this interface.
So should I write "YAGNI" code, or not respect the standard?
What is your advice ?
You don't need to actually implement GraphRepository, just extend it. The principle of Spring Data is that all the boilerplate CRUD code is taken care of for you (by proxying at startup time), so all you have to do is create an interface for your specific entity that extends GraphRepository, and then add only the specific methods you require.
For example, if I have an entity CustomerNode, then to get the standard CRUD methods I can create a new interface CustomerNodeRepository extends GraphRepository<CustomerNode, Long>. All the methods from GraphRepository (e.g. save, findAll, findOne, delete, deleteAll, etc.) are now accessible from CustomerNodeRepository and implemented by Spring Data Neo4j, without my having to write a single line of implementation code.
The pattern now lets you work on your repository-specific code (e.g. findByNameAndDateOfBirth) rather than the simple CRUD stuff.
The Spring Data package is very useful for repository interaction. It can reduce huge amounts of code (I have seen an 80%+ reduction in lines of code), and I would highly recommend using it.
Edit: implementing custom behavior
If you want to add your own custom behavior to a repository method, you use the concept of merging interfaces with a custom implementation. For example, let's say I want to create a method called findCustomerNodeBySomeStrangeCriteria, and to do this I actually want to link off to a relational database to perform the query.
First we define a separate, standalone interface that includes only our 'extra' method:
public interface CustomCustomerNodeRepository {
    List<CustomerNode> findCustomerNodeBySomeStrangeCriteria(Object strangeCriteria);
}
Next we update our normal interface to extend not only GraphRepository but our new custom one too:
public interface CustomerNodeRepository extends GraphRepository<CustomerNode,Long>, CustomCustomerNodeRepository {
}
The last piece is to actually implement our findCustomerNodeBySomeStrangeCriteria method:
public class CustomerNodeRepositoryImpl implements CustomCustomerNodeRepository {
    public List<CustomerNode> findCustomerNodeBySomeStrangeCriteria(Object criteria) {
        // implementation code
    }
}
So, there are a couple of points to note:
We create a separate interface to define any custom methods that have custom implementations (as distinct from Spring Data's derived "findBy..." methods).
Our CustomerNodeRepository interface (our 'main' interface) extends both GraphRepository and our 'custom' one.
We implement only the 'custom' method, in a class that implements only the custom interface.
The 'custom' implementation class must (by default) be named after our 'main' interface with an Impl suffix to be picked up by Spring Data (so in this case CustomerNodeRepositoryImpl).
Under the covers, Spring Data delivers a proxy implementation of CustomerNodeRepository as a merge of the auto-built GraphRepository and our class implementing CustomCustomerNodeRepository. The naming convention is what lets Spring Data pick the class up easily (the *Impl suffix it looks for can be overridden).
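What Spring Data does under the covers can be imitated with a JDK dynamic proxy that merges an auto-implemented fragment with a hand-written one. A hedged, framework-free sketch; all interfaces and the canned "auto" answer here are hypothetical stand-ins, not Spring Data internals:

```java
import java.lang.reflect.Proxy;
import java.util.List;

public class MergedProxyDemo {

    // Auto-implemented CRUD fragment (stands in for the GraphRepository-backed proxy).
    interface CrudFragment { String findOne(long id); }

    // Hand-written custom fragment (stands in for CustomCustomerNodeRepository).
    interface CustomFragment { List<String> findBySomeStrangeCriteria(Object criteria); }

    // The 'main' repository interface merges both fragments.
    interface CustomerNodeRepository extends CrudFragment, CustomFragment { }

    // Stands in for CustomerNodeRepositoryImpl.
    static class CustomFragmentImpl implements CustomFragment {
        public List<String> findBySomeStrangeCriteria(Object criteria) {
            return List.of("custom:" + criteria);
        }
    }

    static CustomerNodeRepository create() {
        CustomFragment custom = new CustomFragmentImpl();
        return (CustomerNodeRepository) Proxy.newProxyInstance(
                MergedProxyDemo.class.getClassLoader(),
                new Class<?>[]{CustomerNodeRepository.class},
                (proxy, method, args) -> {
                    // Route calls declared on the custom fragment to the hand-written class...
                    if (method.getDeclaringClass() == CustomFragment.class) {
                        return method.invoke(custom, args);
                    }
                    // ...and "auto-implement" everything else with a canned answer.
                    return "node-" + args[0];
                });
    }

    public static void main(String[] args) {
        CustomerNodeRepository repo = create();
        System.out.println(repo.findOne(42L));                   // auto-implemented path
        System.out.println(repo.findBySomeStrangeCriteria("x")); // routed to CustomFragmentImpl
    }
}
```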

Confused about Spring-Data DDD repository pattern

I don't know much about the DDD repository pattern, but its implementation in Spring is confusing me.
public interface PersonRepository extends JpaRepository<Person, Long> { … }
As the interface extends JpaRepository (or MongoDBRepository, ...), if you change from one DB to another, you have to change the interface as well.
For me, an interface is there to provide some abstraction, but here it isn't very abstract...
Do you know why Spring-Data works like that?
You are right: an interface is an abstraction of something that works the same for all implementing classes, from an outside point of view.
And that is exactly what happens here:
JpaRepository is a common view of all your JPA repositories (for all the different entities), while MongoDBRepository is the same for all MongoDB entities.
But JpaRepository and MongoDBRepository have nothing in common, except the things defined in their common super-interfaces:
org.springframework.data.repository.PagingAndSortingRepository
org.springframework.data.repository.Repository
So to me it looks normal.
If you use the classes that implement your repository, then code against PagingAndSortingRepository or Repository if you want to be able to switch from a JPA implementation to a document-based one (sorry, but I cannot imagine such a use case, anyway). And of course your repository implementation should implement the correct interface (JpaRepository, MongoDBRepository), depending on what it is.
The reasoning behind this is pretty clearly stated in this blog post http://blog.springsource.com/2011/02/10/getting-started-with-spring-data-jpa/.
Defining this interface serves two purposes: First, by extending JpaRepository we get a bunch of generic CRUD methods into our type that allows saving Accounts, deleting them and so on. Second, this will allow the Spring Data JPA repository infrastructure to scan the classpath for this interface and create a Spring bean for it.
If you do not trust sources so close to the source (pun intended) it might be a good idea to read this post as well http://www.brucephillips.name/blog/index.cfm/2011/3/25/Using-Spring-Data-JPA-To-Reduced-Data-Access-Coding.
What I did not need to code is an implementation of the PersonRepository interface. Spring will create an implementation of this interface and make a PersonRepository bean available to be autowired into my Service class. The PersonRepository bean will have all the standard CRUD methods (which will be transactional) and return Person objects or collection of Person objects. So by using Spring Data JPA, I've saved writing my own implementation class.
Until M2 of Spring Data we required users to extend JpaRepository for the following reasons:
The classpath-scanning infrastructure only picked up interfaces extending that interface, because one might use Spring Data JPA and Spring Data Mongo in parallel, have both of them point to the very same package, and then it would not be clear which store to create the proxy for. Since RC1, however, we simply leave that burden to the developer, as we think it is a rather exotic case, and the benefit of just using Repository, CrudRepository or the like outweighs the effort you have to take in the corner case just described. You can use exclude and include elements in the namespace to gain finer-grained control over this.
Until M2 we applied transactionality to the CRUD methods by redeclaring them and annotating them with @Transactional. This decision in turn was driven by the algorithm AnnotationTransactionAttributeSource uses to find transaction configuration, as we wanted to give the user the possibility to reconfigure transactions by simply redeclaring a CRUD method in the concrete repository interface and applying @Transactional to it. For RC1 we decided to implement a custom TransactionAttributeSource to be able to move the annotations back to the repository CRUD implementation.
Long story short, here's what it boils down to:
As of RC1 there's no need to extend the store-specific repository interface anymore, unless you want to:
Use List-based access to findAll(…) instead of the Iterable-based one in the more core repository interfaces (although you could simply redeclare the relevant methods in a common base interface to return Lists as well), or
Make use of JPA-specific methods like saveAndFlush(…) and so on.
Generally you are much more flexible regarding the exposure of CRUD methods since RC1, as you can extend just the Repository marker interface and selectively add the CRUD methods you want to expose. As the backing implementation will still implement all of the methods of PagingAndSortingRepository, we can still route the calls to that instance:
public interface MyBaseRepository<T, ID extends Serializable> extends Repository<T, ID> {
    List<T> findAll();
    T findOne(ID id);
}

public interface UserRepository extends MyBaseRepository<User, Long> {
    List<User> findByUsername(String username);
}
In that example we define MyBaseRepository to only expose findAll() and findOne(…) (which will be routed into the instance implementing the CRUD methods) and the concrete repository adding a finder method to the two CRUD ones.
For more details on that topic please consult the reference documentation.
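The routing trick described above rests on plain interface narrowing: the backing object implements the full CRUD surface, but callers only see what the declared interface exposes. A framework-free Java sketch of that idea, with hypothetical names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NarrowInterfaceDemo {

    // Narrow, read-only view (stands in for MyBaseRepository extending Repository).
    interface ReadOnlyRepo {
        String findOne(long id);
    }

    // The backing implementation still has the full CRUD surface
    // (stands in for SimpleJpaRepository implementing PagingAndSortingRepository).
    static class FullRepo implements ReadOnlyRepo {
        private final Map<Long, String> store = new ConcurrentHashMap<>();
        public String findOne(long id) { return store.get(id); }
        public void save(long id, String value) { store.put(id, value); }
        public void delete(long id) { store.remove(id); }
    }

    public static void main(String[] args) {
        FullRepo backing = new FullRepo();
        backing.save(1L, "alice");

        ReadOnlyRepo repo = backing;   // expose only the narrow view
        System.out.println(repo.findOne(1L));
        // repo.save(2L, "bob");       // would not compile: save is not on ReadOnlyRepo
    }
}
```

Calls through the narrow view are still served by the full implementation; the interface only limits what clients can invoke, which is exactly how the selectively-exposed CRUD methods work.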

LINQ-to-XYZ polymorphism?

I have a situation where a client is requiring that we implement our data access code to use either an Oracle or SQL server database based on a runtime configuration settings. The production environment uses Oracle but both dev and QA are running against a SQL Server instance.
(I don't have any control over this or have any background on why this is the case other than Oracle is their BI platform and dev wants to work with SQL Server.)
Their request is to use LINQ-to-SQL / LINQ-to-Oracle for all data access. They will need to support the application and do not have the knowledge to jump into EF yet (their requirement) - although I believe the same problem exists if we use EF.
While I can implement LINQ-to-XYZ classes for both databases so that I can connect to both, they don't share a common interface (other than DataContext), so I really can't code against an interface and plug the actual implementation in at runtime.
Any ideas how I should approach this?
UPDATE
After writing this post, I did a little investigating into EF, and it appears to me that the same problem exists if I use EF, which would be my long-term goal.
Just a quick thought: use the MEF framework and plug your DAL layers into it. Then based on the environment (dev, production, QA) you can switch among the various DAL layers (Oracle, SQL, etc.).
If you want to know about MEF, here is a quick intro.
Also, some time back I saw a Generic Data Access Framework by Joydip Kanjilal. You could have a look at that as well.
What you have to do is encapsulate the ORM data context in an interface of your own creation, like IDataContext.
Then share this interface between all DALs and implement it in each. How you plug it in is just your preference: using MEF as suggested, or an IoC container.
For the sake of closure on this topic, here is what I ended up doing:
I implemented a combination of the Unit of Work and Repository patterns. The Unit of Work class is what consuming code works with and exposes all of the operations that can be performed on my root entities. There is one UoW per root entity. The UoW makes use of a repository class via an interface. The actual implementation of the repository is dependent on the data access technology being used.
So, for instance, if I have a customer entity and I need to support retrieving and updating each record, I would have something like:
public interface ICustomerManager
{
    ICustomer GetCustomer(Guid customerId);
    void SaveCustomer(ICustomer customer);
}

public class CustomerManager : ICustomerManager
{
    public CustomerManager(ICustomerRepository repository)
    {
        Repository = repository;
    }

    public ICustomerRepository Repository { get; private set; }

    public ICustomer GetCustomer(Guid customerId)
    {
        return Repository.SingleOrDefault(c => c.ID == customerId);
    }

    public void SaveCustomer(ICustomer customer)
    {
        Repository.Save(customer);
    }
}

public interface ICustomerRepository : IQueryable<ICustomer>
{
    void Save(ICustomer customer);
}
I'm using an Inversion of Control framework to inject the ICustomerRepository implementation into the CustomerManager class at runtime. The implementation class will be in a separate assembly that can be swapped out as the data access technology is changed. All we are concerned about is that the repository implements each method using the contract defined above.
As a side note, to do this with Linq-to-SQL, I simply created a LinqCustomerRepository class that implements ICustomerRepository and added a partial class for the generated Customer entity class that implements ICustomer. Then I can return the L2S entity from the repository as the implementation of the ICustomer interface for the UoW and calling code to work with and they'll be none the wiser that the entity originated from L2S code.
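Stripped of the ORM details, the swap-at-runtime mechanism is just constructor injection against an interface, with a config value deciding which implementation gets constructed. A minimal Java sketch of that wiring (all names hypothetical; an IoC container would do the `fromConfig` part for you):

```java
public class SwappableRepositoryDemo {

    // The contract consuming code depends on.
    interface CustomerRepository {
        String get(String id);
    }

    // One implementation per backing store.
    static class SqlServerRepository implements CustomerRepository {
        public String get(String id) { return "sqlserver:" + id; }
    }

    static class OracleRepository implements CustomerRepository {
        public String get(String id) { return "oracle:" + id; }
    }

    // The manager never names a concrete repository class.
    static class CustomerManager {
        private final CustomerRepository repository;
        CustomerManager(CustomerRepository repository) { this.repository = repository; }
        String getCustomer(String id) { return repository.get(id); }
    }

    // Runtime configuration picks the implementation; callers are unchanged.
    static CustomerManager fromConfig(String dbProvider) {
        CustomerRepository repo = dbProvider.equals("oracle")
                ? new OracleRepository()
                : new SqlServerRepository();
        return new CustomerManager(repo);
    }

    public static void main(String[] args) {
        System.out.println(fromConfig("oracle").getCustomer("42"));
        System.out.println(fromConfig("sqlserver").getCustomer("42"));
    }
}
```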
