Spring Data REST: Infer owner from authenticated request

I have an @Entity that has a field owner:
@Entity
@EntityListeners({TalkListener.class})
public class Talk {
    private @ManyToOne(targetEntity = User.class, optional = false) User owner;
    // ...
}
Note: I have a listener in place that implements @PrePersist. From what I have read, it is discouraged to look up request parameters from there (maybe it is also impossible; I didn't research further in this direction). There is a section about events in the docs, but it also seems purely entity-related and doesn't take the context of the request into account.
I would like to infer the owner from the authenticated user that POSTs the request. What is the easiest way to do so? If it is necessary to override the POST handling, a snippet or linked example would be much appreciated.
Update 1:
@NeilMcGuigan suggested using Spring Data JPA's auditing features, which include a @CreatedBy annotation that would clearly solve the original question (and I would accept it as an answer).
Update 2:
A good tutorial about auditing with Spring Data JPA as well as an alternative of getting the principal within entity lifecycle events can be found here.
For educational purposes, what would be another way if I needed to access some value from the request to populate a value in my entity (say my current use case wasn't @CreatedBy but something different)?
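For reference, here is a minimal sketch of the auditing approach from Update 1; it assumes the Spring Data 2.x AuditorAware signature (returning an Optional), and the MyUserDetails principal type is hypothetical:
@Configuration
@EnableJpaAuditing
public class AuditConfig {

    // Resolves the current auditor from the Spring Security context
    // (additional checks, e.g. for anonymous authentication, omitted for brevity).
    @Bean
    public AuditorAware<User> auditorProvider() {
        return () -> Optional.ofNullable(SecurityContextHolder.getContext().getAuthentication())
                .filter(Authentication::isAuthenticated)
                .map(auth -> ((MyUserDetails) auth.getPrincipal()).getUser());
    }
}

@Entity
@EntityListeners({TalkListener.class, AuditingEntityListener.class})
public class Talk {

    // Populated automatically with the authenticated user on persist.
    @CreatedBy
    @ManyToOne(targetEntity = User.class, optional = false)
    private User owner;
    // ...
}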

Related

Spring Data problem - derived delete doesn't work

I have a Spring Boot application (based on spring-boot-starter-data-jpa). I have an absolute minimum of configuration going on, and only a single table and entity.
I'm using CrudRepository<MyEntity, Long> with a couple of findBy methods, which all work. I also have a derived deleteBy method, which doesn't work. The signature is simply:
public interface MyEntityRepository extends CrudRepository<MyEntity, Long> {

    Long deleteBySystemId(String systemId);

    // findBy methods left out
}
The entity is simple, too:
@Entity
@Table(name = "MyEntityTable")
public class MyEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "MyEntityPID")
    private Long MyEntityPID;

    @Column(name = "SystemId")
    private String systemId;

    @Column(name = "PersonIdentifier")
    private String personIdentifier;

    // Getters and setters here, also hashCode & equals.
}
The reason the deleteBy method isn't working is that it seems to only issue a SELECT statement to the database, which selects all the MyEntity rows that have a SystemId with the value I specify. Using the MySQL query log I have captured the actual SQL, issued it manually against the database, and verified that it returns a large number of rows.
So Spring, or rather Hibernate, is trying to select the rows it has to delete, but it never actually issues a DELETE FROM statement.
According to a note on Baeldung this select statement is normal, in the sense that Hibernate will first select all rows that it intends to delete, then issue delete statements for each of them.
Does anyone know why this derived deleteBy method is not working? I have @EnableTransactionManagement on my @Configuration, and the calling method is @Transactional. The MySQL log shows that Spring sets autocommit=0, so it seems transactions are properly enabled.
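For reference, a minimal sketch (service class name hypothetical) of the transactional calling side described above:
@Service
public class MyEntityService {

    private final MyEntityRepository repository;

    public MyEntityService(MyEntityRepository repository) {
        this.repository = repository;
    }

    // The derived delete runs inside this transaction; it first loads the
    // matching entities and then removes them one by one.
    @Transactional
    public Long removeBySystemId(String systemId) {
        return repository.deleteBySystemId(systemId);
    }
}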
I have worked around this issue by manually annotating the derived delete method this way:
public interface MyEntityRepository extends CrudRepository<MyEntity, Long> {

    @Modifying
    @Query("DELETE FROM MyEntity m WHERE m.systemId = :systemId")
    Long deleteBySystemId(@Param("systemId") String systemId);

    // findBy methods left out
}
This works, including transactions. But it shouldn't have to be this way; I shouldn't need to add that @Query annotation.
Here is a person who has the exact same problem as I do. However, the Spring developers were quick to wash their hands of it and write it off as a Hibernate problem, so there is no solution or explanation to be found there.
Oh, for reference I'm using Spring Boot 2.2.9.
tl;dr
It's all in the reference documentation. That's the way JPA works. (Me rubbing hands washing.)
Details
The two methods do two different things. Long deleteBySystemId(String systemId); loads the entities matching the given constraints and ends up issuing EntityManager.remove(…), which the persistence provider is allowed to delay until the transaction commits. I.e., code following that call is not guaranteed that the changes have already been synced to the database. That in turn is because JPA allows its implementations to do just that. Unfortunately, that's nothing Spring Data can fix on top of it. (More rubbing, more washing, plus a bit of soap.)
The reference documentation justifies that behavior with the need for the EntityManager (again a JPA abstraction, not something Spring Data has anything to do with) to trigger lifecycle events like @PreRemove etc., which users expect to fire.
The second method, declaring a modifying query manually, declares a query to be executed directly in the database, which means that entity lifecycle events do not fire because the entities do not get materialized upfront.
However the Spring developers were quick to wash their hands and write it off as a Hibernate problem so no solution or explanation to be found there.
There is a detailed explanation of why it works the way it works in the comments on the ticket. There are even solutions provided: workarounds and suggestions to bring this up with the part of the stack that has control over this behavior. (Shuts faucet, reaches for a towel.)

Spring data JPA Checkmarx vulnerability- Improper Resource Access Authorization for #Query annotation

We are currently working on a web application with the persistence layer implemented using Spring Data JPA, and it is working out really well for us. However, while scanning our code with Checkmarx, it complains about an "Improper Resource Access Authorization" error for all input parameters in the code snippet below. Not sure how to resolve it. Based on my understanding we tried the following approaches, but they didn't help either:
Whitelisting the input parameters using @Valid and @Pattern annotations
Securing the method using Spring Security's @Secured("ROLE_TEST") annotation
@Repository
public interface EmployeeAddressRepository extends JpaRepository<EmployeeAddress, Integer> {

    @Query("select empAdd from EmployeeAddress empAdd where empAdd.Employee.employeeId = ?1 and (endDate) ORDER BY empAdd.lastUpdateTimeStamp DESC")
    List<EmployeeAddress> findEmployeeAddressByEmployeeId(String employeeId, LocalDate date) throws PersistenceException;
}
Looking forward to any pointers to move in the right direction.
In the comments on one of the other answers, someone provided the answer: essentially, Checkmarx is unable to determine whether you are checking that the user/service has permission to execute this command.
A secure implementation would look like:
if (userCanPerformAction(employeeId)) {
    repository.findEmployeeAddressByEmployeeId(employeeId, date);
}
Checkmarx is not smart enough to know whether the code prior to the repository call has actually performed the required checks. So what you have to do is verify that you are doing the correct validation checks before executing findEmployeeAddressByEmployeeId. If you are, then follow your organization's process for marking the finding as a false positive.
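One hedged option (no guarantee that Checkmarx will recognize it) is to make the authorization explicit on a service method that wraps the repository call, using the method security already mentioned above; the role and class names here are hypothetical:
@Service
public class EmployeeAddressService {

    private final EmployeeAddressRepository repository;

    public EmployeeAddressService(EmployeeAddressRepository repository) {
        this.repository = repository;
    }

    // The authorization rule sits right next to the query execution.
    @PreAuthorize("hasRole('TEST')")
    public List<EmployeeAddress> findAddresses(String employeeId, LocalDate date) {
        return repository.findEmployeeAddressByEmployeeId(employeeId, date);
    }
}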
Perhaps Checkmarx doesn't support the ordinal (positional) parameter notation; try rewriting the query with a named parameter, like so:
@Query("select empAdd from EmployeeAddress empAdd where empAdd.Employee.employeeId = :employeeId and (endDate) ORDER BY empAdd.lastUpdateTimeStamp DESC")
List<EmployeeAddress> findEmployeeAddressByEmployeeId(@Param("employeeId") String employeeId, LocalDate date) throws PersistenceException;
where :employeeId is bound to the method's input parameter via @Param.

Spring - RESTful provide different entity representations

In advance: I'm not speaking of content negotiation. Let's assume I have a simple JPA entity; whether it is converted to a related DTO doesn't matter.
@Entity
public class User {
    ...
    private String email;
    private String password;
    ...
}
I've a RESTful controller with two different routes, a secured one and a public one.
@RestController
public class UserController {
    ...
    @GetMapping("/public")
    public User publicRoute() {
        return service.getLatestUser();
    }

    @Secured("...")
    @GetMapping("/private")
    public User privateRoute() {
        return service.getLatestUser();
    }
}
Both routes return the same entity, but in the first case a public representation (say, for a user profile) without sensitive fields like e-mail and password should be returned, while in the second case a private representation (say, for the owner itself) is required.
Is there any elegant way of doing this? I tried it on the JSON level with @JsonIgnore, but it didn't work for me. I also tried response objects, but that resulted in a lot of boilerplate code. Any suggestions?
See Also:
As recommended by Ananthapadmanabhan, there already exist some questions/resources about this topic:
Spring REST webservice serializing to multiple JSON formats
How do I serialize using two different getters based on JsonView in RestController?
You could have different DTO objects returned from the two endpoints instead of returning the same entity class; that way you can control which attributes are present in each response. Read here about the advantages of using a DTO.
Another approach would be to have custom serializers and deserializers for your endpoints; you can read here and here for more details.
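For illustration, here is a minimal sketch of the @JsonView approach mentioned in the linked questions (the Views marker interfaces and the name field are additions for the example): Spring MVC serializes only the fields belonging to the view declared on the handler method.
public class Views {
    public interface Public {}
    public interface Private extends Public {}
}

@Entity
public class User {
    @JsonView(Views.Public.class)
    private String name;
    @JsonView(Views.Private.class)
    private String email;
    @JsonView(Views.Private.class)
    private String password;
    // getters and setters
}

@RestController
public class UserController {
    ...
    // Only fields marked with Views.Public end up in the response.
    @JsonView(Views.Public.class)
    @GetMapping("/public")
    public User publicRoute() {
        return service.getLatestUser();
    }

    // Private extends Public, so all fields are written for the owner.
    @JsonView(Views.Private.class)
    @Secured("...")
    @GetMapping("/private")
    public User privateRoute() {
        return service.getLatestUser();
    }
}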
Ignore the DTO fields you don't want to send back from the controller. You can write your own method if your object is not final:
private User ignoreEmailAndPass(User user) {
    User usr = new User();
    usr.setName(user.getName()); // copy only the required fields
    return usr;
}
From the question comments:
In the database table you can have two roles, say User and Owner.
In the service, check whether it is a user or an owner and get the required details, then have two DTOs, one for each set of information you want to send; set the info and return it.
Or have a common DTO containing all the information, and when you want to send the user info just ignore the other info (a subset), otherwise send all of it.
Tell me what you think of this solution?

Relax Security for a Spring Data REST Projection

I have a User class, and I want to authorize access such that a user only gets to see what he is entitled to.
This was easily achievable using Spring Security in conjunction with Spring Data REST, where in the JPA repository I did the following:
public interface UserRepository extends JpaRepository<User, Integer> {

    @PreAuthorize("hasRole('LOGGED_IN') and principal.user.id == #id")
    User findOne(@Param("id") Integer id);
}
This way, when a user visits Spring Data REST's scaffolded URLs like
/users/{id}
/users/{id}/userPosts
only the user logged in with {id} gets to see them, and everyone else gets a 401, as I would have wanted.
My problem is that I have one projection which is a public view of each user. I am creating it using Spring Data REST projections as below, and I want it to be accessible for every {id}:
@Projection(name = "details", types = User.class)
public interface UserDetailsProjection {
    ..
}
So /users/{id1}?projection=details as well as /users/{id2}?projection=details should give 200 OK and show data even though the user is logged in as {id1}.
I began implementing this by marking the projection with @PreAuthorize("permitAll"), but that won't work since the repository has the stricter security check. Can we have this functionality where security can be relaxed for a projection?
I am using the latest Spring Data REST and Spring Security distributions.
Seems reasonable to add a custom controller for this use-case.
Please also consider:
Evaluate access in projections using @Value annotations
Add another entity for the same database data but with different field set for read-only operations, e.g. using inheritance (be careful with caching, etc.) - depends on your data storage type
Modify model to split User entity into two different entities (profile, account) since they seem to have different access and possibly even operations
You can also add a ResourceProcessor<UserSummaryProjection> to evaluate access programmatically and replace resource content (projection) with a DTO
Example of evaluating access in projections with @Value annotations:
@Projection(types = User.class, name = "summary")
public interface UserSummaryProjection {

    @Value("#{@userSecurity.canReadEmail(target) ? target.email : null}")
    String getEmail();
}
Adding Spring Security code to the data access layer is not a good idea. I would suggest adding the @PreAuthorize annotation to the controller/service method instead. Since you have a query parameter, ?projection=details, you can have a separate controller/service method for the details projection.
Add the following to your details projection method:
#RequestMapping("/url", params = {"projection"})
#PreAuthorize("hasRole('LOGGED_IN') and principal.user.id == #id")

Neo4j Spring data POC for social RESTful layer

Starting to work on a new project: a RESTful layer providing services for a social network platform.
Neo4j was my obvious choice for the main data store. I had the chance to work with Neo4j before, but without exploiting Spring Data's ability to map POJOs to nodes, which seems very convenient.
Goals:
The layer should provide support resembling the Facebook Graph API, which defines for each entity/object related properties & connections that can be referred to from the URL (see the FB Graph API).
If possible I want to avoid transfer objects that would be serialized to/from domain entities, and instead use my domain POJOs as the JSON transferred to/from the client.
Examples:
HTTP GET /profile/{id}/?fields=...&connections=...; the response will be a Profile object containing what was requested in the URL.
HTTP GET /profile/{id}/stories/?fields=..&connections=...&page=..&sort=...; the response will be a list of Story objects according to the request.
Relevant Versions:
Spring Framework 3.1.2
Spring Data Neo4j 2.1.0.RC3
Spring Data Mongodb 1.1.0.RC1
AspectJ 1.6.12
Jackson 1.8.5
To make it simple, we have Profile and Story nodes and a Role relationship between them.
public abstract class GraphEntity {

    @GraphId
    protected Long id;
}
Profile Node
@NodeEntity
@Configurable
public class Profile extends GraphEntity {

    // Profile fields
    private String firstName;
    private String lastName;

    // Profile connections
    @RelatedTo(type = "FOLLOW", direction = Direction.OUTGOING)
    private Set<Profile> followThem;

    @RelatedTo(type = "BOOKMARK", direction = Direction.OUTGOING)
    private Set<Story> bookmarks;

    @Query("START profile=node({self}) match profile-[r:ROLE]->story where r.role = FOUNDER and story.status = PUBLIC")
    private Iterable<Story> published;
}
Story Node
@NodeEntity
@Configurable
public class Story extends GraphEntity {

    // Story fields
    private String title;
    private StoryStatusEnum status = StoryStatusEnum.PRIVATE;

    // Story connections
    @RelatedToVia(type = "ROLE", elementClass = Role.class, direction = Direction.INCOMING)
    private Set<Role> roles;
}
Role Relationship
@RelationshipEntity(type = "ROLE")
public class Role extends GraphEntity {

    @StartNode
    private Profile profile;

    @EndNode
    private Story story;

    private StoryRoleEnum role;
}
At first I didn't use AspectJ support, but I find it very useful for my use-case because it generates a separation between the POJO and the actual node; therefore I can easily request properties/connections according to the request, and the Domain-Driven Design approach seems very nice.
Question 1 - AspectJ:
Let's say I want to define default fields for an object; these fields will be returned to the client whether or not they are requested in the URL. So I have tried the @Fetch annotation on these fields, but it seems it is not working when using AspectJ.
At the moment I do it this way:
public Profile(Node n) {
    setPersistentState(n);
    this.id = getId();
    this.firstName = getFirstName();
    this.lastName = getLastName();
}
Is this the right approach to achieve that? Should the @Fetch annotation be supported even when using AspectJ? I would be happy to get examples/blog posts about AspectJ + Neo4j; I found almost nothing.
Question 2 - Pagination:
I would like to support pagination when requesting a specific connection, for example /profile/{id}/stories/, if stories are related as below:
// inside profile node
@RelatedTo(type = "BOOKMARK", direction = Direction.OUTGOING)
private Set<Story> bookmarks;
or /profile/{id}/stories/, if stories are related as below:
// inside profile node
@Query("START profile=node({self}) match profile-[r:ROLE]->story where r.role = FOUNDER and story.status = PUBLIC")
private Iterable<Story> published;
Is pagination supported out of the box with either @Query, @RelatedTo, or @RelatedToVia, using the Pageable interface to retrieve a Page instead of Set/List/Iterable? The limit and the sorting should be dynamic, depending on the request from the client. I can achieve that using the Cypher Query DSL but would prefer to use the basics; other approaches will be accepted happily.
Question 3 - @Query with {self}:
A kind of silly question, but I can't help it :). It seems that when using @Query inside the node entity (using the {self} parameter), the return type must be Iterable, which makes sense.
Let's take the example of:
// inside profile node
@Query("START profile=node({self}) match profile-[r:ROLE]->story where r.role = FOUNDER and story.status = PUBLIC")
private Iterable<Story> published;
When the published connection is requested:
// retrieving the context profile
Profile profile = profileRepo.findOne(id);
// getting the published stories using AspectJ - will redirect to the backing node
Iterable<Story> published = profile.getPublished();
// set the result into the domain object - will throw a read-only exception because the type is Iterable
profile.setPublished(published);
Is there a workaround for that which does not involve creating another property marked @Transient inside Profile?
Question 4 - Recursive relations:
I am having some problems with transitive/recursive relations: when assigning a new Profile Role in a Story, the relationship entity Role contains the @EndNode story, which contains the roles connection, and one of them is the context role above, so it never ends :).
Is there a way to configure the Spring Data engine not to create these never-ending relations?
Question 5 - Transactions:
Maybe I should have mentioned it before, but I am using the REST server for the Neo4j DB. From previous reading I understand that there is no out-of-the-box support for transactions, unlike when using the embedded server.
I have the following code...
Profile newProfile = new Profile();
newProfile.getFollowThem().add(otherProfile);
newProfile.getBookmarks().add(otherStory);
newProfile.persist(); // or profileRepo.save(newProfile)
Will this run in a transaction when using the REST server? There are a few operations here; if one fails, will all fail?
Question 6 - Mongo + Neo4j:
I need to store data which doesn't have a relational nature, like feeds, comments, and messages. I thought about an integration with MongoDB to store these. Can I split domain POJO fields/connections across both MongoDB and Neo4j with cross-store support? Will it support AspectJ?
That is it for now... any comments regarding any of the approaches presented above are welcome. Thank you.
Starting to answer, by no means complete:
Perhaps upgrade to the .RELEASE versions?
Question 1
If you want to serialize AspectJ entities to JSON you have to exclude the internal fields generated by the advanced mapping (see this forum discussion).
When you use the advanced mapping, @Fetch is not necessary as the data is read through from the database anyway.
Question 2
For pagination of fields, you can try to use a Cypher query with @Query and LIMIT 100 SKIP 10 as fixed parameters. Otherwise you could employ a repository/template to fill a Collection field of your entity with the paged information.
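A hedged sketch of what such a repository-level query could look like (SDN 2.x-era Cypher and indexed {n} parameters assumed; the method name and the skip/limit parameters are hypothetical):
public interface StoryRepository extends GraphRepository<Story> {

    // Pages through a profile's published stories using skip/limit values
    // derived from the client's request.
    @Query("START profile=node({0}) MATCH profile-[r:ROLE]->story " +
           "WHERE r.role = 'FOUNDER' AND story.status = 'PUBLIC' " +
           "RETURN story SKIP {1} LIMIT {2}")
    List<Story> findPublished(Long profileId, int skip, int limit);
}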
Question 3
I don't think the return type of an @Query has to be Iterable; it should also work with other types (Collections or concrete types). What is the issue you run into?
Question 4
For creating recursive relationships, try to store the relationship objects themselves first and only then the node entities. Or use template.createRelationshipBetween(start, end, type, allowDuplicates) for creating the relationships.
Question 5
As you are using SDN over REST, it might not perform very well, because right now the underlying implementation uses the RestGraphDatabase for fine-grained operations and the advanced mapping issues very fine-grained calls. Is there any reason why you don't want to use the embedded mode? Against a REST server I would most certainly use the simple mapping and try to handle read operations mostly with Cypher.
With the REST API there is only one transaction per HTTP call; the only option for having larger transactions is to use the REST batch API.
There is pseudo-transaction support in the underlying rest-graph-database which batches calls issued within a "transaction" to be executed in one batch REST request. But those calls must not rely on read results during the transaction; those will only be populated after the transaction has finished. There were also some issues using this approach with SDN, so I disabled it for that (it is a config option/system property of the rest-graphdb).
Question 6
Right now, cross-store support for both MongoDB and Neo4j is only implemented against a JPA/relational store. We discussed having cross-store references between the Spring Data projects once but didn't follow up on it.
