Tenant id filter for multi-tenancy in a single database - Spring

I'm using JdbcTemplate (not Hibernate) and running native SQL queries.
I need to attach/append the tenant id to every query that is executed.
For the multiple-database case I came across this: https://github.com/openMF/mifosx-admin/blob/master/src/main/java/org/mifosx/admin/domain/BaseDAO.java
Can someone suggest how to attach the tenant id dynamically, e.g. as a JDBC interceptor or a filter applied to queries?
Currently all queries look like select * from ... where tenant_id = test
Thanks.

Store the tenant id in a thread-local as soon as you can determine its value, perhaps with the help of a servlet filter.
If all such entities implement an interface that exposes a 'tenantId' property, then update that property on your entity objects, from the thread-local tenant id, in your BaseDAO class.
By way of example, you could create a singleton that keeps a thread-local copy of your tenant id, assuming it is an Integer. Here's one way to do this:
public enum ThreadState {
    INSTANCE;

    // One value per request-handling thread.
    private final ThreadLocal<Integer> tenantId = new ThreadLocal<>();

    public void setTenantId(Integer tid) {
        tenantId.set(tid);
    }

    public Integer getTenantId() {
        return tenantId.get();
    }
}
Then, in the place in your code where you determine the tenant id for the given request, stash it into the new thread-local as follows:
ThreadState.INSTANCE.setTenantId(tenantId);
And finally, in your DAO class, where you formulate a query and need the tenant id, write:
Integer tenantId = ThreadState.INSTANCE.getTenantId();
At this point you can use the tenantId when formulating your query, or set it on a new entity object prior to storing it.
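For illustration, here is a rough sketch of how the pieces could fit together, using Spring's OncePerRequestFilter and JdbcTemplate. The header name, the AccountDao class and the account table are assumptions for the example, not part of the question:

public class TenantFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain chain) throws ServletException, IOException {
        try {
            // Assumption: the tenant id arrives in a request header; adapt to your setup.
            String tenantHeader = request.getHeader("X-Tenant-Id");
            if (tenantHeader != null) {
                ThreadState.INSTANCE.setTenantId(Integer.valueOf(tenantHeader));
            }
            chain.doFilter(request, response);
        } finally {
            // Clear the value so it doesn't leak to the next request served by this pooled thread.
            ThreadState.INSTANCE.setTenantId(null);
        }
    }
}

public class AccountDao {

    private final JdbcTemplate jdbcTemplate;

    public AccountDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<Map<String, Object>> findAll() {
        // Append the tenant id to the query as a bind parameter.
        Integer tenantId = ThreadState.INSTANCE.getTenantId();
        return jdbcTemplate.queryForList("SELECT * FROM account WHERE tenant_id = ?", tenantId);
    }
}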

Related

How to access PostgreSQL RETURNING value in Spring Boot DAO?

I want to return the auto-generated id of an entity. PostgreSQL can automatically return a given column via RETURNING, but I'm having a hard time finding out how to retrieve this value in Spring Boot.
I would want something like:
public int createUser(User user) {
    String sql = "INSERT INTO user (name, surname) VALUES (?, ?) RETURNING id";
    return jdbcTemplate.update(sql,
            user.getName(),
            user.getSurname(),
            resultSet -> resultSet.getInt("id")
    );
}
I know it's straightforward in Hibernate: whether you use a Repository class or an EntityManager, the save method returns the saved entity, so you can just do:
int id = userRepository.save(user).getId();
Or is there a reason you want to persist it the way you do?
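If you want to stay on plain JdbcTemplate, one option is to execute the INSERT as a query, since PostgreSQL's RETURNING clause produces a result set. A minimal sketch, reusing the class and table from the question:

public int createUser(User user) {
    String sql = "INSERT INTO user (name, surname) VALUES (?, ?) RETURNING id";
    // queryForObject executes the statement and maps the single returned column.
    return jdbcTemplate.queryForObject(sql, Integer.class, user.getName(), user.getSurname());
}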

JpaRepository merge() method

I'm rewriting a big project with Spring Boot 2.2.6 and I've run into a problem.
In the old project (pure EJB), when a complex entity is updated, the code builds the entity from DTOs as follows:
public Entity dtoToEntity(DTO dto) {
    Entity entity = new Entity();
    entity.setId(dto.getID());
    // etc...
    // etc...
    entity.setSubEntity(dto.getSubEntity() != null ? new SubEntity(dto.getSubEntity().getId()) : null);
    // and so on
    return entity;
}
The important part is the one related to the sub-entity! After doing a mapping like that, the old project calls:
EntityManager.merge(entity);
I think that with the merge call, if a SubEntity with the specified id exists in the database with its other fields populated, those other fields remain valid and are not set to null, because they aren't declared in the mapping.
But with Spring Boot I'm using JpaRepository, and I don't think the same thing happens if I call:
jpaRepository.save(entity);
I think that with this call the other fields of the SubEntity with the specified id will be set to null!
Is that correct?
How can I solve this?
Thanks for your reply first of all!
You are right, I can't do something like that even with the EntityManager.merge() method! So let me try to explain better what I want to do:
Suppose I have a complex Entity, which has many nested Entities (which may have nested entities) as follows:
@Entity
public class Car {
    private String name;
    ....
    ....
    private Engine engine;   // nested entity
    private Chassis chassis; // nested entity
}
And:
@Entity
public class Engine {
    private String company;
    private Oil oil; // nested entity
    ....
}
Now suppose that in the database I have a Car with all relationships filled in (Engine, Chassis, Oil, etc.), and suppose I want to update the Car name from Ferrari to Fiat. With pure SQL I can simply write:
update Car c set c.name = "Fiat" where c.id = [id];
Now, if I use Spring Data JPA, to ensure that all nested entities (and their fields) are not set to null when I update my entity, I have to do:
Car car = carRepository.findById([id]);
car.setName("Fiat");
carRepository.save(car);
This way I will update the Car name, and I'm sure that all the other entities will remain set, because they are loaded by the findById() method.
My question, and my goal, is to know whether there is a way to do something like this:
Car car = new Car();
car.setId(1); // id of Ferrari car
car.setName("Fiat");
someRepositoryOrEntityManager.saveOrUpdate(car);
and preserve all the other fields and relations without loading them with a find method first (perhaps for performance reasons).
Did you give it a try, or is it just guesswork?
First of all, you don't need to embrace Spring Data repositories. You can inject an EntityManager if it helps in the migration process.
Secondly, look at the implementation of SimpleJpaRepository.save:
@Transactional
public <S extends T> S save(S entity) {
    if (entityInformation.isNew(entity)) {
        em.persist(entity);
        return entity;
    } else {
        return em.merge(entity);
    }
}
This means that JpaRepository.save calls em.merge if it concludes that the entity is not new.
The check whether the entity is new is in AbstractEntityInformation.isNew. It concludes that the entity is new only if its id is null (or 0 for primitive numeric types).
You assign the id from the DTO. If it is not null (or non-zero for primitives), there is no reason to believe that the new code will behave differently from the old one.
Answer for updated question
If you want to modify an entity without fetching it, I would suggest a JPQL or Criteria query.
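For example, with a Spring Data repository for Car (a sketch; the repository and method names are illustrative, not from the question), a JPQL bulk update touches only the listed columns and leaves the rest of the row and its relations untouched:

public interface CarRepository extends JpaRepository<Car, Long> {

    @Modifying
    @Transactional
    @Query("update Car c set c.name = :name where c.id = :id")
    int updateName(@Param("id") Long id, @Param("name") String name);
}

Usage would then be carRepository.updateName(1L, "Fiat"); with no prior findById call.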
Reference:
More about whether an entity is new or not, can be found here.

Spring Boot - Change connection dynamically

I have a Spring Boot project with multiple databases for different years, and these databases have the same tables, so the only difference is the year (..., DB2016, DB2017). In the application's controller I need to return data that belongs to "different" years. Moreover, in future years further databases will be created (e.g. in 2018 there's going to be a DB named "DB2018"). So my problem is how to switch the connection among databases without creating a new datasource and a new repository every year.
In another question I posted (Spring Boot - Same repository and same entity for different databases) the answer was to create different datasources and different repositories for every existing database, but in this case I want to return data from the existing databases on the basis of the current year. More specifically:
SomeEntity.java
#Entity(name = "SOMETABLE")
public class SomeEntity implements Serializable {
#Id
#Column(name="ID", nullable=false)
private Integer id;
#Column(name="NAME")
private String name;
}
SomeRepository.java
public interface SomeRepository extends PagingAndSortingRepository<SomeEntity, Integer> {

    @Query(nativeQuery = true, value = "SELECT * FROM SOMETABLE WHERE NAME = ?1")
    List<SomeEntity> findByName(String name);
}
SomeController.java
@RequestMapping(value = "/foo/{name}", method = RequestMethod.GET)
public ResponseEntity<List<SomeEntity>> findByName(@PathVariable("name") String name) {
    List<SomeEntity> list = autowiredRepo.findByName(name);
    return new ResponseEntity<List<SomeEntity>>(list, HttpStatus.OK);
}
application.properties
spring.datasource.url=jdbc:postgresql://localhost:5432/DB
spring.datasource.username=xxx
spring.datasource.password=xxx
So if the current year is 2017, I want something like this:
int currentYear = Calendar.getInstance().get(Calendar.YEAR);
int oldestDbYear = 2014;
List<SomeEntity> listToReturn = new LinkedList<SomeEntity>();
// the method getProperties is a custom method to read properties from a file
String url = getProperties("application.properties", "spring.datasource.url");
props.setProperty("user", getProperties("application.properties", "spring.datasource.username"));
props.setProperty("password", getProperties("application.properties", "spring.datasource.password"));
for (int year = currentYear; year > oldestDbYear; year--) {
    // this is the connection that should be used by the autowired repository, but I don't know
    // how to do this, so that the repository uses a different connection for every year
    Connection conn = getConnection(url + year, props);
    List<SomeEntity> listOfSpecificYear = autowiredRepo.findByName(name);
    conn.close();
    listToReturn.addAll(listOfSpecificYear);
}
return listToReturn;
Hope everything is clear.
The thing that is probably most suitable to your needs here is Spring's AbstractRoutingDataSource. You do need to define multiple DataSources, but you will only need a single repository. Multiple data sources are not an issue here, as there is always a way to create the DataSource beans programmatically at run time and register them with the application context.
How it works is that you register a Map<Object, DataSource> inside your @Configuration class when creating your AbstractRoutingDataSource @Bean, and in this case the lookup key would be the year.
Then you need to create a class that extends AbstractRoutingDataSource and implement the determineCurrentLookupKey() method. Any time a database call is made, this method is called in the current context to look up which DataSource should be used. In your case it sounds like you simply want to have the year as a @PathVariable in the URL and then, in the implementation of determineCurrentLookupKey(), grab that @PathVariable out of the URL (e.g. in your controller you have mappings like @GetMapping("/{year}/foo/bar/baz")):
HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder
        .getRequestAttributes()).getRequest();
Map<String, String> templateVariables =
        (Map<String, String>) request.getAttribute(HandlerMapping.URI_TEMPLATE_VARIABLES_ATTRIBUTE);
return templateVariables.get("year");
I used this approach when writing a testing tool for a product where there were many instances running on multiple different servers, and I wanted a unified programming model from my @Controllers while still hitting the right database for the server/deployment combination in the URL. Worked like a charm.
The drawback, if you are using Hibernate, is that all connections will go through a single SessionFactory, which as I understand it means you can't take advantage of Hibernate's second-level caching, but I guess that depends on your needs.
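A rough sketch of the wiring described above (the YearRoutingDataSource class, the bean setup and the DataSourceBuilder-based helper are illustrative assumptions, not something the question's project already contains):

public class YearRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        // Grab the {year} path variable of the current request, as described above.
        HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder
                .getRequestAttributes()).getRequest();
        Map<String, String> templateVariables = (Map<String, String>) request
                .getAttribute(HandlerMapping.URI_TEMPLATE_VARIABLES_ATTRIBUTE);
        return templateVariables.get("year");
    }
}

@Configuration
public class RoutingDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // One target DataSource per known year; more can be registered programmatically later.
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put("2016", dataSourceFor("DB2016"));
        targetDataSources.put("2017", dataSourceFor("DB2017"));

        YearRoutingDataSource routingDataSource = new YearRoutingDataSource();
        routingDataSource.setTargetDataSources(targetDataSources);
        routingDataSource.setDefaultTargetDataSource(dataSourceFor("DB2017"));
        return routingDataSource;
    }

    private DataSource dataSourceFor(String dbName) {
        return DataSourceBuilder.create()
                .url("jdbc:postgresql://localhost:5432/" + dbName)
                .username("xxx")
                .password("xxx")
                .build();
    }
}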

How do I migrate my JPA DAO to Spring Data with second level cache?

I have a bunch of JPA DAOs I'm looking to migrate to Spring Data JPA. Some of my DAOs have second-level / query caching set up.
I have a pattern where I retrieve only the ID in my queries, and then look up the entity using findById(). This way, only the IDs are duplicated across the different query caches, while the full entities live in the second-level cache.
Example:
@NamedQuery(name = "SystemUser.findByEmail",
    query = "SELECT u.id FROM SystemUser u WHERE email = :email"),
…
public SystemUser findByEmail(String email) {
    TypedQuery<Long> q = getEntityManager().createNamedQuery("SystemUser.findByEmail", Long.class);
    q.setParameter("email", email);
    q.setHint("org.hibernate.cacheable", true);
    q.setHint("org.hibernate.cacheRegion", "query.systemUser");
    List<Long> res = q.getResultList();
    if (res != null && res.size() > 0) {
        return findById(res.get(0));
    }
    return null;
}
I have several more findBy… methods, all written like this. It feels like a good way to keep cache memory consumption down.
I'm kind of new to the Spring Data JPA business, but I can't see how I would go about realizing this here. The @Cacheable annotations seem to deal only with query caches, which to me would duplicate the entities in each query cache?
Is there any way to do this with Spring Data? Pointers would be much appreciated.
In Spring Data JPA, just create a findByEmail method, and Spring Data JPA will either find your named query or derive one itself.
public interface SystemUserRepository extends CrudRepository<SystemUser, Long> {

    SystemUser findByEmail(String email);
}
That should be all you need to get the query executed and the desired result. Now with @QueryHints you can add the hints you are setting now:
public interface SystemUserRepository extends CrudRepository<SystemUser, Long> {

    @QueryHints({
        @QueryHint(name = "org.hibernate.cacheable", value = "true"),
        @QueryHint(name = "org.hibernate.cacheRegion", value = "query.systemUser") })
    SystemUser findByEmail(String email);
}
The result will be cached, and the user will still come from the second-level cache (if available, otherwise it is loaded and put there). Assuming, of course, your entity is @Cacheable.
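For reference, a minimal sketch of such an entity mapping (the entity cache region name is an assumption that mirrors the question's naming):

@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "entity.systemUser")
public class SystemUser {

    @Id
    @GeneratedValue
    private Long id;

    private String email;

    // getters and setters omitted
}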
A nice read on how the 2 different caches work (together) can be found here. A small snippet on how the query cache works.
The query cache looks conceptually like a hash map where the key is composed of the query text and the parameter values, and the value is a list of entity IDs that match the query.
If you want more complex logic (and really want to implement the optimization you had), you can always implement your own repository, as sketched below.
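A hedged sketch of such a custom fragment, reproducing the original "query the id, then load by id" pattern (the fragment and method names are illustrative; Spring Data picks up the Impl class by naming convention):

public interface SystemUserRepositoryCustom {
    SystemUser findByEmailViaIdCache(String email);
}

public class SystemUserRepositoryImpl implements SystemUserRepositoryCustom {

    @PersistenceContext
    private EntityManager em;

    @Override
    public SystemUser findByEmailViaIdCache(String email) {
        TypedQuery<Long> q = em.createNamedQuery("SystemUser.findByEmail", Long.class);
        q.setParameter("email", email);
        q.setHint("org.hibernate.cacheable", true);
        q.setHint("org.hibernate.cacheRegion", "query.systemUser");
        List<Long> ids = q.getResultList();
        // Only the ids live in the query cache; the entity itself is served from
        // the second-level cache when present, otherwise loaded and cached there.
        return ids.isEmpty() ? null : em.find(SystemUser.class, ids.get(0));
    }
}

public interface SystemUserRepository
        extends CrudRepository<SystemUser, Long>, SystemUserRepositoryCustom {
}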

Avoid N+1 with DTO mapping on Hibernate entities

In our RESTful application we decided to use DTOs to shield the Hibernate domain model, for several reasons.
We map Hibernate entities to DTOs and vice versa manually, using DTOMappers in the service layer.
Example in Service Layer:
@Transactional(readOnly = true)
public PersonDTO findPersonWithInvoicesById(Long id) {
    Person person = personRepository.findById(id);
    return PersonMapperDTOFactory.getInstance().toDTO(person);
}
The main concept could be explained like this:
JSON (Jackson parser) <-> Controller <-> Service Layer (uses Mapping Layer) <-> Repository
We agreed to retrieve associations by performing an HQL (or Criteria) query using a left join.
This is usually a performant way to retrieve relations and avoids the N+1 select issue.
However, it's still possible to hit the N+1 select issue when a developer mistakenly forgets the left join. The relations will still be fetched, because the PersonDTOMapper iterates over the Invoices of a Person to convert them to InvoiceDTOs. The data is fetched anyway because the DTOMapper runs while a Hibernate Session is still active (managed by Spring).
Is there some way to make the Hibernate Session 'not active' in our DTOMappers? We would then face a LazyInitializationException, which should signal to the developer that he didn't fetch some data the way he should have.
I've read about @Transactional(propagation = Propagation.NOT_SUPPORTED), which suspends the transaction. However, I don't know whether it was intended for such purposes.
What is a clean solution to achieve this? Alternatives are also very welcome!
Usually I use the mapper in the controller layer. From my perspective, the service layer manages the application's business logic; DTOs are very useful if you want to represent data to the external world in a different way. This way you may get the lazy initialization exception you are looking for (see the sketch below).
I have one more reason to prefer this solution: just imagine you need to invoke a public method from inside another public method in the service class: in that case you might need to call the mapper several times.
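A sketch of that arrangement, reusing the names from the question (the controller wiring is an assumption, and the service now returns the entity instead of the DTO): the mapper runs after the @Transactional service method has returned, so touching an unfetched association fails fast.

@RestController
public class PersonController {

    private final PersonService personService;

    public PersonController(PersonService personService) {
        this.personService = personService;
    }

    @GetMapping("/persons/{id}")
    public PersonDTO findPersonWithInvoicesById(@PathVariable Long id) {
        Person person = personService.findPersonWithInvoicesById(id);
        // Note: with Spring Boot, spring.jpa.open-in-view must be false for the Session
        // to really be closed here; only then does an unfetched Invoices association
        // throw a LazyInitializationException instead of triggering extra selects.
        return PersonMapperDTOFactory.getInstance().toDTO(person);
    }
}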
If you are using Hibernate, there are specific ways to determine whether an associated object has been lazy-loaded yet.
For example, let's say you have an entity class Foo that contains a @ManyToOne 'foreign' association to entity class Bar, represented by a field in Foo called bar.
In your DTO mapping code you can check whether the associated bar has already been loaded using the following code:
if (!(bar instanceof HibernateProxy) ||
        !((HibernateProxy) bar).getHibernateLazyInitializer().isUninitialized()) {
    // bar has already been loaded, so we can
    // recursively map a BarDTO for the associated Bar object
}
The simplest solution to achieve what you want is to clear the entity manager after querying and before invoking the DTO mapper. That way the objects will be detached, and access to uninitialized associations will trigger a LazyInitializationException instead.
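Applied to the service method from the question, that could look roughly like this (injecting the EntityManager alongside the repository is an assumption):

@PersistenceContext
private EntityManager entityManager;

@Transactional(readOnly = true)
public PersonDTO findPersonWithInvoicesById(Long id) {
    Person person = personRepository.findById(id);
    // Detach everything loaded so far; any association the mapper touches that was
    // not fetched up front now fails with a LazyInitializationException instead of
    // silently issuing extra selects.
    entityManager.clear();
    return PersonMapperDTOFactory.getInstance().toDTO(person);
}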
I felt your pain as well, which drove me to develop Blaze-Persistence Entity Views, which lets you define DTOs as interfaces and map them to the entity model, using the attribute name as the default mapping, allowing very simple-looking mappings.
Here is a little example:
@Entity
class Person {
    @Id Long id;
    String name;
    String lastName;
    String address;
    String city;
    String zipCode;
}

@EntityView(Person.class)
interface PersonDTO {
    @IdMapping Long getId();
    String getName();
}
Querying would be as simple as
@Transactional(readOnly = true)
public PersonDTO findPersonWithInvoicesById(Long id) {
    return personRepository.findById(id);
}

interface PersonRepository extends EntityViewRepository<PersonDTO, Long> {
    PersonDTO findById(Long id);
}
Since you seem to be using Spring Data, you will enjoy the Spring Data integration.
