Spring Boot - Change connection dynamically

I have a Spring Boot project with multiple databases for different years, and these databases have the same tables, so the only difference is the year (..., DB2016, DB2017). In the controller of the application I need to return data that belongs to "different" years. Moreover, in future years other databases will be created (e.g. in 2018 there is going to be a DB named "DB2018"). So my problem is how to switch the connection among databases without creating a new datasource and a new repository every new year.
In another question I posted (Spring Boot - Same repository and same entity for different databases) the answer was to create different datasources and different repositories for every existing database, but in this case I want to return data from the existing databases on the basis of the current year. More specifically:
SomeEntity.java
@Entity(name = "SOMETABLE")
public class SomeEntity implements Serializable {

    @Id
    @Column(name = "ID", nullable = false)
    private Integer id;

    @Column(name = "NAME")
    private String name;
}
SomeRepository.java
public interface SomeRepository extends PagingAndSortingRepository<SomeEntity, Integer> {

    @Query(nativeQuery = true, value = "SELECT * FROM SOMETABLE WHERE NAME = ?1")
    List<SomeEntity> findByName(String name);
}
SomeController.java
@RequestMapping(value = "/foo/{name}", method = RequestMethod.GET)
public ResponseEntity<List<SomeEntity>> findByName(@PathVariable("name") String name) {
    List<SomeEntity> list = autowiredRepo.findByName(name);
    return new ResponseEntity<List<SomeEntity>>(list, HttpStatus.OK);
}
application.properties
spring.datasource.url=jdbc:postgresql://localhost:5432/DB
spring.datasource.username=xxx
spring.datasource.password=xxx
So if the current year is 2017 I want something like this:
int currentYear = Calendar.getInstance().get(Calendar.YEAR);
int oldestDbYear = 2014;
List<SomeEntity> listToReturn = new LinkedList<SomeEntity>();
Properties props = new Properties();
// the method getProperties is a custom method to get properties from a file
String url = getProperties("application.properties", "spring.datasource.url");
props.setProperty("user", getProperties("application.properties", "spring.datasource.username"));
props.setProperty("password", getProperties("application.properties", "spring.datasource.password"));
for (int year = currentYear; year > oldestDbYear; year--) {
    // this is the connection that should be used by the autowired repository, but I don't know how to do this,
    // so that the repository uses a different connection for every year
    Connection conn = getConnection(url + year, props);
    List<SomeEntity> listOfSpecificYear = autowiredRepo.findByName(name);
    conn.close();
    listToReturn.addAll(listOfSpecificYear);
}
return listToReturn;
Hope everything is clear.

The thing that is probably most suitable to your needs here is Spring's AbstractRoutingDataSource. You do need to define multiple DataSources, but you will only need a single repository. Multiple data sources are not an issue here, as there is always a way to create the DataSource beans programmatically at run time and register them with the application context.
How it works is that you basically register a Map<Object, DataSource> inside your @Configuration class when creating your AbstractRoutingDataSource @Bean, and in this case the lookup key would be the year.
Then you need to create a class that extends AbstractRoutingDataSource and implement the determineCurrentLookupKey() method. Any time a database call is made, this method is called in the current context to look up which DataSource should be returned. In your case it sounds like you simply want to have the year as a @PathVariable in the URL and then, in the implementation of determineCurrentLookupKey(), grab that @PathVariable out of the URL (e.g. in your controller you have mappings like @GetMapping("/{year}/foo/bar/baz")):
HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder
        .getRequestAttributes()).getRequest();
Map<String, String> templateVariables = (Map<String, String>) request
        .getAttribute(HandlerMapping.URI_TEMPLATE_VARIABLES_ATTRIBUTE);
return templateVariables.get("year");
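For illustration, here is a rough sketch of how the routing could be wired up, assuming the DB<year> naming convention from the question. The class names (YearRoutingDataSource, YearRoutingDataSourceConfig) are made up for the example, and the DataSourceBuilder import path assumes Spring Boot 2; treat it as a sketch rather than a drop-in configuration.
import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import org.springframework.web.servlet.HandlerMapping;

public class YearRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    @SuppressWarnings("unchecked")
    protected Object determineCurrentLookupKey() {
        // The "year" path variable of the current request is the lookup key
        HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder
                .getRequestAttributes()).getRequest();
        Map<String, String> templateVariables = (Map<String, String>) request
                .getAttribute(HandlerMapping.URI_TEMPLATE_VARIABLES_ATTRIBUTE);
        return templateVariables.get("year");
    }
}

@Configuration
class YearRoutingDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        // One target DataSource per year; in a real setup these entries could be
        // built in a loop from the oldest year up to the current one.
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put("2016", yearDataSource("DB2016"));
        targetDataSources.put("2017", yearDataSource("DB2017"));

        YearRoutingDataSource routingDataSource = new YearRoutingDataSource();
        routingDataSource.setTargetDataSources(targetDataSources);
        routingDataSource.setDefaultTargetDataSource(targetDataSources.get("2017"));
        return routingDataSource;
    }

    private DataSource yearDataSource(String dbName) {
        return DataSourceBuilder.create()
                .url("jdbc:postgresql://localhost:5432/" + dbName)
                .username("xxx")
                .password("xxx")
                .build();
    }
}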
I used this approach when writing a testing tool for a product where there were many instances running on multiple different servers and I wanted a unified programming model from my @Controllers while still hitting the right database for the server/deployment combination in the URL. Worked like a charm.
The drawback, if you are using Hibernate, is that all connections will go through a single SessionFactory, which means you can't take advantage of Hibernate's second-level caching as I understand it, but I guess that depends on your needs.


How to access Spring properties from an entity?

I have a Spring app that pushes data to an S3 bucket.
public class Ebook implements Serializable {

    @Column(name = "cover_path", unique = true, nullable = true)
    private String coverPath;

    private String coverDownloadUrl;

    @Value("${aws.cloudfront.region}")
    private String awsCloudFrontDns;

    @PostLoad
    public void init() {
        // I want to access the property here
        System.out.println("PostConstruct");
        this.coverDownloadUrl = "https://" + awsCloudFrontDns + "/" + coverPath;
    }
}
When a piece of data is pushed, let's say my cover here, I get the key 1/test-folder/mycover.jpg, which is the important part of the future HTTP URL of the data.
When I read the data from the database, I enter the @PostLoad method and I want to construct the complete URL using the CloudFront value. This value changes frequently, so we don't want to store it permanently in the database.
How can I construct my full path just after reading the data from the database?
Is the only way to do this to use a service that updates the data after using the repository to read it? For findById that can be a good solution, but for reading lists or using other JPA methods this solution won't work, because each time I would have to create a dedicated service for the update.
It doesn't look good for an entity to depend on a Spring property.
How about an EntityListener?
@Component
public class EbookEntityListener {

    @Value("${aws.cloudfront.region}")
    private String awsCloudFrontDns;

    @PostLoad
    void postLoad(Ebook entity) {
        entity.updateDns(awsCloudFrontDns);
    }
}
I recommend trying this way :)
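For completeness, a hedged sketch of how the listener could be attached, assuming Ebook exposes the updateDns method used above. Marking coverDownloadUrl as @Transient is an assumption here, since the question states the value should not be stored in the database; also note that injecting @Value into the listener requires it to be a Spring-managed bean, which recent Spring Boot versions support via Hibernate's SpringBeanContainer integration.
import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EntityListeners;
import javax.persistence.Transient;

@Entity
@EntityListeners(EbookEntityListener.class)
public class Ebook implements Serializable {

    @Column(name = "cover_path", unique = true, nullable = true)
    private String coverPath;

    // Derived value, not persisted; recomputed by the listener on every load
    @Transient
    private String coverDownloadUrl;

    // Called by the listener's @PostLoad callback with the current DNS value
    public void updateDns(String awsCloudFrontDns) {
        this.coverDownloadUrl = "https://" + awsCloudFrontDns + "/" + coverPath;
    }
}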

Challenge Persisting Complex Entity using Spring Data JDBC

Considering the complexities involved in JPA, we are planning to use Spring Data JDBC for our entities for its simplicity. Below is the sample structure, and we have up to 6 child entities. We are able to successfully insert data into these entities with proper foreign key mappings.
Challenge: We have a workflow process outside of this application that periodically updates the requestStatus in the Request entity, and this is the only field that gets updated after the Request is created. With Spring Data JDBC, during the update it deletes all referenced entities and recreates (inserts) them again. This is a heavy operation considering the 6 child entities. Are there any workarounds or suggestions for how to handle these scenarios?
@Table("Request")
public class Request {

    @Id
    private String requestId; // generated in the BeforeSave listener
    private String requestStatus;

    @Column("requestId")
    private ChildEntity1 childEntity1;

    public void addChildEntity1(ChildEntity1 childEntityObj) {
        this.childEntity1 = childEntityObj;
    }
}

@Table("Child_Entity1")
public class ChildEntity1 {

    @Id
    private String entity1Id; // auto-increment on DB
    private String name;
    private String SSN;
    private String requestId;

    @MappedCollection(idColumn = "entity1Id", keyColumn = "entity2Id")
    private List<ChildEntity2> childEntity2List = new ArrayList<>();

    @MappedCollection(idColumn = "entity1Id", keyColumn = "entity3Id")
    private List<ChildEntity3> childEntity3List = new ArrayList<>();

    public void addChildEntity2(ChildEntity2 childEntity2Obj) {
        childEntity2List.add(childEntity2Obj);
    }

    public void addChildEntity3(ChildEntity3 childEntity3Obj) {
        childEntity3List.add(childEntity3Obj);
    }
}

@Table("Child_Entity2")
public class ChildEntity2 {

    @Id
    private String entity2Id; // auto-increment on DB
    private String partyTypeCode;
    private String requestId;
}

@Table("Child_Entity3")
public class ChildEntity3 {

    @Id
    private String entity3Id; // auto-increment on DB
    private String phoneCode;
    private String requestId;
}
@Test
public void createAndSaveRequest() {
    Request newRequest = createRequest(); // using a builder to build the object
    newRequest.addChildEntity1(createChildEntity1());
    newRequest.getChildEntity1().addChildEntity2(createChildEntity2());
    newRequest.getChildEntity1().addChildEntity3(createChildEntity3());
    requestRepository.save(newRequest);
}
The approach you describe in your comment, having a dedicated method performing exactly that update statement, is the right way to do this.
You should be aware, though, that this ignores optimistic locking.
So there is a risk that the following might happen
Thread/Session 1: reads an aggregate.
Thread/Session 2: updates a single field as per your question.
Thread/Session 1: writes the aggregate, possibly with other changes, overwriting the change made by Session 2.
To avoid this or similar problems you need to:
check that the version of the aggregate root is unchanged from when you loaded it, in order to guarantee that the method doesn't write conflicting changes;
increment the version in order to guarantee that nothing else overwrites the changes made in this method.
This might mean that you need two or more SQL statements, which probably means you have to fall back even further to a fully custom method where you implement this, probably using an injected JdbcTemplate.
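As a rough illustration, such a dedicated method could be a modifying query on the repository, using Spring Data JDBC's @Modifying/@Query support. The table and column names (request, request_status, version) and the existence of a @Version-backed version field on Request are assumptions based on the entities above, not something the original post defines.
import org.springframework.data.jdbc.repository.query.Modifying;
import org.springframework.data.jdbc.repository.query.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface RequestRepository extends CrudRepository<Request, String> {

    // Updates only the status and bumps the version, without touching child entities.
    // Returns true when exactly one row matched, i.e. the expected version was still current.
    @Modifying
    @Query("UPDATE request SET request_status = :status, version = version + 1 "
            + "WHERE request_id = :requestId AND version = :expectedVersion")
    boolean updateRequestStatus(@Param("requestId") String requestId,
                                @Param("status") String status,
                                @Param("expectedVersion") long expectedVersion);
}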

Pattern for accessing data outside of transaction

I have a Spring Boot app with Spring Data JPA, using Hibernate and MySQL as the data store.
I have 3 layers in my application:
API Service
Application Service
Domain Service (with Repository)
The role of the Application Service is to convert Hibernate-backed POJOs to DTOs given some business logic.
POJO
SchoolClass.java
@Column
Long id;

@Column
String name;

@OneToMany(fetch = FetchType.LAZY, mappedBy = "schoolClass")
List<Book> books;

@OneToMany(fetch = FetchType.LAZY, mappedBy = "schoolClass")
List<Student> students;

@OneToMany(fetch = FetchType.LAZY, mappedBy = "schoolClass")
List<Schedule> schedules;
Domain Service - My transaction boundary is at the Domain Service layer.
SchoolClassService.java
@Autowired
private SchoolClassRepository repository;

@Transactional(readOnly = true)
public SchoolClass getClassById(Long id) {
    return repository.findById(id).orElse(null);
}
Application Service
SchoolClassAppService.java
@Autowired
private SchoolClassService domainService;

public SchoolClassDto getClassById(Long id) {
    SchoolClass schoolClass = domainService.getClassById(id);
    // convert POJO to DTO
    SchoolClassDto schoolClassDto = ...;
    return schoolClassDto;
}
My problem is that at times the child entities on SchoolClass are empty when I try to access them in SchoolClassAppService. Not all of them: out of the three, two work fine but the third one is empty. I tried marking the child lists to be eagerly fetched, but apparently only two collections can be eagerly fetched before Hibernate starts throwing exceptions, and it also does not seem like good practice to always load all the objects. I do not get a LazyInitializationException; the list is just empty.
I have tried calling the getters on all the lists in the domain service method before returning, just to load all the data for the POJO, but that does not seem like a clean practice.
Are there any patterns available which keep the transaction boundaries as close to the persistence layer as possible while still making it viable to process the data after the transaction has been closed?
Not sure why your collections are sometimes empty; maybe that is just how the data is?
I created Blaze-Persistence Entity Views for exactly that use case. You essentially define DTOs for JPA entities as interfaces and apply them to a query. It supports mapping nested DTOs, collections, etc., essentially everything you'd expect, and on top of that it will improve your query performance, as it generates queries fetching just the data you actually require for the DTOs.
The entity views for your example could look like this:
@EntityView(SchoolClass.class)
interface SchoolClassDto {
    String getName();
    List<BookDto> getBooks();
}

@EntityView(Book.class)
interface BookDto {
    // whatever data you need from Book
}
Querying could look like this:
List<SchoolClassDto> dtos = entityViewManager.applySetting(
        EntityViewSetting.create(SchoolClassDto.class),
        criteriaBuilderFactory.create(em, SchoolClass.class)
).getResultList();
Just keep in mind that DTOs shouldn't just be copies of your entities but should be designed to fit your specific use case.

Multiple Repositories for the Same Entity in Spring Data Rest

Is it possible to publish two different repositories for the same JPA entity with Spring Data Rest?
I gave the two repositories different paths and rel names, but only one of the two is available as a REST endpoint.
The reason I have two repositories is that one of them is an excerpt, showing only the basic fields of an entity.
The terrible part is not only that you can have just one Spring Data REST repository (@RepositoryRestResource) per entity, but also that a regular JPA @Repository (like CrudRepository or PagingAndSortingRepository) will interact with the Spring Data REST one (as the key in the map is the entity itself).
I lost quite a few hours debugging the random loading of one or the other. I guess that if this is a hard limitation of Spring Data REST, at least an exception could be thrown if the key in the map is already present when trying to override the value.
The answer seems to be: there is only one repository possible per entity.
I ended up using @Subselect to create a second, immutable entity and bound that to a second JpaRepository annotated with @RestResource(exported = false), which also encourages a separation of concerns.
Employee Example
@Entity
@Table(name = "employee")
public class Employee {

    @Id
    Long id;
    String name;
    ...
}

@RestResource
public interface EmployeeRepository extends PagingAndSortingRepository<Employee, Long> {
}

@Entity
@Immutable
@Subselect(value = "select id, name, salary from employee")
public class VEmployeeSummary {

    @Id
    Long id;
    ...
}

@RestResource(exported = false)
public interface VEmployeeRepository extends JpaRepository<VEmployeeSummary, Long> {
}
Context
Two packages in the monolithic application had different requirements. One needed to expose the entities for the UI in a PagingAndSortingRepository, including CRUD functions. The other was for an aggregating backend report component, without paging but with sorting.
I know I could have filtered the results from the PagingAndSortingRepository after requesting Pageable.unpaged(), but I just wanted a basic JPA repository that returned a List for some filters.
So, this does not directly answer the question, but it may help solve the underlying issue.
You can only have one repository per entity; however, you can have multiple entities per table, and thus multiple repositories per table.
In a bit of code I wrote, I had to create two entities, one with an auto-generated id and another with a preset id, both pointing to the same table:
@Entity
@Table(name = "line_item")
public class LineItemWithAutoId {

    @Id
    @GeneratedValue(generator = "system-uuid")
    @GenericGenerator(name = "system-uuid", strategy = "uuid")
    private String id;
    ...
}

@Entity
@Table(name = "line_item")
public class LineItemWithPredefinedId {

    @Id
    private String id;
    ...
}
Then, I had a repository for each:
public interface LineItemWithoutId extends Repository<LineItemWithAutoId, String> {
    ...
}

public interface LineItemWithId extends Repository<LineItemWithPredefinedId, String> {
    ...
}
For the posted issue, you could have two entities: one would be the full entity, with getters and setters for everything; the other would be an entity with setters for everything but getters only for the fields you want to make public. Does this make sense?

How to explicitly state that an Entity is new (transient) in JPA?

I am using a Spring Data JpaRepository, with Hibernate as the JPA provider.
Normally, when working directly with the EntityManager, the decision between EntityManager#persist() and EntityManager#merge() is up to the programmer. With Spring Data repositories there is only save(). I do not want to discuss the pros and cons here. Let us consider the following simple base class:
@MappedSuperclass
public abstract class PersistableObject {

    @Id
    private String id;

    public PersistableObject() {
        this.id = UUID.randomUUID().toString();
    }

    // hashCode() and equals() are implemented based on equality of 'id'
}
Using this base class, the Spring Data repository cannot tell which entities are "new" (i.e. have not been saved to the DB yet), as the regular check for id == null clearly does not work in this case, because the UUIDs are eagerly assigned to ensure the correctness of equals() and hashCode(). So what the repository seems to do is always invoke EntityManager#merge(), which is clearly inefficient for transient entities.
The question is: how do I tell JPA (or Spring Data) that an entity is new, so that it uses EntityManager#persist() instead of #merge() where possible?
I was thinking about something along these lines (using JPA lifecycle callbacks):
@MappedSuperclass
public abstract class PersistableObject {

    @Transient
    private boolean isNew = true; // by default, treat the entity as new

    @PostLoad
    private void loaded() {
        // a loaded entity is never new
        this.isNew = false;
    }

    @PostPersist
    private void saved() {
        // a saved entity is not new anymore
        this.isNew = false;
    }

    // how do I get JPA (or Spring Data) to use this method?
    public boolean isNew() {
        return this.isNew;
    }

    // all other properties, constructor, hashCode() and equals() same as above
}
I'd like to add one more remark here. Even though it only works for Spring Data and not for general JPA, I think it's worth mentioning that Spring provides the Persistable<T> interface, which has two methods:
T getId();
boolean isNew();
By implementing this interface (e.g. as in the opening question post), Spring Data JpaRepositories will ask the entity itself whether it is new or not, which can be pretty handy in certain cases.
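For reference, a minimal sketch of the base class from the question implementing Persistable<String>; this is just one way to combine the lifecycle callbacks already proposed above with Spring Data's interface, not the only correct shape.
import java.util.UUID;
import javax.persistence.Id;
import javax.persistence.MappedSuperclass;
import javax.persistence.PostLoad;
import javax.persistence.PostPersist;
import javax.persistence.Transient;
import org.springframework.data.domain.Persistable;

@MappedSuperclass
public abstract class PersistableObject implements Persistable<String> {

    @Id
    private String id = UUID.randomUUID().toString();

    @Transient
    private boolean isNew = true; // a freshly constructed entity is new

    @PostLoad
    @PostPersist
    private void markNotNew() {
        // once loaded or persisted, the entity is no longer new
        this.isNew = false;
    }

    @Override
    public String getId() {
        return this.id;
    }

    @Override
    public boolean isNew() {
        // Spring Data calls this to decide between persist() and merge()
        return this.isNew;
    }
}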
Maybe you should add a @Version column:
@Version
private Long version;
In the case of a new entity it will be null.
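A minimal sketch of that approach; the Document entity is a made-up example, the important point being that the version field is a wrapper type, since Spring Data JPA treats a null version attribute as the marker for a new entity.
import java.util.UUID;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Document {

    @Id
    private String id = UUID.randomUUID().toString();

    // Wrapper type, not primitive: null signals "not persisted yet", so Spring Data
    // chooses persist() over merge() even though the id is assigned eagerly.
    @Version
    private Long version;
}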
