I have a Spring app that pushes data to an S3 bucket.
@Entity
public class Ebook implements Serializable {

    @Column(name = "cover_path", unique = true, nullable = true)
    private String coverPath;

    private String coverDownloadUrl;

    @Value("${aws.cloudfront.region}")
    private String awsCloudFrontDns;

    @PostLoad
    public void init() {
        // I want to access the property here
        System.out.println("PostConstruct");
        coverDownloadUrl = "https://" + awsCloudFrontDns + "/" + coverPath;
    }
}
When data is pushed, let's say my cover here, I get the key 1/test-folder/mycover.jpg, which is the important part of the future HTTP URL of the data.
When I read the data from the database, I enter the @PostLoad method and I want to construct the complete URL using the CloudFront value. This value changes frequently, so we don't want to hard-code it in the database.
How can I construct my full path just after reading the data from the database?
Is the only way to do this to use a service that updates the data after reading it through the repository? For reading by id that could be a good solution, but for reading lists or using other JPA methods this solution won't work, because each time I would have to create a dedicated service for the update.
It doesn't look good for an entity to depend on a property.
How about an EntityListener?
@Component
public class EbookEntityListener {

    @Value("${aws.cloudfront.region}")
    private String awsCloudFrontDns;

    @PostLoad
    void postLoad(Ebook entity) {
        entity.updateDns(awsCloudFrontDns);
    }
}
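For this to work, the listener also has to be registered on the entity, and the entity needs a method the listener can call. A minimal sketch, assuming the derived URL is kept in a @Transient field and that your setup lets Hibernate obtain listener instances from the Spring context (otherwise the @Value injection in the listener won't happen):

@Entity
@EntityListeners(EbookEntityListener.class)
public class Ebook implements Serializable {

    @Column(name = "cover_path", unique = true, nullable = true)
    private String coverPath;

    @Transient // derived at load time, never persisted
    private String coverDownloadUrl;

    // called by EbookEntityListener after the entity has been loaded
    public void updateDns(String awsCloudFrontDns) {
        this.coverDownloadUrl = "https://" + awsCloudFrontDns + "/" + coverPath;
    }
}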
I recommend trying this way :)
I need to consume this API: https://api.punkapi.com/v2/beers, and after consuming it I have to store it in the database, but only with the following fields: internal id, name, description, and the mean value of the temperature. Any ideas or advice?
The simplest approach would be to have your model contain only those attributes, so that Spring only deserializes them from JSON to objects. Something like the following:
public class YourModel {
    private long id;
    private String name;
    private String description;
}
Then in your Service you would have:
ResponseEntity<YourModel> response = restTemplate.getForEntity(url, YourModel.class);
You can then either save YourModel directly to the database (first you need to add some @Annotations if you want to rely on JPA), or you may build another model better suited to your use case.
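As a rough sketch of that persistable model, assuming JPA and Jackson are on the classpath (the Beer class name and the meanTemperature field are illustrative, and the mean itself would be computed in your service from whatever temperature values the API response exposes, which are not modelled here):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

@Entity
@JsonIgnoreProperties(ignoreUnknown = true) // skip the API fields you don't map
public class Beer {

    @Id
    private long id;                 // internal id coming from the API

    private String name;

    @Column(length = 4000)           // descriptions can be fairly long
    private String description;

    private Double meanTemperature;  // computed and set in the service layer

    // getters and setters omitted
}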
Considering the complexities involved in JPA, we are planning to use Spring Data JDBC for our entities because of its simplicity. Below is the sample structure; we have up to 6 child entities. We are able to successfully insert data into these entities with proper foreign key mappings.
Challenge: we have a workflow process outside of this application that periodically updates the requestStatus in the Request entity, and this is the only field that gets updated after the Request is created. With Spring Data JDBC, during the update it deletes all referenced entities and recreates (inserts) them again. This is quite a heavy operation considering the 6 child entities. Are there any workarounds or suggestions for how to handle this scenario?
@Table("Request")
public class Request {

    @Id
    private String requestId; // generated in the Before Save listener
    private String requestStatus;

    @Column("requestId")
    private ChildEntity1 childEntity1;

    public void addChildEntity1(ChildEntity1 childEntityobj) {
        this.childEntity1 = childEntityobj;
    }
}
@Table("Child_Entity1")
public class ChildEntity1 {

    @Id
    private String entity1Id; // Auto increment on DB
    private String name;
    private String SSN;
    private String requestId;

    @MappedCollection(idColumn = "entity1Id", keyColumn = "entity2Id")
    private List<ChildEntity2> childEntity2List = new ArrayList<>();

    @MappedCollection(idColumn = "entity1Id", keyColumn = "entity3Id")
    private List<ChildEntity3> childEntity3List = new ArrayList<>();

    public void addChildEntity2(ChildEntity2 childEntity2obj) {
        childEntity2List.add(childEntity2obj);
    }

    public void addChildEntity3(ChildEntity3 childEntity3obj) {
        childEntity3List.add(childEntity3obj);
    }
}
@Table("Child_Entity2")
public class ChildEntity2 {

    @Id
    private String entity2Id; // Auto increment on DB
    private String partyTypeCode;
    private String requestId;
}

@Table("Child_Entity3")
public class ChildEntity3 {

    @Id
    private String entity3Id; // Auto increment on DB
    private String PhoneCode;
    private String requestId;
}
@Test
public void createAndSaveRequest() {
    Request newRequest = createRequest(); // using a builder to build the object
    newRequest.addChildEntity1(createChildEntity1());
    newRequest.getChildEntity1().addChildEntity2(createChildEntity2());
    newRequest.getChildEntity1().addChildEntity3(createChildEntity3());
    requestRepository.save(newRequest);
}
The approach you describe in your comment, having a dedicated method that performs exactly that update statement, is the right way to do this.
You should be aware though that this does ignore optimistic locking.
So there is a risk that the following might happen:
Thread/Session 1: reads an aggregate.
Thread/Session 2: updates a single field as per your question.
Thread/Session 1: writes the aggregate, possibly with other changes, overwriting the change made by Session 2.
To avoid this or similar problems, you need to:
check that the version of the aggregate root is unchanged from when you loaded it, in order to guarantee that the method doesn't write conflicting changes.
increment the version in order to guarantee that nothing else overwrites the changes made in this method.
This might mean that you need two or more SQL statements, which probably means you have to fall back even further to a fully custom method where you implement this yourself, probably using an injected JdbcTemplate.
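A minimal sketch of what such a dedicated update method could look like in a Spring Data JDBC repository, assuming the Request table carries a numeric version column used for optimistic locking (the column and method names here are illustrative):

import org.springframework.data.jdbc.repository.query.Modifying;
import org.springframework.data.jdbc.repository.query.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

public interface RequestRepository extends CrudRepository<Request, String> {

    // Updates the status and bumps the version in one statement; a false
    // return value means the row was missing or the version did not match.
    @Modifying
    @Query("UPDATE Request SET request_status = :status, version = version + 1 " +
           "WHERE request_id = :id AND version = :expectedVersion")
    boolean updateStatus(@Param("id") String id,
                         @Param("status") String status,
                         @Param("expectedVersion") long expectedVersion);
}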
I'm using spring-data-jpa and working on getting data out of a table where there are about a dozen columns used in queries to find particular rows, and then a payload column of CLOB type which contains the actual data that is marshalled into Java objects to be returned.
The entity object, very roughly, would be something like:
@Entity
@Table(name = "Person")
public class Person {
    @Column(name = "PERSON_ID", length = 45) @Id private String personId;
    @Column(name = "NAME", length = 45) private String name;
    @Column(name = "ADDRESS", length = 45) private String address;
    @Column(name = "PAYLOAD") @Lob private String payload;
    // Bunch of other stuff
}
(Whether this approach is sensible or not is a topic for a different discussion)
The clob column causes performance to suffer on large queries ...
In an attempt to improve things a bit, I've created a separate entity object ... sans payload ...
@Entity
@Table(name = "Person")
public class NotQuiteAWholePerson {
    @Column(name = "PERSON_ID", length = 45) @Id private String personId;
    @Column(name = "NAME", length = 45) private String name;
    @Column(name = "ADDRESS", length = 45) private String address;
    // Bunch of other stuff
}
This gets me a page of NotQuiteAWholePerson. I then query for the page of full Person objects via the personIds.
The hope is that by not using the payload in the original query, which could be filtering over a good bit of the backing table, I only concern myself with the payload when I'm retrieving the current page of objects to be viewed, a much smaller chunk.
So I'm at the point where I want to map the contents of the original returned Page of NotQuiteAWholePerson to my List of Person, while keeping all the paging info intact. The map method, however, only takes a Converter which will iterate over the NotQuiteAWholePerson objects, which doesn't quite fit what I'm trying to do.
Is there a sensible way to achieve this?
Additional clarification for @itsallas as to why the existing map() will not suffice:
PageImpl::map has
@Override
public <S> Page<S> map(Converter<? super T, ? extends S> converter) {
    return new PageImpl<S>(getConvertedContent(converter), pageable, total);
}
Chunk::getConvertedContent has
protected <S> List<S> getConvertedContent(Converter<? super T, ? extends S> converter) {
    Assert.notNull(converter, "Converter must not be null!");
    List<S> result = new ArrayList<S>(content.size());
    for (T element : this) {
        result.add(converter.convert(element));
    }
    return result;
}
So the original List of contents is iterated through ... and a supplied convert method applied, to build a new list of contents to be inserted into the existing Pageable.
However I cannot convert a NotQuiteAWholePerson to a Person individually, as I cannot simply construct the payload... well I could, if I called out to the DB for each Person by Id in the convert... but calling out individually is not ideal from a performance perspective ...
After getting my Page of NotQuiteAWholePerson I am querying for the entire List of Person, by Id, in one call, and now I am looking for a way to substitute the entire content list: not iteratively, as the existing map() does, but as a simple replacement.
This particular use case would also help where the payload, which is JSON, is more appropriately persisted in a NoSQL datastore like Mongo, as opposed to a SQL CLOB column.
Hope that clarifies it a bit better.
You can avoid the problem entirely with Spring Data JPA features.
The most sensible way would be to use Spring Data JPA projections, which have good extensive documentation.
For example, you would first need to ensure lazy fetching for your attribute, which you can achieve with an annotation on the attribute itself.
i.e.:
@Basic(fetch = FetchType.LAZY) @Column(name = "PAYLOAD") @Lob private String payload;
or through Fetch/Load Graphs, which are neatly supported at repository-level.
You need to define this one way or another, because, as taken verbatim from the docs:
The query execution engine creates proxy instances of that interface at runtime for each element returned and forwards calls to the exposed methods to the target object.
You can then define a projection like so:
interface NotQuiteAWholePerson {
    String getPersonId();
    String getName();
    String getAddress();
    // Bunch of other stuff
}
And add a query method to your repository:
interface PersonRepository extends Repository<Person, String> {
    Page<NotQuiteAWholePerson> findAll(Pageable pageable);
    // or its dynamic equivalent
    <T> Page<T> findAll(Pageable pageable, Class<T> type);
}
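Called roughly like this, with whatever Pageable you already have (a sketch; for example the one bound from a web request):

Page<NotQuiteAWholePerson> page = personRepository.findAll(pageable, NotQuiteAWholePerson.class);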
Given the same pageable, a page of projections would refer back to the same entities in the same session.
If you cannot use projections for whatever reason (namely if you're using JPA < 2.1 or a version of Spring Data JPA before projections), you could define an explicit JPQL query with the columns and relationships you want, or keep the 2-entity setup. You could then map Persons and NotQuiteAWholePersons to a PersonDTO class, either manually or (preferably) using your object mapping framework of choice.
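If you do keep the two-entity setup, the wholesale replacement you describe is also possible, because PageImpl can be built from any content list plus the original page's metadata. A sketch, with illustrative repository names, assuming getters on NotQuiteAWholePerson and a JpaRepository for Person (on older Spring Data versions findAllById is findAll(ids)); you may need to re-order the loaded Person list to match the page order:

Page<NotQuiteAWholePerson> slim = notQuiteAWholePersonRepository.findAll(pageable);

List<String> ids = slim.getContent().stream()
        .map(NotQuiteAWholePerson::getPersonId)
        .collect(Collectors.toList());

// one query for the whole page instead of one call per row
List<Person> people = personRepository.findAllById(ids);

Page<Person> fullPage = new PageImpl<>(people, pageable, slim.getTotalElements());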
N.B.: There are a variety of ways to use and set up lazy/eager relations; these are covered in more detail elsewhere.
I have a spring-boot application (1.4RC1, I know it's an RC, but Spring Data Redis 1.7.2 is not) where I'm using spring-boot-starter-redis.
The application uses a Spring Data repository (CrudRepository) which should save an object (using the @RedisHash annotation) with String and Boolean properties and one custom class property, which itself has only Strings and Longs as properties.
When I save an object (via the repository), everything goes fine and I can see all the properties in the database as I would expect.
When I want to read the data from the database (via the repository) I only get the properties from the parent object. The custom class property is null.
I would expect that property to be loaded from the database as well. The documentation states you can write a custom converter, but since I didn't need one to write the data, I shouldn't need a reading converter either.
I wonder if I need to annotate the custom class property, but I couldn't find anything in the documentation. Can you point me in the right direction?
The classes are as follows:
Class Sample:
@Data
@EqualsAndHashCode(exclude = {"isActive", "sampleCreated", "sampleConfiguration"})
@RedisHash
public class Sample {

    @Id
    private String sampleIdentifier;
    private Boolean isActive;
    private Date sampleCreated;
    private SampleConfiguration sampleConfiguration;

    public Sample(String sampleIdentifier, SampleConfiguration sampleConfiguration) {
        this.sampleIdentifier = sampleIdentifier;
        this.sampleConfiguration = sampleConfiguration;
    }
}
Class SampleConfiguration:
@Data
public class SampleConfiguration {
    private String surveyURL;
    private Long blockingTime;
    private String invitationTitle;
    private String invitationText;
    private String participateButtonText;
    private String doNotParticipateButtonText;
    private String optOutButtonText;
    private Long frequencyCappingThreshold;
    private Long optOutBlockingTime;
}
I added @NoArgsConstructor to my Sample class as Christoph Strobl suggested. Then the repository reads the SampleConfiguration correctly. Thanks, Christoph!
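For anyone hitting the same thing, the fix amounts to the following (a minimal sketch of the class above with only Lombok's @NoArgsConstructor added, which lets the mapping layer instantiate the object when reading it back):

@Data
@NoArgsConstructor // lets Spring Data Redis instantiate Sample when reading it back
@EqualsAndHashCode(exclude = {"isActive", "sampleCreated", "sampleConfiguration"})
@RedisHash
public class Sample {

    @Id
    private String sampleIdentifier;
    private Boolean isActive;
    private Date sampleCreated;
    private SampleConfiguration sampleConfiguration;

    public Sample(String sampleIdentifier, SampleConfiguration sampleConfiguration) {
        this.sampleIdentifier = sampleIdentifier;
        this.sampleConfiguration = sampleConfiguration;
    }
}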
First let me say I'm a complete novice with Spring AOP, and I apologize if this is a duplicate question.
Here's my scenario:
Let's say I have the following domain class:
@Entity(name = "MyTable")
@Table(name = "MY_TABLE")
public class MyTable {

    private static final long serialVersionUID = 1234567890123456L;

    @Id
    @Column(name = "USER_ID")
    private Long userID;

    @Transient
    private String key;

    @Column(name = "KEY")
    private String secureKey;

    /* Other columns */

    /* Getters and Setters */
}
and I have the following JpaRepository interface to manage it:
@Repository
public interface MyTableRepository extends JpaRepository<MyTable, Long> {
    /* findBy methods */
}
As you can see, I have a secureKey field and a transient key field. In this case secureKey is an encrypted version of key.
What I need is for the secureKey value to be populated before a domain object is saved, and for the key value to be populated after a domain object is fetched. (This is a trivial example but in the real case I have multiple transient and encrypted values.) The idea is for the secure values to be persisted to the DB, but users of the domain class will only need to work with the "insecure" values.
Currently I'm handling this in my service layer. After I call a fetch method I'm populating the transient values, and before calling a save method I'm populating the "secure" values. This is working as expected but ideally I'd like this to be managed transparently, because now the burden is on each developer to remember to update those values after fetching or before saving.
I'm assuming the best way to handle this would be through some AOP class, but I confess I have little to no idea where to begin there. Is this a common scenario, and if so, would someone be willing to point me in the right direction? Also, if you have a suggestion for a better way to implement this decrypted/encrypted field pair scenario, please let me know.
Ideally I'd like to be able to add an annotation to both the secure and insecure fields, maybe pointing to each other, maybe something like:
@Insecure(secureValue = "secureKey")
@Transient
private String key;

@Secure(insecureValue = "key")
@Column(name = "KEY")
private String secureKey;
Any assistance you could provide is most appreciated.
Thanks,
B.J.
I think Spring AOP isn't the correct technology for this use case; I would recommend using JPA EntityListeners instead.
Hibernate: https://docs.jboss.org/hibernate/entitymanager/3.5/reference/en/html/listeners.html
Eclipselink: https://wiki.eclipse.org/EclipseLink/Release/2.5/JPA21#CDI_Entity_Listeners
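A minimal sketch of that listener approach, assuming some CryptoService of your own does the actual encryption and decryption (the listener and service names are illustrative, not from any library), and that MyTable exposes the usual getters and setters:

import javax.persistence.PostLoad;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;

public class SecureKeyListener {

    @PrePersist
    @PreUpdate
    void encrypt(MyTable entity) {
        // populate the persisted, encrypted column from the transient value
        entity.setSecureKey(CryptoService.encrypt(entity.getKey()));
    }

    @PostLoad
    void decrypt(MyTable entity) {
        // populate the transient, plain value right after the row is read
        entity.setKey(CryptoService.decrypt(entity.getSecureKey()));
    }
}

The listener is then registered on the entity with @EntityListeners(SecureKeyListener.class), so users of the repository only ever work with the plain key field.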