Nested objects data modeling for Cassandra - Spring

I am using Spring Data Cassandra and am having trouble understanding how the data model should look. I understand that Cassandra tables are generally denormalized, so if I have a customer object that looks like this:
{
name: "Test Customer",
address: {
city: "New york",
state: "NY"
}
}
The corresponding POJOs would look like this:
@Table
public class Customer {
    @PrimaryKey // just using this as the key for this example
    private String name;
    private Address address;
}
public class Address {
    private String city;
    private String state;
}
So I want to store only the customer objects but have some way to retrieve the address object associated with each customer. What are some common strategies to handle this?
Should I be using a composite/compound key in some way, create a separate POJO where I can store attributes from both objects in a denormalized form, or something else? Any hints would be appreciated.

In Cassandra, tables are designed around the queries you want to perform (a query-driven model).
So please share the query or set of queries you want to run against the table.
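Independent of the exact queries, one common Spring Data Cassandra strategy for a nested object like Address is to map it as a user-defined type (UDT), so it is stored inline with the customer row instead of in a separate table. A minimal sketch, assuming Spring Data Cassandra's UDT support (the UDT and table names here are illustrative):

```java
import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;
import org.springframework.data.cassandra.core.mapping.UserDefinedType;

// Sketch: Address becomes a Cassandra UDT, persisted inline with Customer.
// CQL equivalent: CREATE TYPE address (city text, state text);
@UserDefinedType("address")
public class Address {
    private String city;
    private String state;
    // getters & setters
}

@Table("customer")
public class Customer {
    @PrimaryKey
    private String name;     // partition key, as in the question's example
    private Address address; // stored as a frozen 'address' UDT column
    // getters & setters
}
```

With this mapping, loading a Customer also loads its Address in the same read; no join or second table is needed. If you also need to query customers by city or state, the usual Cassandra approach is a second denormalized table keyed by those columns.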

Designing one-to-one and one-to-many relationships in Spring Data R2DBC

I am exploring possible ideas for designing one-to-one and one-to-many relationships while using Spring Data R2DBC.
As Spring Data R2DBC still does not support relationships natively, there is still a need to handle them on our own (unlike Spring Data JDBC).
What I imagine is that for one-to-one mapping, the implementation could look like this:
@Table("account")
public class Account {
    @Id
    private Long id;
    @Transient // one-to-one
    private Address address;
}
@Table("address")
public class Address {
    @Id
    private Integer id;
}
while the database schema would be defined as follows:
-- address
CREATE TABLE address
(
    id SERIAL PRIMARY KEY
);
-- account
CREATE TABLE account
(
    id SERIAL PRIMARY KEY,
    address_id INTEGER REFERENCES address(id)
);
As the Account object is my aggregate root, what I imagine is that I am supposed to load the Address object with it, following the advice of Jens Schauder:
An aggregate is a cluster of objects that form a unit, which should
always be consistent. Also, it should always get persisted (and
loaded) together.
source: Spring Data JDBC, References, and Aggregates
This leads me to think that in the case of one-to-one relationships like this one, I should in fact have my Account entity defined like this:
@Table("account")
public class Account {
    @Id
    private Long id;
    @Transient // one-to-one
    private Address address;
    @Column("address_id")
    private Integer addressId;
}
and later on, to recreate the full Account aggregate entity with an Address, I would write something like:
@Service
public class AccountServiceImpl implements AccountService {
    private final AccountRepository accountRepository;
    private final AddressRepository addressRepository;

    public AccountServiceImpl(AccountRepository accountRepository,
                              AddressRepository addressRepository) {
        this.accountRepository = accountRepository;
        this.addressRepository = addressRepository;
    }

    @Override
    public Mono<Account> loadAccount(Integer id) {
        return accountRepository.getAccountById(id)
                .flatMap(account ->
                        Mono.just(account)
                                .zipWith(addressRepository.getAddressByAccountId(account.getAddressId()))
                                .map(result -> {
                                    result.getT1().setAddress(result.getT2());
                                    return result.getT1();
                                })
                );
    }
}
If that is not the case, how else should I handle one-to-one relationships while using Spring Data R2DBC?
I think your approach is reasonable. There are just a couple of nitpicks:
Do you need the flatMap -> Mono.just? Couldn't you just use map directly?
I wouldn't consider this a service, but a repository (it's just not implemented by Spring Data directly).
You might be able to do that in an after-load callback.
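Putting those nitpicks together, the flatMap -> Mono.just hop can collapse into a single zipWhen + map. A sketch reusing the question's own repositories and class names (untested, assuming Reactor's Mono API):

```java
// Sketch: same lookup without the redundant flatMap -> Mono.just hop.
// zipWhen derives the second Mono from the first result, then pairs them.
@Override
public Mono<Account> loadAccount(Integer id) {
    return accountRepository.getAccountById(id)
            .zipWhen(account -> addressRepository.getAddressByAccountId(account.getAddressId()))
            .map(tuple -> {
                tuple.getT1().setAddress(tuple.getT2());
                return tuple.getT1();
            });
}
```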

Consuming API and store results to database

I need to consume this API: https://api.punkapi.com/v2/beers. After consuming it, I have to store the results in the database, but only with the following fields: internal id, name, description and mean value of the temperature. Any ideas or advice?
The simplest approach would be to have your model contain only those attributes, so that Spring deserializes just them from JSON to object. Something like the following:
public class YourModel {
    private long id;
    private String name;
    private String description;
}
Then in your Service you would have:
ResponseEntity<YourModel> response = restTemplate.getForEntity(url, YourModel.class);
You can then either save YourModel directly to the database (first you need to add some @Annotations if you want to rely on JPA), or you can build another model better suited to your use case.
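Note that the "mean value of the temperature" is not a single field in the API response, so you would compute it yourself after deserialization. A sketch in plain Java, assuming hypothetical minimum and maximum temperature values on the deserialized model (the field names are illustrative, not the API's actual schema):

```java
// Sketch: derive a mean temperature from hypothetical min/max values
// pulled out of the deserialized API response.
public class TemperatureMean {
    static double meanTemp(double tempMin, double tempMax) {
        return (tempMin + tempMax) / 2.0;
    }

    public static void main(String[] args) {
        // e.g. a beer fermented between 17 and 21 degrees
        System.out.println(meanTemp(17.0, 21.0)); // prints 19.0
    }
}
```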

How to insert an object into MongoDB using Spring

(required format shown in the attached image)
I want to insert object data into MongoDB using Spring, and so far I have hardcoded it.
How do I write a schema for that? I have taken clothes as an example only; I have different types of categories.
Please tell me how to write one schema for the different types of categories, and how to query it.
Please find the attachment for your reference.
I would recommend going through the Spring Data MongoDB documentation for specifics on mapping Java objects to MongoDB documents. Your case would look similar to:
@Document
public class Clothes {
    @Id
    private ObjectId id;
    private Men men;
    private Women women;
    // getters & setters
}
You would need to define each subclass, but this should be the gist of it.
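Since the question asks for one schema covering many categories, an alternative to one class per category is a single document class with a category discriminator and a free-form attribute map. A sketch, assuming Spring Data MongoDB (class, collection and field names are illustrative):

```java
import java.util.Map;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// Sketch: one document shape for all categories; category-specific
// fields live in a flexible attributes map.
@Document("products")
public class Product {
    @Id
    private String id;
    private String category;                // e.g. "clothes", "electronics"
    private Map<String, Object> attributes; // category-specific fields
    // getters & setters
}
```

Querying by category is then a single derived finder, e.g. `List<Product> findByCategory(String category)` on a `MongoRepository<Product, String>`.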
What you can do is create a simple POJO (Plain Old Java Object), and with that you can insert the object into the database. Take the following example:
@Document
public class OAuthModel implements Serializable {
    @Id
    String username;
    @Indexed
    String oAuthID;
    @Indexed
    String type;
    // Getters, setters and constructor.
}
When I insert this object in the DB by calling:
OAuthModel authModel = new OAuthModel(username,firebaseToken.getUid(), OAuthHosts.GOOGLE.getType());
oAuthRepo.insert(authModel);
It will then appear in the database as a corresponding document.
Keep in mind this will work no matter what your object looks like; you can have hashmaps, etc. There should be built-in serialization.

Is there a way to create one JPA entity based on many database tables and do I really have to do this or is it a bad practice?

I'm quite new to Spring Data JPA and am currently facing a task I can't deal with. I am seeking best practice for such cases.
In my Postgres database I have two tables connected by a one-to-many relation. Table 'account' has a field 'type_id', a foreign key referencing field 'id' of table 'account_type'.
So the 'account_type' table only plays the role of a dictionary. Accordingly, I've created two JPA entities (Kotlin code):
@Entity
class Account(
    @Id @GeneratedValue var id: Long? = null,
    var amount: Int,
    @ManyToOne var accountType: AccountType
)
@Entity
class AccountType(
    @Id @GeneratedValue var id: Long? = null,
    var type: String
)
In my Spring Boot application I'd like to have a RestController responsible for returning all accounts in JSON format. To do that, I made the entity classes serializable and wrote a simple REST controller:
@GetMapping("/getAllAccounts", produces = [APPLICATION_JSON_VALUE])
fun getAccountsData(): String {
    val accountsList = accountRepository.findAll().toMutableList()
    return json.stringify(Account.serializer().list, accountsList)
}
where accountRepository is just an interface which extends CrudRepository<Account, Long>.
And now if I go to :8080/getAllAccounts, I'll get JSON in the following format (sorry for the formatting):
[
    {
        "id": 1,
        "amount": 0,
        "accountType": {
            "id": 1,
            "type": "DBT"
        }
    },
    {
        "id": 2,
        "amount": 0,
        "accountType": {
            "id": 2,
            "type": "CRD"
        }
    }
]
But what I really want from that controller is just
[
    {
        "id": 1,
        "amount": 0,
        "type": "DBT"
    },
    {
        "id": 2,
        "amount": 0,
        "type": "CRD"
    }
]
Of course I could create a new serializable class for accounts with a String field instead of the AccountType field, and map the JPA Account class to it by extracting the account type string from the AccountType field. But to me that looks like unnecessary overhead, and I believe there could be a better pattern for such cases.
For example, what I have in mind is that somehow I could create one JPA entity class (with a String field representing the account type) based on the two database tables, so that the unnecessary complexity of the inner object is reduced automagically each time I call repository methods :) Moreover, I would be able to use this entity class in my business logic without any additional 'wrappers'.
P.s. I read about the @SecondaryTable annotation, but it looks like it only works where there is a one-to-one relation between the two tables, which is not my case.
There are a couple of options which allow clean separation without a DTO.
Firstly, you could look at using a projection, which is kind of like the DTO mentioned in other answers but without many of the drawbacks:
https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#projections
@Projection(
    name = "accountSummary",
    types = { Account.class })
public interface AccountSummaryProjection {
    Long getId();
    Integer getAmount();
    @Value("#{target.accountType.type}")
    String getType();
}
You then simply need to update your controller to call either a query method with a List return type, or write a method which takes the projection class as an arg.
https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#projection.dynamic
@GetMapping("/getAllAccounts", produces = [APPLICATION_JSON_VALUE])
@ResponseBody
fun getAccountsData(): List<AccountSummaryProjection> {
    return accountRepository.findAllAsSummary()
}
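The query method behind that controller can be an explicit JPQL query whose result columns Spring Data maps onto the projection interface by alias. A sketch (the repository and method names are illustrative; aliases must match the projection's getter names):

```java
import java.util.List;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;

public interface AccountRepository extends CrudRepository<Account, Long> {

    // Each selected alias (id, amount, type) matches a getter on
    // AccountSummaryProjection, so Spring Data maps rows onto it directly.
    @Query("select a.id as id, a.amount as amount, a.accountType.type as type " +
           "from Account a")
    List<AccountSummaryProjection> findAllAsSummary();
}
```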
An alternative approach is to use the Jackson annotations. I note that in your question you are manually transforming the result to a JSON String and returning a String from your controller. You don't need to do that if the Jackson JSON library is on the classpath. See my controller above.
So if you leave the serialization to Jackson, you can separate the view from the entity using a couple of annotations. Note that I would apply these using a Jackson mixin rather than polluting the entity model with JSON processing instructions, but you can look that up:
@Entity
class Account(
    @Id @GeneratedValue var id: Long? = null,
    var amount: Int,
    // In real life I would apply these using a Jackson mixin
    // to prevent polluting the domain model with view concerns.
    @JsonDeserialize(converter = StringToAccountTypeConverter::class)
    @JsonSerialize(converter = AccountTypeToStringConverter::class)
    @ManyToOne var accountType: AccountType
)
You then simply create the necessary converters:
public class StringToAccountTypeConverter extends StdConverter<String, AccountType>
        implements org.springframework.core.convert.converter.Converter<String, AccountType> {
    @Autowired
    private AccountTypeRepository repo;

    @Override
    public AccountType convert(String value) {
        // look up in repo and return
    }
}
and vice versa:
public class AccountTypeToStringConverter extends StdConverter<AccountType, String>
        implements org.springframework.core.convert.converter.Converter<AccountType, String> {
    @Override
    public String convert(AccountType value) {
        return value.getType();
    }
}
One of the least complicated ways to achieve what you are aiming for, from the external clients' point of view at least, has to do with custom serialisation, which you seem to be aware of and which @YoManTaMero has expanded upon.
Obtaining the desired class structure might not be possible. The closest I've managed to find is the @SecondaryTable annotation, but the caveat is that it only works for @OneToOne relationships.
In general, I'd pinpoint your problem to the issue of DTOs and Entities. The idea behind JPA is to map the schema and content of your database to code in an accessible but accurate way. It takes away the heavy-lifting of managing SQL queries, but it is designed mostly to reflect your DB's structure, not to map it to a different set of domains.
If the organisation of your DB schema does not exactly match the needs of your system's I/O communication, this might be a sign that:
Your DB has not been designed correctly;
Your DB is fine, but the manageable entities (tables) in it simply do not match directly to the business entities (models) in your external communication.
Should the second be the case, Entities should be mapped to DTOs which can then be passed around. A single Entity may map to a few different DTOs. A single DTO might take more than one (related!) entity to create. This is a good practice for medium-to-large systems in the first place; handing out references to the object that's the direct access point to your database is a risk.
Mind that simply because the id of the accountType is not taking part in your external communication does not mean it will never be a part of your business logic.
To sum up: JPA is designed with ease of database access in mind, not for smoothing out external communication. For that, other tools - such as e.g. Jackson serializer - are used, or certain design patterns - like DTO - are being employed.
One approach to solve this is to @JsonIgnore accountType and create a getType method like:
@JsonProperty("type")
fun getType(): String {
    return accountType.type
}

What is the best way to perform custom result sets using JPA or HQL with Spring?

I am developing a little web service using the Spring REST API. I just wanted to know the best way to build a custom result set from a query using HQL or Criteria.
Let's assume we need to handle these 2 entities to perform the following HQL request:
SELECT m.idMission, m.driver, m.dateMission FROM MissionEntity m
The Mission entity (simplified form):
@Entity
public class Mission
{
    Integer idMission;  // id of the mission
    String dateMission; // date of the mission
    [...] // Other fields not needed for my request

    @ManyToOne
    @JoinColumn(name = "driver",
                referencedColumnName = "id_user")
    User driver; // the driver (user) associated with the mission

    [...] // Accessors
}
And the User entity (the driver) (simplified form):
@Entity
public class User
{
    Integer idUser; // id of the user
    [...] // Other fields not needed for my request

    @OneToMany
    List<Mission> missionList; // the missions associated with the user

    [...] // Accessors
}
JSON output (first result):
[
    [ // Mission: depth = 0 (root)
        1,
        { // Driver: depth = 1 (mission child -> User)
            "idUser": 29,
            "shortId": "Adr_Adr",
            "lastname": "ADRIAN",
            "firstname": null,
            "status": "Driver",
            "active": 1
        },
        "05/03/2015"
    ],
    [...]
]
As you can see, I have a custom Mission entity result set (a List) where the pattern for each Mission entity is the following:
+ Object
- missionId (Integer)
+ driver (User)
- idUser
- shortId
- lastname
- firstname
- status
- active
- dateMission (String)
But for the purpose of my request, I only need the User entity's firstname and lastname.
So I need a result set like the following one:
+ Mission (Mission)
- missionId (Integer)
+ driver (User)
- lastname
- firstname
- dateMission (String)
As you can see, I want to keep the same JSON tree structure: a Mission entity owns a child User entity, but this time with a partial set of attributes (only the firstname and lastname are needed in the set).
For the moment, the only way I have found to solve my problem is to use 2 additional POJO classes:
The UserProj class:
public class UserProj
{
    private String firstname, lastname;

    public UserProj(String firstname, String lastname)
    {
        this.firstname = firstname;
        this.lastname = lastname;
    }

    [...] // Accessors
}
The MissionProj class:
public class MissionProj
{
    private Integer missionId;
    private UserProj driver;
    private String dateMission;

    public MissionProj(Integer missionId,
                       String driverFirstname, String driverLastname, String dateMission)
    {
        this.missionId = missionId;
        this.driver = new UserProj(driverFirstname, driverLastname);
        this.dateMission = dateMission;
    }

    [...] // Accessors
}
Here is the JSON output result set I now get with these projection classes:
[
    {
        "missionId": 1,
        "driver": {
            "firstname": null,
            "lastname": "ADRIAN"
        },
        "dateMission": "05/03/2015"
    },
    [...]
]
As you can see, the result set is the one I was looking for! But my problem is that this solution is not scalable. In fact, if I want to produce another custom result set for the User or the Mission entity with one additional field, I will have to create yet another POJO for it. So for me this is not really a solution.
I think there should exist a way to do this properly using HQL or Criteria directly, but I couldn't find it! Do you have an idea?
Thanks a lot in advance for your help!
As you can see, the result set is the one I was looking for! But my problem is that this solution is not scalable. In fact, if I want to produce another custom result set for the User or the Mission entity with one additional field, I will have to create yet another POJO for it. So for me this is not really a solution.
You are honestly trying to push functionality too low in your architecture, which obviously manifests as the problem you describe.
As a bit of background, the SELECT NEW functionality exposed by HQL/JPQL and the JPA Criteria API was introduced as an easy way to take a query of selectables and inject them into a value object by selecting the right constructor. What the caller does with the constructed value objects is an application concern, not one for the persistence provider.
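For reference, the SELECT NEW (constructor expression) form looks like this against the question's entities; it works, but every new result shape needs its own value-object constructor, which is exactly the scaling problem described. A sketch, assuming MissionProj lives in a hypothetical com.example package:

```java
// JPQL constructor expression: rows are fed straight into the
// MissionProj(Integer, String, String, String) constructor.
List<MissionProj> results = entityManager.createQuery(
        "select new com.example.MissionProj(" +
        "  m.idMission, m.driver.firstname, m.driver.lastname, m.dateMission) " +
        "from Mission m", MissionProj.class)
    .getResultList();
```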
I believe a more scalable solution would be not to rely on the persistence provider to deal with this, but instead to push this back upstream onto the Jackson API directly, which is already designed and built to handle this case quite easily.
You'd want Hibernate to return a List<Mission> of objects which have the appropriate state from all dependent objects initialized based on your application's needs. Then you would have either your service tier or controller transform this list using custom Jackson serializers.
public class MissionSerializer extends StdSerializer<Mission> {
    private boolean serializeSpecialField;

    public MissionSerializer() {
        this( null );
    }

    public MissionSerializer(Class<Mission> clazz) {
        super( clazz );
    }

    public void setSerializeSpecialField(boolean value) {
        this.serializeSpecialField = value;
    }

    @Override
    public void serialize(
            Mission value,
            JsonGenerator jgen,
            SerializerProvider provider)
            throws IOException, JsonProcessingException {
        // use jgen to write the custom mappings of Mission,
        // using serializeSpecialField to control whether
        // you serialize that field
    }
}
Then at this point, it's a matter of getting the ObjectMapper and setting the serializer, again either in your service tier or controller, and toggling whether you serialize the extra field.
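Wiring the custom serializer into the ObjectMapper is a one-time module registration; a sketch using Jackson's SimpleModule (the `missions` list is assumed to come from the repository query):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;

// Configure the serializer once, then reuse the mapper.
MissionSerializer serializer = new MissionSerializer(Mission.class);
serializer.setSerializeSpecialField(true); // toggle per use case

ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new SimpleModule()
        .addSerializer(Mission.class, serializer));

String json = mapper.writeValueAsString(missions); // missions: List<Mission>
```

Note that toggling a flag on a shared serializer is not thread-safe across concurrent requests; in practice you would keep one configured mapper per output variant.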
The benefit here is that the query remains constant, allowing the persistence provider to cache the entities from the query, improving performance, all while allowing the application tier to transform the results to the final destined output.
This solution likely implies you have a single query to manage and a single Jackson serializer to handle this task, so from a technical-debt standpoint it may be reasonable.
On the other hand, there are good arguments to avoid coupling two solutions for the sake of code reuse.
For example, what if you'd like to change one use case? Now because you reuse the same code, one use case change implies the other is impacted, must be tested, and verified it's still stable and unaffected. By not reusing code, you avoid this potential risk, especially if your test suite doesn't have full coverage.
