How can I map rows of two tables that reference each other?
For example, there are Employee and Department tables. Employee has a reference to a Department model (the department of the employee), and Department has a reference to an Employee model (the manager of the department). How can I map rows using Spring's RowMapper?
Thanks,
For example, like this:
public class TwoTablesRowMapper implements RowMapper<Map<String, Object>> {

    /**
     * Maps data from a select over 2 tables, e.g.:
     *
     *   select
     *     A.foo as afoo,
     *     B.bar as bbar
     *   from PARENT A,
     *        CHILD B
     *   where A.ID = B.ID
     *
     * @param rs the current row of the result set
     * @param rowNum the number of the current row
     * @return a map of column aliases to values
     * @throws SQLException if a column cannot be read
     */
    public Map<String, Object> mapRow(ResultSet rs, int rowNum) throws SQLException {
        Map<String, Object> resultMap = new HashMap<String, Object>();
        // instead of a map one could fill an object,
        // e.g.: myObject.setAfoo(afoo)
        resultMap.put("afoo", rs.getString("afoo"));
        resultMap.put("bbar", rs.getString("bbar"));
        return resultMap;
    }
}
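A usage sketch for the mapper (the DataSource wiring is assumed; the query reuses the aliases from the javadoc example above):

```java
// Sketch: running the two-table query with the mapper above.
// Assumes a configured DataSource; table/column names come from the example.
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
String sql = "select A.foo as afoo, B.bar as bbar "
           + "from PARENT A, CHILD B "
           + "where A.ID = B.ID";
List<Map<String, Object>> rows = jdbcTemplate.query(sql, new TwoTablesRowMapper());
```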
For the SQL part, I recommend you create a new question with the specific SQL details (tables, relations, etc.) tagged sql; it should reach more SQL-savvy viewers that way.
I have a test case which tries to query Child entity changes by instanceId; it throws an exception:
@TypeName("EntityOne")
class EntityOne {
    @Id int id
    String name
    List<EntityTwo> entityTwos

    EntityOne(int id, String name, List<EntityTwo> entityTwos) {
        this.id = id
        this.name = name
        this.entityTwos = entityTwos
    }
}

@TypeName("EntityTwo")
class EntityTwo {
    @Id int id
    String name
    @Id int entityOneId

    EntityTwo(int id, String name, int entityOneId) {
        this.id = id
        this.name = name
        this.entityOneId = entityOneId
    }
}
These are the audited data:
oldOne = new EntityOne(1, "EntityOne", [new EntityTwo(1, "EntityTwo",1)])
newOne = new EntityOne(1, "EntityOne", [new EntityTwo(1, "EntityTwo",1),
new EntityTwo(2, "EntityTwoOne",1)])
This is the query that throws the exception:
entityTwoChanges = javers.findChanges(QueryBuilder.byInstanceId(1, EntityTwo) // Error is thrown
.withNewObjectChanges()
.withChildValueObjects()
.build())
Exception:
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.util.Map
at org.javers.core.metamodel.type.InstanceIdFactory.dehydratedLocalId(InstanceIdFactory.java:48)
at org.javers.core.metamodel.type.InstanceIdFactory.create(InstanceIdFactory.java:22)
at org.javers.core.metamodel.type.EntityType.createIdFromInstanceId(EntityType.java:127)
at org.javers.core.metamodel.object.GlobalIdFactory.createInstanceId(GlobalIdFactory.java:115)
at org.javers.core.metamodel.object.GlobalIdFactory.createFromDto(GlobalIdFactory.java:127)
at org.javers.repository.jql.FilterDefinition$IdFilterDefinition.compile(FilterDefinition.java:27)
at org.javers.repository.jql.JqlQuery.compile(JqlQuery.java:120)
at org.javers.repository.jql.QueryCompiler.compile(QueryCompiler.java:16)
at org.javers.repository.jql.ChangesQueryRunner.queryForChanges(ChangesQueryRunner.java:20)
at org.javers.repository.jql.QueryRunner.queryForChanges(QueryRunner.java:48)
at org.javers.core.JaversCore.findChanges(JaversCore.java:196)
at com.example.CaseQueryByCompositeKey.should able to query audit changes by composite key(CaseQueryByCompositeKey.groovy:60)
Also, is there a way to query by composite key in JaVers?
It worked after passing the instanceId as a map:
entityTwoChanges = javers.findChanges(QueryBuilder.byInstanceId([id: 1, entityOneId: 1], EntityTwo)
.withNewObjectChanges()
.withChildValueObjects()
.build())
It's shown explicitly in the javadoc:
/**
 * Query for selecting Changes, Snapshots or Shadows for a given Entity instance.
 * <br/><br/>
 *
 * For example, last Changes on "bob" Person:
 * <pre>
 * javers.findChanges( QueryBuilder.byInstanceId("bob", Person.class).build() );
 * </pre>
 *
 * @param localId Value of an Id-property. When an Entity has Composite-Id (more than one Id-property) —
 *                <code>localId</code> should be <code>Map<String, Object></code> with
 *                Id-property name to value pairs.
 * @see CompositeIdExample.groovy
 */
public static QueryBuilder byInstanceId(Object localId, Class entityClass){
    Validate.argumentsAreNotNull(localId, entityClass);
    return new QueryBuilder(new IdFilterDefinition(instanceId(localId, entityClass)));
}
Try reading the javadoc first before asking questions about a method.
Hello, I am using Cassandra to save user data. I want to store a user's data for only 24 hours, so I am setting a TTL of 24 hours. There are multiple entries per user, so I want to batch-insert each user's data instead of making multiple calls to the database. I am using CassandraOperations to set the TTL, and I am able to set it for a single record. How do I provide a TTL when inserting data in batches?
public class CustomizedUserFeedRepositoryImpl<T> implements CustomizedUserFeedRepository<T> {

    private CassandraOperations cassandraOperations;

    @Autowired
    CustomizedUserFeedRepositoryImpl(CassandraOperations cassandraOperations) {
        this.cassandraOperations = cassandraOperations;
    }

    @Override
    public <S extends T> S save(S entity, int ttl) {
        InsertOptions insertOptions;
        if (ttl == 0) {
            insertOptions = InsertOptions.builder().ttl(Duration.ofHours(24)).build();
        } else {
            insertOptions = InsertOptions.builder().ttl(ttl).build();
        }
        cassandraOperations.insert(entity, insertOptions);
        return entity;
    }

    @Override
    public void saveAllWithTtl(Iterable<T> entities, int ttl) {
        entities.forEach(entity -> save(entity, ttl));
    }
}
As you can see, I have to iterate over the list and make a database call for each record. The batch operation cassandraOperations.batchOps().insert() only takes a list of objects. How do I set the TTL for each record when using the batchOps() function?
/**
 * Add a collection of inserts with given {@link WriteOptions} to the batch.
 *
 * @param entities the entities to insert; must not be {@literal null}.
 * @param options the WriteOptions to apply; must not be {@literal null}.
 * @return {@code this} {@link CassandraBatchOperations}.
 * @throws IllegalStateException if the batch was already executed.
 * @since 2.0
 */
CassandraBatchOperations insert(Iterable<?> entities, WriteOptions options);
You can use the insert(Iterable<?> entities, WriteOptions options) method:
@EqualsAndHashCode(callSuper = true)
public class WriteOptions extends QueryOptions {

    private static final WriteOptions EMPTY = new WriteOptionsBuilder().build();

    private final Duration ttl;
    private final @Nullable Long timestamp;
batchOperations.insert(entity, WriteOptions.builder().ttl(20).build());
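Putting it together, saveAllWithTtl could issue a single batch instead of one insert per entity. A sketch, assuming Spring Data Cassandra 2.x, where WriteOptions.builder() accepts both ttl(int) in seconds and ttl(Duration):

```java
@Override
public void saveAllWithTtl(Iterable<T> entities, int ttl) {
    // Fall back to 24 hours when no explicit TTL is given, mirroring save()
    WriteOptions options = (ttl == 0)
            ? WriteOptions.builder().ttl(Duration.ofHours(24)).build()
            : WriteOptions.builder().ttl(ttl).build();
    // One batch statement for all entities, each written with the same TTL
    cassandraOperations.batchOps()
            .insert(entities, options)
            .execute();
}
```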
Link.java
@Entity
@Table(name = "LINK")
@AttributeOverride(name = "id", column = @Column(name = "LINK_ID"))
public class Link extends AbstractAuditableEntity<Integer> {

    private static final long serialVersionUID = 3825555385014396995L;

    @Column(name = "NAME")
    private String name;

    @Column(name = "UI_SREF")
    private String uiSref;

    @ManyToOne
    @JoinColumn(name = "PARENT_LINK_ID")
    private Link parentLink;

    @OneToMany(mappedBy = "parentLink", fetch = FetchType.EAGER)
    private List<Link> childLinks;

    /**
     * @return the name
     */
    public String getName() {
        return name;
    }

    /**
     * @param name the name to set
     */
    public void setName(String name) {
        this.name = name;
    }

    /**
     * @return the uiSref
     */
    public String getUiSref() {
        return uiSref;
    }

    /**
     * @param uiSref the uiSref to set
     */
    public void setUiSref(String uiSref) {
        this.uiSref = uiSref;
    }

    /**
     * @return the parentLink
     */
    public Link getParentLink() {
        return parentLink;
    }

    /**
     * @param parentLink the parentLink to set
     */
    public void setParentLink(Link parentLink) {
        this.parentLink = parentLink;
    }

    /**
     * @return the childLinks
     */
    public List<Link> getChildLinks() {
        return childLinks;
    }

    /**
     * @param childLinks the childLinks to set
     */
    public void setChildLinks(List<Link> childLinks) {
        this.childLinks = childLinks;
    }
}
LinkRepository.java
public interface LinkRepository extends BaseRepository<Integer, Link> {

    @Query("select distinct p from Link l JOIN fetch l.parentLink p where l.id in (select lar.link.id from LinkAccessRole lar where lar.accessRoleLu in ?1) and p.id in (select lar.link.id from LinkAccessRole lar where lar.accessRoleLu in ?1)")
    public List<Link> getNavigationByaccessRoleLuList(List<AccessRoleLu> accessRoleLu);
}
Link table
Link_Access_Role table
Generated queries:
SELECT DISTINCT t0.LINK_ID, t0.CREATED_BY_ID, t0.CREATED_DATE, t0.LAST_MODIFIED_BY_ID, t0.LAST_MODIFIED_DATE, t0.NAME, t0.UI_SREF, t0.PARENT_LINK_ID FROM LINK t0, LINK t1 WHERE ((t1.LINK_ID IN (SELECT t2.LINK_ID FROM LINK_ACCESS_ROLE t3, LINK t2 WHERE ((t3.ACCESS_ROLE_ID IN (?,?)) AND (t2.LINK_ID = t3.LINK_ID))) AND t0.LINK_ID IN (SELECT t4.LINK_ID FROM LINK_ACCESS_ROLE t5, LINK t4 WHERE ((t5.ACCESS_ROLE_ID IN (?,?)) AND (t4.LINK_ID = t5.LINK_ID)))) AND (t0.LINK_ID = t1.PARENT_LINK_ID))
bind => [4 parameters bound]
SELECT LINK_ID, CREATED_BY_ID, CREATED_DATE, LAST_MODIFIED_BY_ID, LAST_MODIFIED_DATE, NAME, UI_SREF, PARENT_LINK_ID FROM LINK WHERE (PARENT_LINK_ID = ?)
bind => [1 parameter bound]
SELECT LINK_ID, CREATED_BY_ID, CREATED_DATE, LAST_MODIFIED_BY_ID, LAST_MODIFIED_DATE, NAME, UI_SREF, PARENT_LINK_ID FROM LINK WHERE (PARENT_LINK_ID = ?)
bind => [1 parameter bound]
I get one query for each child related to the fetched parent, regardless of whether it has the access role or not.
I want to fetch the parents and only the children that have the access role, not all children related to that parent.
The only way that you can fetch a parent entity and have one of its collections populated with a subset of entries based on some criteria is by using Hibernate's proprietary filters.
I'm not certain whether the other JPA providers provide some proprietary solution either, but JPA itself doesn't offer this directly.
You first need to register a filter definition using @FilterDef, and then reference it with @Filter on your collection property.
The hard part here is that you can't rely on Spring Data's @Query or its repository implementation generation process. You will need a custom repository implementation so that you can manually enable this Hibernate filter before you query the parent entity:
Filter filter = session.enableFilter( "link-with-restrictions-by-roles" );
filter.setParameter( "roles", yourRolesList );
return session.createQuery( ... ).getResultList();
The documentation describes the use of @Filter and @FilterDef in detail. You can also find another post of mine with slightly more implementation details here.
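As a sketch of what the mapping could look like (the filter name, parameter type, and SQL condition are assumptions based on the tables above, not a tested implementation):

```java
@Entity
@Table(name = "LINK")
@FilterDef(name = "link-with-restrictions-by-roles",
        parameters = @ParamDef(name = "roles", type = "integer"))
public class Link extends AbstractAuditableEntity<Integer> {

    @OneToMany(mappedBy = "parentLink", fetch = FetchType.EAGER)
    // Only children that have one of the given access roles are loaded
    @Filter(name = "link-with-restrictions-by-roles",
            condition = "LINK_ID in (select lar.LINK_ID from LINK_ACCESS_ROLE lar "
                    + "where lar.ACCESS_ROLE_ID in (:roles))")
    private List<Link> childLinks;

    // ... other fields as before
}
```

For a collection-valued parameter like roles, use filter.setParameterList("roles", yourRolesList) rather than setParameter when enabling the filter.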
Is there any existing utility to do a DB insert in a better/faster way?
This is what I'm using now (there are a lot of fields; I truncated the field list):
public void insert(Ing ing) {
    String[] fields = new String[]{"field1", "field2", "field3"};
    Object[] params = new Object[]{ing.getField1(), ing.getField2(), ing.getField3()};
    String[] paramsPH = new String[fields.length];
    for (int i = 0; i < paramsPH.length; i++) {
        paramsPH[i] = "?";
    }
    String sql = "INSERT INTO ing(" + StringUtils.join(fields, ",") + ") VALUES (" + StringUtils.join(paramsPH, ",") + ");";
    getJdbcTemplate().update(sql, params);
}
Check this:
import java.util.LinkedHashMap;

import org.apache.commons.lang3.StringUtils;
import org.springframework.jdbc.core.JdbcTemplate;

JdbcTemplate jt = ...; // some instance
String tableName = "nameDateTable"; // your happy table
LinkedHashMap<String, Object> map = new LinkedHashMap<String, Object>();
map.put("col1Name", "blabla");  // column name and value
map.put("dateAdd", new Date()); // column name and value
// etc.
// You can place any map here (LinkedHashMap!). Here is a magical query:
String sql = "INSERT INTO " + tableName + " (\"" + StringUtils.join(map.keySet(), "\",\"") + "\") VALUES (" + StringUtils.repeat("?", ",", map.size()) + ");";
jt.update(sql, map.values().toArray());
The most important parts of this solution are
String sql = "INSERT INTO "+tableName+" (\""+StringUtils.join(map.keySet(), "\",\"")+"\") VALUES ("+StringUtils.repeat("?", ",", map.size())+");";
jt.update(sql, map.values().toArray());
and the LinkedHashMap, which preserves insertion order so that the column names and the values from map.values() line up.
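The same SQL assembly can be sketched with plain JDK string joining (no commons-lang); again, a LinkedHashMap keeps the column list and the later map.values().toArray() in the same order:

```java
import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class InsertSqlBuilder {

    // Builds a parameterized INSERT from column-name -> value pairs.
    public static String buildInsert(String table, Map<String, Object> cols) {
        // Quote each column name: col -> "col"
        String columns = cols.keySet().stream()
                .map(c -> "\"" + c + "\"")
                .collect(Collectors.joining(","));
        // One "?" placeholder per column
        String placeholders = cols.keySet().stream()
                .map(c -> "?")
                .collect(Collectors.joining(","));
        return "INSERT INTO " + table + " (" + columns + ") VALUES (" + placeholders + ")";
    }

    public static void main(String[] args) {
        Map<String, Object> map = new LinkedHashMap<>();
        map.put("col1Name", "blabla");
        map.put("dateAdd", new Date());
        System.out.println(buildInsert("nameDateTable", map));
        // INSERT INTO nameDateTable ("col1Name","dateAdd") VALUES (?,?)
    }
}
```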
In my Spring JdbcTemplate projects, I usually create a generic BaseDao<T> class that has a saveObject(T obj) method.
To achieve this, I use SimpleJdbcInsert like this:
// Constants, from the BaseDAO interface that this method implements
String TABLE_NAME = "tableName";
String GENERATED_KEY = "generatedKey";

/**
 * Saves an object using a {@link BaseObjectMapper} returned from {@link #getObjectMapper()}.
 * Returns the generated key if the map produced by the {@link BaseObjectMapper} contains an entry for {@value #GENERATED_KEY}.
 * @param obj the object to be saved
 */
@Override
public int saveObject(T obj) {
    MapSqlParameterSource params = new MapSqlParameterSource();
    // the mapper must transform an object to a map
    // and add the table name where to insert and, if any, a generated key
    Map<String, Object> paramsMap = getObjectMapper().mapObject(obj);
    String table = (String) paramsMap.remove(TABLE_NAME);
    if (table == null) {
        throw new IllegalArgumentException("The ObjectMapper of " + obj.getClass() + " must return the table name among the result map of mapObject");
    }
    String generatedKey = (String) paramsMap.remove(GENERATED_KEY);
    String[] colNames = paramsMap.keySet().toArray(new String[paramsMap.keySet().size()]);
    for (String col : colNames) {
        params.addValue(col, paramsMap.get(col));
    }
    // You can keep it as a class attribute and create it once the DAO is instantiated
    SimpleJdbcInsert genericJdbcInsert = new SimpleJdbcInsert(jdbcInsert.getJdbcTemplate().getDataSource())
            .withSchemaName(currentSchema).withTableName(table)
            .usingColumns(colNames);
    if (generatedKey != null) {
        genericJdbcInsert = genericJdbcInsert.usingGeneratedKeyColumns(generatedKey);
        return genericJdbcInsert.executeAndReturnKey(paramsMap).intValue();
    } else {
        genericJdbcInsert.execute(params);
    }
    return 0;
}

protected BaseObjectMapper<T> getObjectMapper() {
    // Implement it in your concrete DAO classes
    throw new UnsupportedOperationException("You must implement this method in your concrete DAO implementation");
}
Here is the BaseObjectMapper interface:
import java.util.Map;

import org.springframework.jdbc.core.RowMapper;

import com.atlasaas.ws.dao.BaseDao;
import com.atlasaas.ws.entities.BaseEntity;

public interface BaseObjectMapper<T extends BaseEntity> extends RowMapper<T> {

    /**
     * Method to transform an object into a {@link Map}.
     * The result map must contain all columns to be inserted as keys.
     * It must also contain the table name corresponding to the given object,
     * associated with the key {@link BaseDao#TABLE_NAME}.
     * Optionally, if you want your save methods to return a generated primary key value,
     * include an entry referencing the generated column name, associated with the
     * key {@link BaseDao#GENERATED_KEY}.
     * @param obj the object to be transformed
     * @return the result of this object's transformation
     */
    Map<String, Object> mapObject(T obj);
}
If you really want to use SQL in your code, you can use:
org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations#update(String sql, SqlParameterSource paramSource)
where your SQL string would be something like this:
insert into SOME_TABLE(COL1,COL2,COL3) values (:col1Val,:col2Val,:col3Val)
and your SqlParameterSource is built this way:
MapSqlParameterSource params = new MapSqlParameterSource();
params.addValue("col1Val", val1);
params.addValue("col2Val", val2);
params.addValue("col3Val", val3);
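Put together, a minimal sketch (variable names are placeholders; assumes a configured DataSource):

```java
NamedParameterJdbcTemplate template = new NamedParameterJdbcTemplate(dataSource);
String sql = "insert into SOME_TABLE(COL1,COL2,COL3) values (:col1Val,:col2Val,:col3Val)";
MapSqlParameterSource params = new MapSqlParameterSource()
        .addValue("col1Val", val1)
        .addValue("col2Val", val2)
        .addValue("col3Val", val3);
template.update(sql, params);
```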
I hope this helps
You can use parameterized SQL to make it a bit simpler.
Your code would look something like this:
String sql = "INSERT INTO ing(field1, field2, field3) values(?, ?, ?)";
Object[] params=new Object[]{ing.getField1(),ing.getField2(),ing.getField3()};
getJdbcTemplate().update(sql,params);
I have worked with both the RowMapper and ResultSetExtractor callback interfaces. The difference I found:
1. RowMapper processes one row at a time, while with ResultSetExtractor we can navigate all rows, and the return type is a single object.
Are there any other differences? How does RowMapper work internally, and why is its return type a list?
The basic difference is that with ResultSetExtractor you need to iterate through the result set yourself, say in a while loop.
This interface lets you process the entire ResultSet at once; the implementation of the interface method extractData(ResultSet rs) contains that manual iteration code.
See an implementation of ResultSetExtractor.
With some callback handlers like RowCallbackHandler, by contrast, the interface method processRow(ResultSet rs) loops for you.
RowMapper can be used both ways: for mapping each row individually, or all rows at once.
For all rows as a List (via the template method jdbcTemplate.query()):
public List findAll() {
    String sql = "SELECT * FROM EMPLOYEE";
    return jdbcTemplate.query(sql, new EmployeeRowMapper());
}

This works without casting.
For an individual object (with the template method jdbcTemplate.queryForObject()):
@SuppressWarnings({ "unchecked", "rawtypes" })
public Employee findById(int id) {
    String sql = "SELECT * FROM EMPLOYEE WHERE ID = ?";
    // jdbcTemplate = new JdbcTemplate(dataSource);
    Employee employee = (Employee) jdbcTemplate.queryForObject(sql, new EmployeeRowMapper(), id);
    // Method 2, very easy:
    // Employee employee = (Employee) jdbcTemplate.queryForObject(sql, new Object[] { id }, new BeanPropertyRowMapper(Employee.class));
    return employee;
}

@SuppressWarnings("rawtypes")
public class EmployeeRowMapper implements RowMapper {
    public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
        Employee employee = new Employee();
        employee.setId(rs.getInt("ID"));
        employee.setName(rs.getString("NAME"));
        employee.setAge(rs.getInt("AGE"));
        return employee;
    }
}
Best use cases:
RowMapper: When each row of a ResultSet maps to a domain object; can be implemented as a private inner class.
RowCallbackHandler: When no value is returned from the callback method for each row, e.g. writing rows to a file, converting rows to XML, or filtering rows before adding them to a collection. Very efficient, as no ResultSet-to-object mapping is done here.
ResultSetExtractor: When multiple rows of a ResultSet map to a single object. For example, when doing complex joins in a query you may need access to the entire ResultSet, instead of a single row, to build a complex object, and you want full control of the ResultSet. Think of mapping the rows returned from a join of TABLE1 and TABLE2 to a fully reconstituted aggregate.
ParameterizedRowMapper is used to create complex objects
JavaDoc of ResultSetExtractor:
This interface is mainly used within the JDBC framework itself. A RowMapper is usually a simpler choice for ResultSet processing, mapping one result object per row instead of one result object for the entire ResultSet.
A ResultSetExtractor is supposed to extract the whole ResultSet (possibly multiple rows), while a RowMapper is fed one row at a time.
Most of the time, a ResultSetExtractor will loop over the ResultSet and delegate to a RowMapper; here is a snippet from Spring's RowMapperResultSetExtractor:
List<T> results = (this.rowsExpected > 0 ? new ArrayList<T>(this.rowsExpected) : new ArrayList<T>());
int rowNum = 0;
while (rs.next()) {
    results.add(this.rowMapper.mapRow(rs, rowNum++));
}
return results;
Note that ALL results are transformed and held in memory; this can cause an OutOfMemoryError.
See also
RowMapperResultSetExtractor
RowMapper: To process one record of ResultSet at a time.
ResultSetExtractor: To process multiple records of ResultSet at a time.
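A minimal side-by-side sketch of the two (Spring JDBC lambdas; the table and column names are made up):

```java
// RowMapper: one result object per row
List<String> names = jdbcTemplate.query(
        "SELECT NAME FROM EMPLOYEE",
        (rs, rowNum) -> rs.getString("NAME"));

// ResultSetExtractor: one result object for the whole ResultSet
Map<Integer, String> namesById = jdbcTemplate.query(
        "SELECT ID, NAME FROM EMPLOYEE",
        rs -> {
            Map<Integer, String> result = new HashMap<>();
            while (rs.next()) {
                result.put(rs.getInt("ID"), rs.getString("NAME"));
            }
            return result;
        });
```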
I think one place where a ResultSetExtractor could be advantageous is when you have a result set (like from a call to a stored procedure) and a row mapper, and want to process them like is done under the covers in the jdbcTemplate methods, such as query(String sql, RowMapper rowMapper). In this case you can save yourself from having to manually iterate over the result set by using the ResultSetExtractor instead of just the RowMapper.
For example:
RowMapper
ResultSet resultSet = cs.executeQuery();
int row = 0;
DateRowMapper dateRowMapper = new DateRowMapper();
List<String> dates = new ArrayList<>();
while (resultSet.next()) {
    dates.add(dateRowMapper.mapRow(resultSet, ++row));
}
return dates;
ResultSetExtractor
ResultSet resultSet = callableStatement.executeQuery();
return new RowMapperResultSetExtractor<>(new DateRowMapper()).extractData(resultSet);