Is there any existing utility to do a DB insert in a better/faster way?
This is what I'm using now (there are a lot of fields; I truncated the field list):
public void insert(Ing ing) {
    String[] fields = new String[]{"field1", "field2", "field3"};
    Object[] params = new Object[]{ing.getField1(), ing.getField2(), ing.getField3()};
    String[] paramsPH = new String[fields.length];
    for (int i = 0; i < paramsPH.length; i++) paramsPH[i] = "?";
    String sql = "INSERT INTO ing (" + StringUtils.join(fields, ",") + ") VALUES (" + StringUtils.join(paramsPH, ",") + ");";
    getJdbcTemplate().update(sql, params);
}
Check this:

import java.util.Date;
import java.util.LinkedHashMap;
import org.apache.commons.lang3.StringUtils;
import org.springframework.jdbc.core.JdbcTemplate;

JdbcTemplate jt = ...; // some JdbcTemplate instance
String tableName = "nameDateTable"; // your happy table
LinkedHashMap<String, Object> map = new LinkedHashMap<String, Object>();
map.put("col1Name", "blabla");  // column name and value
map.put("dateAdd", new Date()); // column name and value
// etc.
// You can place any map here (a LinkedHashMap, so iteration order matches insertion order!). Here is a magical query:
String sql = "INSERT INTO " + tableName + " (\"" + StringUtils.join(map.keySet(), "\",\"") + "\") VALUES (" + StringUtils.repeat("?", ",", map.size()) + ");";
jt.update(sql, map.values().toArray());
The most important parts of this solution are

String sql = "INSERT INTO " + tableName + " (\"" + StringUtils.join(map.keySet(), "\",\"") + "\") VALUES (" + StringUtils.repeat("?", ",", map.size()) + ");";
jt.update(sql, map.values().toArray());

and the LinkedHashMap, which guarantees that the column names and the values are iterated in the same order.
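If you would rather not depend on commons-lang, the query-building part of this trick can be sketched in plain Java; the class and method names below are made up for illustration, only the generated SQL shape matters:

```java
import java.util.Collections;
import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertSqlBuilder {

    // Builds INSERT INTO table ("col1","col2",...) VALUES (?,?,...)
    // from a LinkedHashMap, so keySet() and values() stay in the same order.
    public static String buildInsertSql(String tableName, Map<String, Object> columns) {
        String cols = "\"" + String.join("\",\"", columns.keySet()) + "\"";
        String placeholders = String.join(",", Collections.nCopies(columns.size(), "?"));
        return "INSERT INTO " + tableName + " (" + cols + ") VALUES (" + placeholders + ");";
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Object> map = new LinkedHashMap<String, Object>();
        map.put("col1Name", "blabla");
        map.put("dateAdd", new Date());
        String sql = buildInsertSql("nameDateTable", map);
        System.out.println(sql);
        // INSERT INTO nameDateTable ("col1Name","dateAdd") VALUES (?,?);
        // The matching parameter array for jt.update(sql, ...) is map.values().toArray()
    }
}
```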
In my Spring JdbcTemplate projects, I usually create a generic BaseDao<T> class that has a method saveObject(T obj).
To achieve this, I use SimpleJdbcInsert like this:
// Constants, from the BaseDAO interface that this method implements
String TABLE_NAME = "tableName";
String GENERATED_KEY = "generatedKey";

/**
 * Save an object using a {@link BaseObjectMapper} returned from the method {@link #getObjectMapper()}.
 * Returns the generated key if the map generated by the {@link BaseObjectMapper} contains an entry for {@value #GENERATED_KEY}.
 * @param obj the object to be saved
 */
@Override
public int saveObject(T obj) {
    MapSqlParameterSource params = new MapSqlParameterSource();
    // the mapper must transform an object to a map
    // and add the table name where to insert, and, if any, a generated key
    Map<String, Object> paramsMap = getObjectMapper().mapObject(obj);
    String table = (String) paramsMap.remove(TABLE_NAME);
    if (table == null) {
        throw new IllegalArgumentException("The ObjectMapper of " + obj.getClass() + " must return the table name among the result map of the mapObject method");
    }
    String generatedKey = (String) paramsMap.remove(GENERATED_KEY);
    String[] colNames = paramsMap.keySet().toArray(new String[paramsMap.keySet().size()]);
    for (String col : colNames) {
        params.addValue(col, paramsMap.get(col));
    }
    // You can have it as a class attribute and create it once, when the DAO is instantiated
    SimpleJdbcInsert genericJdbcInsert = new SimpleJdbcInsert(jdbcInsert.getJdbcTemplate().getDataSource())
            .withSchemaName(currentSchema).withTableName(table)
            .usingColumns(colNames);
    if (generatedKey != null) {
        genericJdbcInsert = genericJdbcInsert.usingGeneratedKeyColumns(generatedKey);
        return genericJdbcInsert.executeAndReturnKey(paramsMap).intValue();
    } else {
        genericJdbcInsert.execute(params);
    }
    return 0;
}

protected BaseObjectMapper<T> getObjectMapper() {
    // Implement it in your concrete DAO classes
    throw new UnsupportedOperationException("You must implement this method in your concrete DAO implementation");
}
Here is the BaseObjectMapper interface:
import java.util.Map;
import org.springframework.jdbc.core.RowMapper;
import com.atlasaas.ws.dao.BaseDao;
import com.atlasaas.ws.entities.BaseEntity;

public interface BaseObjectMapper<T extends BaseEntity> extends RowMapper<T> {
    /**
     * Method to transform an object into a {@link Map}.
     * The result map must contain all columns to be inserted as keys.
     * It also must contain the table name corresponding to the given object.
     * The table name must be associated to the key of value {@link BaseDao#TABLE_NAME}.
     * Optionally, if you want your save methods to return a generated primary key value,
     * you should include an entry referencing the generated column name. This entry
     * must then be associated to the key of value {@link BaseDao#GENERATED_KEY}.
     * @param obj The object to be transformed
     * @return the result of this object transformation
     */
    Map<String, Object> mapObject(T obj);
}
If you really want to use SQL in your code, you can use:
org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations#update(String sql, SqlParameterSource paramSource)
where your SQL string would be something like this:
insert into SOME_TABLE(COL1,COL2,COL3) values (:col1Val,:col2Val,:col3Val)
and your SqlParameterSource is built this way:
MapSqlParameterSource params = new MapSqlParameterSource();
params.addValue("col1Val", val1);
params.addValue("col2Val", val2);
params.addValue("col3Val", val3);
I hope this helps
You can use parameterized SQL to make it a bit simpler.
Your code would look something like this:
String sql = "INSERT INTO ing (field1, field2, field3) VALUES (?, ?, ?)";
Object[] params = new Object[]{ing.getField1(), ing.getField2(), ing.getField3()};
getJdbcTemplate().update(sql, params);
I have a test case which tries to query Child entity changes by its instanceId, and it throws an exception:
@TypeName("EntityOne")
class EntityOne {
    @Id int id
    String name
    List<EntityTwo> entityTwos

    EntityOne(int id, String name, List<EntityTwo> entityTwos) {
        this.id = id
        this.name = name
        this.entityTwos = entityTwos
    }
}

@TypeName("EntityTwo")
class EntityTwo {
    @Id int id
    String name
    @Id int entityOneId

    EntityTwo(int id, String name, int entityOneId) {
        this.id = id
        this.name = name
        this.entityOneId = entityOneId
    }
}
These are the audited data:
oldOne = new EntityOne(1, "EntityOne", [new EntityTwo(1, "EntityTwo",1)])
newOne = new EntityOne(1, "EntityOne", [new EntityTwo(1, "EntityTwo",1),
new EntityTwo(2, "EntityTwoOne",1)])
This is the query throwing the exception:
entityTwoChanges = javers.findChanges(QueryBuilder.byInstanceId(1, EntityTwo) // Error is thrown
.withNewObjectChanges()
.withChildValueObjects()
.build())
Exception:
java.lang.Integer cannot be cast to java.util.Map
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.util.Map
at org.javers.core.metamodel.type.InstanceIdFactory.dehydratedLocalId(InstanceIdFactory.java:48)
at org.javers.core.metamodel.type.InstanceIdFactory.create(InstanceIdFactory.java:22)
at org.javers.core.metamodel.type.EntityType.createIdFromInstanceId(EntityType.java:127)
at org.javers.core.metamodel.object.GlobalIdFactory.createInstanceId(GlobalIdFactory.java:115)
at org.javers.core.metamodel.object.GlobalIdFactory.createFromDto(GlobalIdFactory.java:127)
at org.javers.repository.jql.FilterDefinition$IdFilterDefinition.compile(FilterDefinition.java:27)
at org.javers.repository.jql.JqlQuery.compile(JqlQuery.java:120)
at org.javers.repository.jql.QueryCompiler.compile(QueryCompiler.java:16)
at org.javers.repository.jql.ChangesQueryRunner.queryForChanges(ChangesQueryRunner.java:20)
at org.javers.repository.jql.QueryRunner.queryForChanges(QueryRunner.java:48)
at org.javers.core.JaversCore.findChanges(JaversCore.java:196)
at com.example.CaseQueryByCompositeKey.should able to query audit changes by composite key(CaseQueryByCompositeKey.groovy:60)
Also, is there a way to query by a composite key in JaVers?
It worked after passing the instanceId as a map:
entityTwoChanges = javers.findChanges(QueryBuilder.byInstanceId([id: 1, entityOneId: 1], EntityTwo)
.withNewObjectChanges()
.withChildValueObjects()
.build())
It's explicitly shown in the javadoc
/**
 * Query for selecting Changes, Snapshots or Shadows for a given Entity instance.
 * <br/><br/>
 *
 * For example, last Changes on "bob" Person:
 * <pre>
 * javers.findChanges( QueryBuilder.byInstanceId("bob", Person.class).build() );
 * </pre>
 *
 * @param localId Value of an Id-property. When an Entity has Composite-Id (more than one Id-property) —
 *        <code>localId</code> should be <code>Map&lt;String, Object&gt;</code> with
 *        Id-property name to value pairs.
 * @see CompositeIdExample.groovy
 */
public static QueryBuilder byInstanceId(Object localId, Class entityClass){
Validate.argumentsAreNotNull(localId, entityClass);
return new QueryBuilder(new IdFilterDefinition(instanceId(localId, entityClass)));
}
Try to read the javadoc first, before asking questions about a method.
I have a POJO class which contains some fields like the ones below, and each field contains some value.
How can I insert each field into a different row as a key/value pair?
private String hostName;
private String sharedAccessKey;
private String azureMongoDbUri;
private String sharedAccessKeyName;
private String azureDatabase;
private String azureCollection;
The target table layout would be:

property_key    | property_value
----------------+---------------------------
hostName        | pojo.getHostName()
sharedAccessKey | pojo.getSharedAccessKey()

and so on.
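One way to produce those rows generically is plain reflection; each {field name, field value} pair can then be handed to JdbcTemplate.batchUpdate with a two-placeholder INSERT. A minimal sketch, with a hypothetical POJO and table name:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class PojoToRows {

    // Hypothetical POJO mirroring a couple of the fields in the question
    public static class AzureConfig {
        private String hostName = "myhost";
        private String sharedAccessKey = "secret";
    }

    // Turns each declared field of the POJO into a {property_key, property_value} row
    public static List<Object[]> toRows(Object pojo) {
        List<Object[]> rows = new ArrayList<Object[]>();
        for (Field f : pojo.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            try {
                rows.add(new Object[] { f.getName(), f.get(pojo) });
            } catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
        return rows;
    }

    public static void main(String[] args) {
        List<Object[]> rows = toRows(new AzureConfig());
        for (Object[] row : rows) {
            System.out.println(row[0] + " | " + row[1]);
        }
        // With a JdbcTemplate at hand (table name "props" is an assumption):
        // jdbcTemplate.batchUpdate("INSERT INTO props (property_key, property_value) VALUES (?, ?)", rows);
    }
}
```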
This is an example of how to insert a record to the database using the JdbcTemplate class provided by the Spring Framework. The JdbcTemplate class is the central class in the JDBC core package. It simplifies the use of JDBC and helps to avoid common errors. It executes core JDBC workflow, leaving application code to provide SQL and extract results. This class executes SQL queries or updates, initiating iteration over ResultSets and catching JDBC exceptions. Inserting a record to the database with JdbcTemplate class implies that you should:
Use the DataSource class, a utility class that provides a connection to the database. It is part of the JDBC specification and allows a container or a framework to hide connection-pooling and transaction-management issues from the application code. We implement it using org.springframework.jdbc.datasource.DriverManagerDataSource. Set the credentials needed on the datasource, using the inherited setPassword(String password), setUrl(String url) and setUsername(String username) API methods of the AbstractDriverBasedDataSource class, as well as the setDriverClassName(String driverClassName) API method of DriverManagerDataSource. Create a new DataSource object with the above configuration. Here, in the getDataSource() method, we create and configure a new DataSource.
Create a new JdbcTemplate object, with the given datasource to obtain connections from.
Use the update(String sql, Object[] args, int[] argTypes) API method of JdbcTemplate to issue the SQL insert operation via a prepared statement, binding the given arguments. The given parameters are the String containing the SQL query, the arguments to bind to the query, and the types of the arguments. It returns the number of rows processed by the executed query.
Let’s take a look at the code snippet that follows:
package com.javacodegeeks.snippets.enterprise;
import java.sql.Types;
import java.util.Date;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
public class InsertRecordInDatabaseWithJdbcTemplate {
private static final String driverClassName = "com.mysql.jdbc.Driver";
private static final String url = "jdbc:mysql://localhost/companydb";
private static final String dbUsername = "jcg";
private static final String dbPassword = "jcg";
private static final String insertSql =
"INSERT INTO employee (" +
" name, " +
" surname, " +
" title, " +
" created) " +
"VALUES (?, ?, ?, ?)";
private static DataSource dataSource;
public static void main(String[] args) throws Exception {
dataSource = getDataSource();
saveRecord("John", "Black", "Software developer", new Date());
saveRecord("Tom", "Green", "Project Manager", new Date());
}
public static void saveRecord(String name, String surname, String title, Date created) {
JdbcTemplate template = new JdbcTemplate(dataSource);
// define query arguments
Object[] params = new Object[] { name, surname, title, created };
// define SQL types of the arguments
int[] types = new int[] { Types.VARCHAR, Types.VARCHAR, Types.VARCHAR, Types.TIMESTAMP };
// execute insert query to insert the data
// return number of row / rows processed by the executed query
int row = template.update(insertSql, params, types);
System.out.println(row + " row inserted.");
}
public static DriverManagerDataSource getDataSource() {
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName(driverClassName);
dataSource.setUrl(url);
dataSource.setUsername(dbUsername);
dataSource.setPassword(dbPassword);
return dataSource;
}
}
First experiments with Spring Data and MongoDB were great. Now I've got the following structure (simplified):
public class Letter {
    @Id
    private String id;
    private List<Section> sections;
}

public class Section {
    private String id;
    private String content;
}
Loading and saving entire Letter objects/documents works like a charm. (I use ObjectId to generate unique IDs for the Section.id field.)
Letter letter1 = mongoTemplate.findById(id, Letter.class);
mongoTemplate.insert(letter2);
mongoTemplate.save(letter3);
As documents are big (200K) and sometimes only sub-parts are needed by the application: Is there a possibility to query for a sub-document (section), modify and save it?
I'd like to implement a method like
Section s = findLetterSection(letterId, sectionId);
s.setText("blubb");
replaceLetterSection(letterId, sectionId, s);
And of course methods like:
addLetterSection(letterId, s); // add after last section
insertLetterSection(letterId, sectionId, s); // insert before given section
deleteLetterSection(letterId, sectionId); // delete given section
I see that the last three methods are somewhat "strange", i.e. loading the entire document, modifying the collection and saving it again may be the better approach from an object-oriented point of view; but the first use case ("navigating" to a sub-document/sub-object and working in the scope of this object) seems natural.
I think MongoDB can update sub-documents, but can SpringData be used for object mapping? Thanks for any pointers.
I figured out the following approach for slicing and loading only one subobject. Does it seem ok? I am aware of problems with concurrent modifications.
Query query1 = Query.query(Criteria.where("_id").is(instance));
query1.fields().include("sections._id");
LetterInstance letter1 = mongoTemplate.findOne(query1, LetterInstance.class);
LetterSection emptySection = letter1.findSectionById(sectionId);
int index = letter1.getSections().indexOf(emptySection);
Query query2 = Query.query(Criteria.where("_id").is(instance));
query2.fields().include("sections").slice("sections", index, 1);
LetterInstance letter2 = mongoTemplate.findOne(query2, LetterInstance.class);
LetterSection section = letter2.getSections().get(0);
This is an alternative solution loading all sections, but omitting the other (large) fields.
Query query = Query.query(Criteria.where("_id").is(instance));
query.fields().include("sections");
LetterInstance letter = mongoTemplate.findOne(query, LetterInstance.class);
LetterSection section = letter.findSectionById(sectionId);
This is the code I use for storing only a single collection element:
MongoConverter converter = mongoTemplate.getConverter();
DBObject newSectionRec = (DBObject)converter.convertToMongoType(newSection);
Query query = Query.query(Criteria.where("_id").is(instance).and("sections._id").is(new ObjectId(newSection.getSectionId())));
Update update = new Update().set("sections.$", newSectionRec);
mongoTemplate.updateFirst(query, update, LetterInstance.class);
It is nice to see how Spring Data can be used with "partial results" from MongoDB.
Any comments highly appreciated!
I think Matthias Wuttke's answer is great; for anyone looking for a generic version of it, see the code below:
@Service
public class MongoUtils {

    @Autowired
    private MongoTemplate mongo;

    public <D, N extends Domain> N findNestedDocument(Class<D> docClass, String collectionName, UUID outerId, UUID innerId,
            Function<D, List<N>> collectionGetter) {
        // get index of subdocument in array
        Query query = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query.fields().include(collectionName + "._id");
        D obj = mongo.findOne(query, docClass);
        if (obj == null) {
            return null;
        }
        List<UUID> itemIds = collectionGetter.apply(obj).stream().map(N::getId).collect(Collectors.toList());
        int index = itemIds.indexOf(innerId);
        if (index == -1) {
            return null;
        }
        // retrieve subdocument at index using slice operator
        Query query2 = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query2.fields().include(collectionName).slice(collectionName, index, 1);
        D obj2 = mongo.findOne(query2, docClass);
        if (obj2 == null) {
            return null;
        }
        return collectionGetter.apply(obj2).get(0);
    }

    public void removeNestedDocument(UUID outerId, UUID innerId, String collectionName, Class<?> outerClass) {
        Update update = new Update();
        update.pull(collectionName, new Query(Criteria.where("_id").is(innerId)));
        mongo.updateFirst(new Query(Criteria.where("_id").is(outerId)), update, outerClass);
    }
}
This could for example be called using
mongoUtils.findNestedDocument(Shop.class, "items", shopId, itemId, Shop::getItems);
mongoUtils.removeNestedDocument(shopId, itemId, "items", Shop.class);
The Domain interface looks like this:
public interface Domain {
UUID getId();
}
Note: if the nested document's constructor has parameters with primitive datatypes, it is important for the nested document to also have a default (empty) constructor, which may be protected, so that the class can be instantiated with null arguments.
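This constraint can be illustrated with plain reflection, which approximates what the mapping layer does when it instantiates a document before populating it; the Item class below is hypothetical:

```java
import java.lang.reflect.Constructor;

public class NestedDocDemo {

    public static class Item {
        private int quantity;   // primitive: cannot be assigned null
        private String name;

        protected Item() { }    // default constructor a mapper can fall back on

        public Item(int quantity, String name) {
            this.quantity = quantity;
            this.name = name;
        }

        public int getQuantity() { return quantity; }
    }

    // Instantiate Item via its protected no-arg constructor, as a mapper would.
    // Without it, a mapper would have to call new Item(null, null), which
    // fails for the primitive int parameter.
    public static int defaultQuantity() {
        try {
            Constructor<Item> ctor = Item.class.getDeclaredConstructor();
            ctor.setAccessible(true);
            return ctor.newInstance().getQuantity();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(defaultQuantity()); // primitive field defaults to 0
    }
}
```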
Solution
That's my solution for this problem.
The object to be updated:
@Getter
@Setter
@Document(collection = "projectchild")
public class ProjectChild {
    @Id
    private String _id;
    private String name;
    private String code;
    @Field("desc")
    private String description;
    private String startDate;
    private String endDate;
    @Field("cost")
    private long estimatedCost;
    private List<String> countryList;
    private List<Task> tasks;
    @Version
    private Long version;
}
Coding the Solution
public Mono<ProjectChild> UpdateCritTemplChild(
String id, String idch, String ownername) {
Query query = new Query();
query.addCriteria(Criteria.where("_id")
.is(id)); // find the parent
query.addCriteria(Criteria.where("tasks._id")
.is(idch)); // find the child which will be changed
Update update = new Update();
update.set("tasks.$.ownername", ownername); // change the field inside the child that must be updated
return template
// findAndModify:
// Find/modify/get the "new object" from a single operation.
.findAndModify(
query, update,
new FindAndModifyOptions().returnNew(true), ProjectChild.class
)
;
}
Can anyone help me with an example of ColumnMapRowMapper? How is it used?
I've written an answer in my blog, http://selvam2day.blogspot.com/2013/06/singlecolumnrowmapper.html, but here it is for your convenience below:
SingleColumnRowMapper & ColumnMapRowMapper examples in Spring
Spring JDBC includes two default implementations of RowMapper - SingleColumnRowMapper and ColumnMapRowMapper. Below are sample usages of those row mappers.
There are lots of situations when you just want to select one column or only a selected set of columns in your application, and to write custom row mapper implementations for these scenarios doesn't seem right. In these scenarios, we can make use of the spring-provided row mapper implementations.
SingleColumnRowMapper
This class implements the RowMapper interface. As the name suggests, it can be used to retrieve a single column from the database as a java.util.List; the list contains the column values, one per row.
In the code snippet below, the type of the result value for each row is specified by the constructor argument. It can also be specified by invoking the setRequiredType(Class<T> requiredType) method.
public List<String> getFirstName(int userID)
{
    String sql = "select firstname from users where user_id = ?";
    SingleColumnRowMapper<String> rowMapper = new SingleColumnRowMapper<String>(String.class);
    List<String> firstNameList = getJdbcTemplate().query(sql, rowMapper, userID);
    for (String firstName : firstNameList)
        System.out.println(firstName);
    return firstNameList;
}
More information on the class and its methods can be found in the spring javadoc link below.
http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/jdbc/core/SingleColumnRowMapper.html
ColumnMapRowMapper
ColumnMapRowMapper class can be used to retrieve more than one column from a database table. This class also implements the RowMapper interface. This class creates a java.util.Map for each row, representing all columns as key-value pairs: one entry for each column, with the column name as key.
public List<Map<String, Object>> getUserData(int userID)
{
String sql = "select firstname, lastname, dept from users where userID = ? ";
ColumnMapRowMapper rowMapper = new ColumnMapRowMapper();
List<Map<String, Object>> userDataList = getJdbcTemplate().query(sql, rowMapper, userID);
for(Map<String, Object> map: userDataList){
System.out.println("FirstName = " + map.get("firstname"));
System.out.println("LastName = " + map.get("lastname"));
System.out.println("Department = " + map.get("dept"));
}
return userDataList;
}
More information on the class and its methods can be found in the spring javadoc link below.
http://static.springsource.org/spring/docs/3.0.x/javadoc-api/org/springframework/jdbc/core/ColumnMapRowMapper.html
I have an Oracle XMLType column that stores the various language specific strings. I need to construct a Hibernate criteria that orders on this column. In order to do this, I need to extract the value with an Oracle function. This criteria is generated automatically by code I have written but I cannot, for the life of me, figure out how to extract the value and order on it via the criteria API. Basically, the generated SQL should look something like:
SELECT EXTRACTVALUE(title, '//value[@lang="EN"]') AS enTitle
FROM domain_object
ORDER BY enTitle
I fiddled with projections momentarily, but they appear to execute a second select. Which I assume would cause Hibernate to select ALL values and sort them in memory based on the projection? That would be very undesirable =\
OK, I found a solution. Not sure this is the best, so I will leave it open for a little while in case someone wants to provide a better answer / refine my solution.
What I did was extend org.hibernate.criterion.Order thusly:
package com.mycorp.common.hibernate;
import org.hibernate.Criteria;
import org.hibernate.HibernateException;
import org.hibernate.criterion.CriteriaQuery;
import org.hibernate.criterion.Order;
import org.hibernate.engine.SessionFactoryImplementor;
import com.mycorp.LocalizationUtil;
public class LocalStringOrder extends Order {
private static final long serialVersionUID = 1L;
private boolean ascending;
private String propName;
public LocalStringOrder(String prop, boolean asc) {
super(prop, asc);
ascending = asc;
propName = prop;
}
public String toSqlString(Criteria criteria, CriteriaQuery criteriaQuery) throws HibernateException {
String[] columns = criteriaQuery.getColumnsUsingProjection(criteria, propName);
StringBuffer fragment = new StringBuffer();
for ( int i=0; i<columns.length; i++ ) {
SessionFactoryImplementor factory = criteriaQuery.getFactory();
fragment.append( factory.getDialect().getLowercaseFunction() )
.append('(');
fragment.append("EXTRACTVALUE(");
fragment.append( columns[i] );
fragment.append(", '//value[@lang=\"" +
    LocalizationUtil.getPreferedLanguage().name() +
    "\"]')");
fragment.append(')');
fragment.append( ascending ? " asc" : " desc" );
if ( i<columns.length-1 ) fragment.append(", ");
}
return fragment.toString();
}
public static Order asc(String propertyName) {
return new LocalStringOrder(propertyName, true);
}
public static Order desc(String propertyName) {
return new LocalStringOrder(propertyName, false);
}
}
Then it was just a matter of saying criteria.addOrder(LocalStringOrder.asc('prop')).
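As a sanity check, the XPath expression that EXTRACTVALUE evaluates server-side can be exercised in plain Java with the standard javax.xml.xpath API; the XML shape below is a hypothetical example of what the XMLType column might contain:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathDemo {

    // Hypothetical shape of the XMLType column content
    static final String SAMPLE =
            "<title>"
          + "<value lang=\"EN\">Hello</value>"
          + "<value lang=\"DE\">Hallo</value>"
          + "</title>";

    // Evaluates an XPath expression against an XML string and returns
    // the string-value of the first match, as EXTRACTVALUE does in Oracle
    public static String extract(String xml, String expr) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return XPathFactory.newInstance().newXPath().evaluate(expr, doc);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(extract(SAMPLE, "//value[@lang=\"EN\"]")); // Hello
    }
}
```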
Another general solution is NativeSQLOrder, see http://opensource.atlassian.com/projects/hibernate/browse/HHH-2381. I do not understand why this feature is not in Hibernate yet.