Multiple column names in Spring Data

Model
#Column(name="Desc", name="des", name="DS")
private String description;
How can I specify multiple names for a column, so that if any one of them is found, its value is mapped to description?

How can I specify multiple names for a column?
You can't. A database does not allow columns to have multiple names, hence you can't map multiple column names to a class field.
so that if any one of them is found, its value is mapped to description?
If you have multiple stored procedures that return "Desc", "des", or "ds" in their respective result sets, and you need to map them to the same Java class, you need to define different row mappers and describe the mapping there.
For example, let's say you have SP1 and SP2, and you want both of their result sets to be mapped to ResultDto.
ResultDto looks like:
public class ResultDto {
    private String name; // always maps to DB column "name"
    private String desc; // maps to different DB columns - "ds", "desc"
    // omitted...
}
You can define a base row mapper that handles the mapping for all overlapping fields of the stored procedures' result sets.
Code Example:
protected static abstract class BaseRowMapper implements RowMapper<ResultDto> {
    public abstract ResultDto mapRow(ResultSet rs, int rowNum) throws SQLException;

    protected void mapBase(ResultSet rs, ResultDto resultDto) throws SQLException {
        resultDto.setName(rs.getString("name")); // map all overlapping fields/columns here
    }
}
private static class SP1RowMapper extends BaseRowMapper {
    @Override
    public ResultDto mapRow(ResultSet rs, int rowNum) throws SQLException {
        ResultDto resultDto = new ResultDto();
        mapBase(rs, resultDto);
        resultDto.setDescription(rs.getString("ds"));
        return resultDto;
    }
}

private static class SP2RowMapper extends BaseRowMapper {
    @Override
    public ResultDto mapRow(ResultSet rs, int rowNum) throws SQLException {
        ResultDto resultDto = new ResultDto();
        mapBase(rs, resultDto);
        resultDto.setDescription(rs.getString("desc"));
        return resultDto;
    }
}
I don't know how you call the Stored Procedures, but if you use Spring's SimpleJdbcCall, the code will look like:
new SimpleJdbcCall(datasource)
    .withProcedureName("SP NAME")
    .declareParameters(
        // Stored Proc params
    )
    .returningResultSet("result set id", rowMapperInstance);

Related

How to get the type of values from a JDBC query?

I have a Spring Batch job that takes in a user query, executes that query to find the selected items, and then inserts those items into another database. The problem is that I have to convert elements like dates from the resulting query in order to insert them again. How can I tell the type of the values returned from the query?
This is what I use to read the items, which works properly.
#Bean("querySelectiveItems")
#StepScope
public JdbcCursorItemReader querySelectiveItems(#Qualifier("selectiveSourceDatabase") DataSource dataSource,
#Value("#{jobExecutionContext[" + EtlConfiguration.JOB_PARM_MIGRATION_CONFIG + "]}") MigrationDefinition migrationDefinition
) {
JdbcCursorItemReader reader = new JdbcCursorItemReader<>();
reader.setSql(migrationDefinition.getMigrations().getTable().getQuery());
reader.setDataSource(dataSource);
reader.setRowMapper(new ColumnMapRowMapper());
log.info("Queried for items");
return reader;
}
The following is what I wrote to write to the destination database. The problem is that the values I have to insert are unknown because they are the result of a user query. For example, if there is a date value in my insert statement I must put a TO_DATE around it. Is there a way to do this?
@Component
@Lazy
class InsertSelectedItems implements ItemWriter<Map<String, Object>> {
    private MigrationDefinition migrationDefinition;
    private JdbcTemplate destinationTemplate;

    public void setDestinationTemplate(JdbcTemplate destinationTemplate) {
        this.destinationTemplate = destinationTemplate;
    }

    public void setMigrationDefinition(MigrationDefinition migrationDefinition) {
        this.migrationDefinition = migrationDefinition;
    }

    @Override
    public void write(List<? extends Map<String, Object>> items) throws Exception {
        ArrayList<String> columns = new ArrayList<>();
        ArrayList<String> values = new ArrayList<>();
        System.out.println(items);
        for (Map<String, Object> map : items) {
            for (Map.Entry<String, Object> entry : map.entrySet()) {
                columns.add(entry.getKey());
                values.add(String.valueOf(entry.getValue()));
            }
        }
        String sql = String.format("%s ( %s ) VALUES ( %s ) ",
                migrationDefinition.getMigrations().getTable().getInsert(),
                String.join(",", columns),
                String.join(",", values));
        log.info(sql);
        destinationTemplate.update(sql);
    }
}
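The question is left open here, but one common approach (a sketch, not from the original post) is to skip the manual string conversion entirely: ColumnMapRowMapper already hands you typed Java objects, so you can bind them as prepared-statement parameters and let the JDBC driver do the type conversion, which makes TO_DATE unnecessary. A revised write method might look like:

@Override
public void write(List<? extends Map<String, Object>> items) throws Exception {
    for (Map<String, Object> map : items) {
        List<String> columns = new ArrayList<>(map.keySet());
        // one "?" placeholder per column; the driver converts each bound
        // object (java.sql.Timestamp, BigDecimal, ...) to the column's type
        String placeholders = String.join(",", Collections.nCopies(columns.size(), "?"));
        String sql = String.format("%s ( %s ) VALUES ( %s )",
                migrationDefinition.getMigrations().getTable().getInsert(),
                String.join(",", columns),
                placeholders);
        Object[] values = columns.stream().map(map::get).toArray();
        destinationTemplate.update(sql, values);
    }
}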

How to convert an Oracle user-defined type into a Java object in a Spring JDBC stored procedure call

I am working with Spring's JdbcTemplate, and all DB calls will be done through stored procedures. In Oracle 11g I have created a user-defined type that contains another type as a field, as below.
create or replace type WORKER AS Object (NAME VARCHAR2(30),
                                         age NUMBER);

create or replace type WORKER_LIST IS TABLE OF WORKER;

create or replace type MANAGER AS Object(
    NAME VARCHAR2(30),
    workers WORKER_LIST
);
And on the Java side I have created the following classes.
public class Worker implements SQLData {
    private String name;
    private int age;

    @Override
    public String getSQLTypeName() throws SQLException {
        return "WORKER";
    }

    @Override
    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        setName(stream.readString());
        setAge(stream.readInt());
    }

    @Override
    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeString(getName());
        stream.writeInt(getAge());
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}
public class Manager implements SQLData {
    private String name;
    private List<Worker> workers;

    @Override
    public String getSQLTypeName() throws SQLException {
        return "Manager";
    }

    @Override
    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        setName(stream.readString());
        setWorkers((List<Worker>) stream.readObject());
    }

    @Override
    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeString(getName());
        stream.writeObject((SQLData) getWorkers());
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List<Worker> getWorkers() {
        return workers;
    }

    public void setWorkers(List<Worker> workers) {
        this.workers = workers;
    }
}
I have specified the mappings in the typeMap, but I am not getting the expected results: the Worker type is returned as a Struct, and List<Worker> is returned as an array.
Please let me know what I should do and what the standard way is to get the objects I described above. I'm new to JdbcTemplate. Please suggest.
Thanks,
Ram
I think I've managed to get something working.
You mentioned something about the connection's type map. When using Spring it's difficult to get hold of the database connection in order to add the types to the connection's type map, so I'm not sure what you mean when you write 'I have specified the mappings in the typeMap'.
Spring offers one way to add an entry to the connection's type map, in the form of the class SqlReturnSqlData. This can be used to call a stored procedure or function which returns a user-defined type. It adds an entry to the connection's type map to specify the database type of the object and the class to map this object to just before it retrieves a value from a CallableStatement. However, this only works if you only need to map a single type. You have two such types that need mapping: MANAGER and WORKER.
Fortunately, it's not difficult to come up with a replacement for SqlReturnSqlData that can add more than one entry to the connection's type map:
import org.springframework.jdbc.core.SqlReturnType;

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;

public class SqlReturnSqlDataWithAuxiliaryTypes implements SqlReturnType {
    private Class<?> targetClass;
    private Map<String, Class<?>> auxiliaryTypes;

    public SqlReturnSqlDataWithAuxiliaryTypes(Class<?> targetClass, Map<String, Class<?>> auxiliaryTypes) {
        this.targetClass = targetClass;
        this.auxiliaryTypes = auxiliaryTypes;
    }

    @Override
    public Object getTypeValue(CallableStatement cs, int paramIndex, int sqlType, String typeName) throws SQLException {
        Connection con = cs.getConnection();
        Map<String, Class<?>> typeMap = con.getTypeMap();
        typeMap.put(typeName, this.targetClass);
        typeMap.putAll(auxiliaryTypes);
        return cs.getObject(paramIndex);
    }
}
The above has been adapted from the source of SqlReturnSqlData. All I've really done is add an extra field auxiliaryTypes, the contents of which get added to the connection's type map in the call to getTypeValue().
I also needed to adjust the readSQL method of your Manager class. The object you read back from the stream will be an implementation of java.sql.Array. You can't just cast it to a list. Sadly, getting the data out is a little fiddly:
@Override
public void readSQL(SQLInput stream, String typeName) throws SQLException {
    setName(stream.readString());
    Array array = (Array) stream.readObject();
    Object[] objects = (Object[]) array.getArray();
    List<Worker> workers = Arrays.stream(objects).map(o -> (Worker) o).collect(toList());
    setWorkers(workers);
}
(If you're not using Java 8, replace the Arrays.stream(...) line with a loop, as sketched below.)
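A minimal sketch of that loop:

// Pre-Java-8 equivalent of the Arrays.stream(...) line
List<Worker> workers = new ArrayList<Worker>();
for (Object o : objects) {
    workers.add((Worker) o);
}
setWorkers(workers);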
To test this I wrote a short stored function to return a MANAGER object:
CREATE OR REPLACE FUNCTION f_get_manager
    RETURN manager
AS
BEGIN
    RETURN manager('Big Boss Man', worker_list(worker('Bill', 40), worker('Fred', 36)));
END;
/
The code to call this stored function was then as follows:
Map<String, Class<?>> auxiliaryTypes = Collections.singletonMap("WORKER", Worker.class);
SimpleJdbcCall jdbcCall = new SimpleJdbcCall(jdbcTemplate)
        .withSchemaName("my_schema")
        .withFunctionName("f_get_manager")
        .declareParameters(
                new SqlOutParameter(
                        "return",
                        OracleTypes.STRUCT,
                        "MANAGER",
                        new SqlReturnSqlDataWithAuxiliaryTypes(Manager.class, auxiliaryTypes)));

Manager manager = jdbcCall.executeFunction(Manager.class);
// ... do something with manager.
This worked, in that it returned a Manager object with two Workers in it.
Finally, if you have stored procedures that save a Manager object to the database, be aware that your Manager class's writeSQL method will not work. Unless you've written your own List implementation, List<Worker> cannot be cast to SQLData. Instead, you'll need to create an Oracle array object and put the entries in that. That, however, is awkward, because you'll need the database connection to create the array, and that won't be available in the writeSQL method (see the rough sketch below). See this question for one possible solution.
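For illustration only, a rough sketch of that idea (my assumption, not the linked solution; it presumes access to a live java.sql.Connection named con somewhere outside writeSQL, e.g. in a custom SqlTypeValue):

// Build the WORKER_LIST where a Connection is available,
// rather than inside Manager.writeSQL.
OracleConnection oraCon = con.unwrap(OracleConnection.class);
Array workerList = oraCon.createARRAY("WORKER_LIST",
        manager.getWorkers().toArray());
// each element is a Worker (implements SQLData), so the connection's
// type map takes care of converting them to WORKER structs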

About Spring RowMapper and mapRow

I have some questions about Spring's RowMapper. I'm going to receive data from my DB using a RowMapper, but my command object 'Table' only has a List variable.
Does RowMapper automatically map each record to the List? Is that possible?
I know the Spring bind tag automatically binds values to a list.
My code follows.
Table.java
public class Table implements Serializable {
    private List<String> tableNum = new ArrayList<String>();
    // setter and getter
}
Dao
private class TableRowMapper implements RowMapper {
    public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
        Table table = new Table();
        table.setTableNum(rs.getString("TABLE_LOCATION"));
        return table;
    }
}
The RowMapper is used to map a single row to a single domain object, not a bunch of row results to a single domain object. Also, the RowMapper isn't a DAO-type object; it is to be used with some query method, like JdbcTemplate.query(sql, args, rowMapper).
But in your case, you don't want a RowMapper. You should instead just use JdbcTemplate.queryForList. See the JdbcTemplate API for more query methods. A simple example would be something like:
public class YourDaoImpl extends JdbcTemplate implements YourDao {
    private static final String SQL =
        "select SOME_FIELD from SOME_TABLE where SOMETHING = ?";

    @Override
    public List<String> getSomeFieldBySomething(String something) {
        return queryForList(SQL,
                new Object[] { something },
                String.class);
    }
}
You then use the DAO in your services, for example:
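A hypothetical service class, just to show the wiring (names are mine, not from the thread):

@Service
public class SomeService {

    private final YourDao yourDao; // the DAO from above

    public SomeService(YourDao yourDao) {
        this.yourDao = yourDao;
    }

    public List<String> findSomething(String something) {
        return yourDao.getSomeFieldBySomething(something);
    }
}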
UPDATE
Thanks to your help, I can get one column from my DB. But I have a problem: my DB table is made up of multiple columns, and I must receive all of them. How can I do it? Please help!
Your original question didn't point that out. In this case you need a List<DomainObject>, not a List<String>; a List<String> only holds one value per row. If you have a List<DomainObject>, the class DomainObject can have all your fields. That's when you use the RowMapper, passing it to the query variant that accepts one:
public class Table {
    private String field1;
    private String field2;
    private String field3;
    // getters and setters
}
public class YourDaoImpl extends JdbcTemplate implements YourDao {
    private static final String SQL =
        "select * from SOME_TABLE where SOMETHING = ?";

    @Override
    public List<Table> getTableBySomething(String something) {
        return query(SQL,
                new Object[] { something },
                new RowMapper<Table>() {
                    @Override
                    public Table mapRow(ResultSet rs, int rowNumber) throws SQLException {
                        Table table = new Table();
                        table.setField1(rs.getString("field1"));
                        // set the others
                        return table;
                    }
                });
    }
}
As an aside, if I were you, I would forget plain JDBC and go for an ORM framework like JPA. If you want entire domain objects, that's the way to go.

Hibernate CompositeUserType mapping has wrong number of columns

I am new to Hibernate and am writing a CompositeUserType. When I run the code I am getting the error:
property mapping has wrong number of columns
Please help me: what am I missing?
My CompositeUserType goes as follows:
public class EncryptedAsStringType implements CompositeUserType {

    @Override
    public String[] getPropertyNames() {
        return new String[] { "stockId", "stockCode", "stockName", "stockDescription" };
    }

    @Override
    public Type[] getPropertyTypes() {
        // stockId, stockCode, stockName, modifiedDate
        return new Type[] {
            Hibernate.INTEGER, Hibernate.STRING, Hibernate.STRING, Hibernate.STRING
        };
    }

    @Override
    public Object getPropertyValue(final Object component, final int property)
            throws HibernateException {
        Object returnValue = null;
        final Stock auditData = (Stock) component;
        if (0 == property) {
            returnValue = auditData.getStockId();
        } else if (1 == property) {
            returnValue = auditData.getStockCode();
        } else if (2 == property) {
            returnValue = auditData.getStockName();
        }
        return returnValue;
    }

    @Override
    public void setPropertyValue(final Object component, final int property,
            final Object setValue) throws HibernateException {
        final Stock auditData = (Stock) component;
    }

    @Override
    public Object nullSafeGet(final ResultSet resultSet,
            final String[] names,
            final SessionImplementor paramSessionImplementor, final Object paramObject)
            throws HibernateException, SQLException {
        // owner here is of type TestUser or the actual owning Object
        Stock auditData = null;
        final Integer createdBy = resultSet.getInt(names[0]);
        // Deferred check after first read
        if (!resultSet.wasNull()) {
            auditData = new Stock();
            System.out.println(">>>>>>>>>>>>" + resultSet.getInt(names[1]));
            System.out.println(">>>>>>>>>>>>" + resultSet.getString(names[2]));
            System.out.println(">>>>>>>>>>>>" + resultSet.getString(names[3]));
            System.out.println(">>>>>>>>>>>>" + resultSet.getString(names[4]));
        }
        return auditData;
    }

    @Override
    public void nullSafeSet(final PreparedStatement preparedStatement,
            final Object value, final int property,
            final SessionImplementor sessionImplementor)
            throws HibernateException, SQLException {
        if (null == value) {
        } else {
            final Stock auditData = (Stock) value;
            System.out.println("::::::::::::::::::::::::::::::::" + auditData.getStockCode());
            System.out.println("::::::::::::::::::::::::::::::::" + auditData.getStockDescription());
            System.out.println("::::::::::::::::::::::::::::::::" + auditData.getStockId());
            System.out.println("::::::::::::::::::::::::::::::::" + auditData.getStatus());
        }
    }
My domain class Stock has five attributes (stockId, stockCode, stockName, status, stockDescription). I need to declare the field stockDescription as a composite field type.
private Integer stockId;
private String stockCode;
private String stockName;
private String status;
private String stockDescription;

// Constructors

@Column(name = "STOCK_CC", unique = true, nullable = false, length = 20)
@Type(type = "com.mycheck.EncryptedAsStringType")
@Columns(columns = {
    @Column(name = "STOCK_ID"),
    @Column(name = "STOCK_CODE"),
    @Column(name = "STOCK_NAME")
})
public String getStockDescription() {
    return stockDescription;
}
}
When I try to execute an insert for Stock, I get the error:
Error creating bean with name 'sessionFactory' defined in class path resource [spring/config/../database/Hibernate.xml]: Invocation of init method failed. nested exception is org.hibernate.MappingException: property mapping has wrong number of columns: com.stock.model.Stock.stockDescription type: com.mycheck.EncryptedAsStringType
Where am I going wrong?
One can extract the answer from the code samples and the comments to the original question, but to save everyone some reading, I've compiled a quick summary.
If you declare a CompositeUserType that maps a type to n columns, you have to declare n columns in @Columns alongside the @Type annotation. Example:
public class EncryptedAsStringType implements CompositeUserType {

    @Override
    public String[] getPropertyNames() {
        return new String[] { "stockId", "stockCode", "stockName", "stockDescription" };
    }
    // ...
}
This CompositeUserType maps to 4 separate columns, therefore 4 separate @Column annotations have to be declared:
#Type(type="com.mycheck.EncryptedAsStringType")
#Columns(columns = {
#Column(name="STOCK_ID"),
#Column(name="STOCK_CODE"),
#Column(name="STOCK_NAME"),
#Column(name="STOCK_DESCRIPTION")
})
public String getStockDescription() {
return stockDescription;
}
That's it and Hibernate is happy.

Hadoop/MapReduce: Reading and writing classes generated from DDL

Can someone walk me through the basic workflow of reading and writing data with classes generated from DDL?
I have defined some struct-like records using DDL. For example:
class Customer {
    ustring FirstName;
    ustring LastName;
    ustring CardNo;
    long LastPurchase;
}
I've compiled this to get a Customer class and included it in my project. I can easily see how to use it as input and output for mappers and reducers (the generated class implements Writable), but not how to read and write it to file.
The JavaDoc for the org.apache.hadoop.record package talks about serializing these records in Binary, CSV or XML format. How do I actually do that? Say my reducer produces IntWritable keys and Customer values. What OutputFormat do I use to write the result in CSV format? What InputFormat would I use to read the resulting files in later, if I wanted to perform analysis over them?
OK, so I think I have this figured out. I'm not sure if it is the most straightforward way, so please correct me if you know a simpler workflow.
Every class generated from DDL implements the Record interface, and consequently provides two methods:
serialize(RecordOutput out) for writing
deserialize(RecordInput in) for reading
RecordOutput and RecordInput are utility interfaces provided in the org.apache.hadoop.record package. There are a few implementations (e.g. XmlRecordOutput, BinaryRecordOutput, CsvRecordOutput).
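For instance, here is a minimal sketch of serializing one record straight to a CSV stream, outside any MapReduce job (assuming the generated Customer class has the usual rcc-generated setters):

import org.apache.hadoop.record.CsvRecordOutput;
import java.io.FileOutputStream;

public class CustomerCsvDemo {
    public static void main(String[] args) throws Exception {
        Customer customer = new Customer();
        customer.setFirstName("Jane"); // assumed generated setter
        // ... set the remaining fields
        FileOutputStream out = new FileOutputStream("customer.csv");
        customer.serialize(new CsvRecordOutput(out), "customer");
        out.close();
    }
}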
As far as I know, you have to implement your own OutputFormat or InputFormat classes to use these. This is fairly easy to do.
For example, the OutputFormat I talked about in the original question (one that writes IntWritable keys and Customer values in CSV format) would be implemented like this:
private static class CustomerOutputFormat
        extends TextOutputFormat<IntWritable, Customer> {

    public RecordWriter<IntWritable, Customer> getRecordWriter(FileSystem ignored,
            JobConf job, String name, Progressable progress) throws IOException {
        Path file = FileOutputFormat.getTaskOutputPath(job, name);
        FileSystem fs = file.getFileSystem(job);
        FSDataOutputStream fileOut = fs.create(file, progress);
        return new CustomerRecordWriter(fileOut);
    }

    protected static class CustomerRecordWriter
            implements RecordWriter<IntWritable, Customer> {

        protected DataOutputStream outStream;

        public CustomerRecordWriter(DataOutputStream out) {
            this.outStream = out;
        }

        public synchronized void write(IntWritable key, Customer value) throws IOException {
            CsvRecordOutput csvOutput = new CsvRecordOutput(outStream);
            csvOutput.writeInt(key.get(), "id");
            value.serialize(csvOutput);
        }

        public synchronized void close(Reporter reporter) throws IOException {
            outStream.close();
        }
    }
}
Creating the InputFormat is much the same. Because the CSV format is one entry per line, we can use a LineRecordReader internally to do most of the work.
private static class CustomerInputFormat extends FileInputFormat<IntWritable, Customer> {

    public RecordReader<IntWritable, Customer> getRecordReader(InputSplit genericSplit,
            JobConf job, Reporter reporter) throws IOException {
        reporter.setStatus(genericSplit.toString());
        return new CustomerRecordReader(job, (FileSplit) genericSplit);
    }

    private class CustomerRecordReader implements RecordReader<IntWritable, Customer> {

        private LineRecordReader lrr;

        public CustomerRecordReader(Configuration job, FileSplit split)
                throws IOException {
            this.lrr = new LineRecordReader(job, split);
        }

        public IntWritable createKey() {
            return new IntWritable();
        }

        public Customer createValue() {
            return new Customer();
        }

        public synchronized boolean next(IntWritable key, Customer value)
                throws IOException {
            LongWritable offset = new LongWritable();
            Text line = new Text();
            if (!lrr.next(offset, line))
                return false;
            CsvRecordInput cri = new CsvRecordInput(
                    new ByteArrayInputStream(line.toString().getBytes()));
            key.set(cri.readInt("id"));
            value.deserialize(cri);
            return true;
        }

        public float getProgress() {
            return lrr.getProgress();
        }

        public synchronized long getPos() throws IOException {
            return lrr.getPos();
        }

        public synchronized void close() throws IOException {
            lrr.close();
        }
    }
}
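Wiring these into a job driver would then look something like the sketch below (using the old mapred API the formats above are written against; MyJobDriver is a hypothetical class, and the inner format classes would need to be visible to it):

JobConf conf = new JobConf(MyJobDriver.class);
conf.setOutputKeyClass(IntWritable.class);
conf.setOutputValueClass(Customer.class);
conf.setOutputFormat(CustomerOutputFormat.class); // write IntWritable/Customer pairs as CSV

// a later analysis job would read those files back with:
conf.setInputFormat(CustomerInputFormat.class);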
