Spring jdbc template array type creation in batch statement - spring

Is it safe to get the connection object from the prepared statement in the code below to create the Array type?
Is there any other, more efficient way of doing this?
String query = "update emp set status = 'ACTIVE' where empid = any(?)";
jdbcTemplate.batchUpdate(query, new BatchPreparedStatementSetter() {
    @Override
    public int getBatchSize() {
        return empList.size();
    }
    @Override
    public void setValues(final PreparedStatement ps, int i) throws SQLException {
        Object[] a = empList.get(i);
        Object[] arr1 = Arrays.stream(a[0].toString().split(",")).toArray();
        Array array = ps.getConnection().createArrayOf("VARCHAR", arr1);
        ps.setArray(1, array);
    }
});
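Statement.getConnection() returns the connection that produced the statement, so using it inside setValues to build the Array is a common pattern. If a single update covering all ids is acceptable, the batch can be avoided entirely; a minimal sketch, assuming empIds is a List<String> holding every employee id (not part of the original code) and that the driver supports createArrayOf("VARCHAR", ...):
int updated = jdbcTemplate.execute(query,
        (PreparedStatementCallback<Integer>) ps -> {
            // build one SQL array holding every id and bind it once
            java.sql.Array ids = ps.getConnection()
                    .createArrayOf("VARCHAR", empIds.toArray());
            ps.setArray(1, ids);
            return ps.executeUpdate();
        });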

Related

FlatFileItemWriterBuilder-headerCallback() get number of rows written

Is it possible to get the total number of rows written from FlatFileItemWriter.headerCallback()?
I am a spring-batch newbie and I looked at putting count of lines into header of flat file and Spring Batch - Counting Processed Rows.
However, I can't seem to implement the logic using the advice given there. It makes sense that the write count will only be available after the file is processed, but I am trying to get the row count just before the file is officially written.
I tried to look for a hook like @AfterStep to grab the total rows, but I keep going in circles.
@Bean
@StepScope
public FlatFileItemWriter<MyFile> generateMyFileWriter(Long jobId, Date eventDate) {
    String filePath = "C:\\MYFILE\\COMPLETED";
    Resource file = new FileSystemResource(filePath);
    DelimitedLineAggregator<MyFile> myFileLineAggregator = new DelimitedLineAggregator<>();
    myFileLineAggregator.setDelimiter(",");
    myFileLineAggregator.setFieldExtractor(getMyFileFieldExtractor());
    return new FlatFileItemWriterBuilder<MyFile>()
            .name("my-file-writer")
            .resource(file)
            .headerCallback(new MyFileHeaderWriter(file.getFilename()))
            .lineAggregator(myFileLineAggregator)
            .build();
}
private FieldExtractor<MyFile> getMyFileFieldExtractor() {
    final String[] fieldNames = new String[] {
            "typeRecord",
            "idSystem"
    };
    return item -> {
        BeanWrapperFieldExtractor<MyFile> extractor = new BeanWrapperFieldExtractor<>();
        extractor.setNames(fieldNames);
        return extractor.extract(item);
    };
}
Notice I am using the MyFileHeaderWriter.java class (below) in the headerCallback(new MyFileHeaderWriter(file.getFilename())) call (above). I am trying to initialize the value of qtyRecordsCreated below.
class MyFileHeaderWriter implements FlatFileHeaderCallback {
    private final String header;
    private String dtxCreated;
    private String tmxCreated;
    private String fileName;          // 15 byte file name
    private String qtyRecordsCreated; // number of rows in file including the header row

    MyFileHeaderWriter(String sbfFileName) {
        SimpleDateFormat dateCreated = new SimpleDateFormat("YYDDD");
        SimpleDateFormat timeCreated = new SimpleDateFormat("HHMM");
        Date now = new Date();
        this.dtxCreated = dateCreated.format(now);
        this.tmxCreated = timeCreated.format(now);
        this.fileName = sbfFileName;
        this.qtyRecordsCreated = "";
        String[] headerValues = {dtxCreated, tmxCreated, fileName, qtyRecordsCreated};
        this.header = String.join(",", headerValues);
    }

    @Override
    public void writeHeader(Writer writer) throws IOException {
        writer.write(header);
    }
}
How can I get the number of rows into the header row?
Can the FlatFileFooterCallback be used to fetch the number of rows and then update the header with the number of rows in the file afterwards?
You can achieve this in an ItemProcessor; try this, it works for me:
public class EmployeeProcessor implements ItemProcessor<Employee, Employee> {
    @Override
    public Employee process(Employee employee) throws Exception {
        return employee;
    }

    @AfterStep
    public void afterStep(StepExecution stepExecution) {
        ExecutionContext stepContext = stepExecution.getExecutionContext();
        stepContext.put("count", stepExecution.getReadCount());
        System.out.println("COUNT " + stepExecution.getReadCount());
    }
}
And in your writer, to get the value:
int count = stepContext.getInt("count");
Hope this works for you.
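Regarding the follow-up about FlatFileFooterCallback: since the final counts are only known once writing finishes, a footer is the natural place for them. A minimal sketch, with a hypothetical class name and trailer layout; the callback must also be registered as a step listener so @BeforeStep fires:
class MyFileFooterWriter implements FlatFileFooterCallback {
    private StepExecution stepExecution;

    @BeforeStep
    public void saveStepExecution(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }

    @Override
    public void writeFooter(Writer writer) throws IOException {
        // +1 so the count includes the header row, as the question requires
        writer.write(String.valueOf(stepExecution.getWriteCount() + 1));
    }
}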

Hive UDTF - unable to fetch field names via getFieldName, it is returning _col0,_col1

I am working on a Hive UDTF to transpose each row around a primary key. As part of the requirement, I need to associate the column name and the corresponding data with the key.
e.g.
Source Data
Customer_id Customer_name Customer_type
1000000 ABCD Individual
Hive UDTF will convert the data into following
Key att_name att_val
10000000 customer_name ABCD
10000000 customer_type Individual
I have written the UDTF and it is working, but it currently produces the data below:
Key att_name att_val
10000000 _col0 ABCD
10000000 _col1 Individual
Here is the code, in which ((StructField) inputFields.get(i)).getFieldName() is returning _col0 instead of customer_name.
Could this be a defect in Apache Hive, or is there another mapping from _col0 to the actual schema that I should refer to?
public class transposeUDTF extends GenericUDTF {

    private Map<Integer, String> tableMap = new HashMap<>();
    private MetadataListStructObjectInspector metadataDetails;

    @Override
    public StructObjectInspector initialize(StructObjectInspector args) throws UDFArgumentException {
        List<? extends StructField> inputFields = args.getAllStructFieldRefs();
        ((StructObjectInspector) args).getTypeName();
        for (int i = 0; i < inputFields.size(); ++i) {
            tableMap.put(i + 1, ((StructField) inputFields.get(i)).getFieldName());
        }
        return super.initialize(args);
    }

    @Override
    public StructObjectInspector initialize(ObjectInspector[] argOIs) throws UDFArgumentException {
        List<String> fieldNames = new ArrayList<String>(3);
        List<ObjectInspector> fieldOIs = new ArrayList<ObjectInspector>(3);
        fieldNames.add("key");
        fieldNames.add("AttrName");
        fieldNames.add("AttrVal");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldOIs);
    }

    @Override
    public void process(Object[] record) throws HiveException {
        ArrayList<Object[]> results = new ArrayList<Object[]>();
        for (int i = 1; i < record.length; ++i) {
            results.add(new Object[] {record[0], tableMap.get(i).toString(), record[i]});
        }
        Iterator<Object[]> it = results.iterator();
        while (it.hasNext()) {
            Object[] r = it.next();
            forward(r);
        }
    }

    @Override
    public void close() throws HiveException {
        // do nothing
    }
}

Is it a good programming to pass Connection Object to a method?

I am doing an insert operation. I have a condition: if company is 0, then I need to perform an additional insert into another table.
This is my code:
public static String insertIntoDepotTable(DepotJSONBean depotbean) throws SQLException
{
    Connection dbConnection = null;
    PreparedStatement depotjsoninsertPst = null;
    try
    {
        dbConnection = DBConnectionOrientDepot.getDBConnection();
        dbConnection.setAutoCommit(false);
        String companyId = depotbean.getCompanyId();
        if (companyId.equals("0"))
        {
            saveInCompany(depotbean, dbConnection);
        }
        String Insertsql = "INSERT INTO tbl_depot (depotID, depoBelongsToID, stateID, districtID, talukMandalID, depotName, companyID, contactName, phone1, phone2, address, latitude, longititude, accuracy, town, noOfPeopleOperating, depotSize, storageCapacity, cAndFNames, depotPic1, depotPic2, comments, active, createdOn, modifiedOn) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
        depotjsoninsertPst = dbConnection.prepareStatement(Insertsql);
    }
    catch (Exception e)
    {
    }
} // end of method
public String saveInCompany(DepotJSONBean djsonbean , Connection conn)
{
}
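Passing the already-open Connection into saveInCompany is reasonable here, since it lets both inserts run in the same transaction (autocommit is already disabled). A hypothetical sketch of that method reusing the passed connection; the company table, column, and getCompanyName() getter are made-up names for illustration:
public String saveInCompany(DepotJSONBean djsonbean, Connection conn) throws SQLException
{
    // made-up table/column; replace with the real company insert
    String sql = "INSERT INTO tbl_company (companyName) VALUES (?)";
    try (PreparedStatement ps = conn.prepareStatement(sql))
    {
        ps.setString(1, djsonbean.getCompanyName()); // assumed getter on the bean
        ps.executeUpdate(); // committed later together with the depot insert
    }
    return "OK";
}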

JdbcTemplate delete syntax

Can someone point out any mistake in my following Spring JdbcTemplate code?
When I click delete, the record is not deleted and there are no errors shown.
public void delete(String id) {
    logger.debug("Deleting existing person");
    // Prepare our SQL statement using Unnamed Parameters style
    String query = "delete from person where id = ?";
    // Assign values to parameters
    Object[] person = new Object[] {id};
    // Delete
    jdbcTemplate.update(query, person);
}
Here is an example. Pay attention to the parameter type:
Integer id
public boolean delete(Integer id) {
    String sql = "DELETE FROM organization WHERE id = ?";
    Object[] args = new Object[] {id};
    return jdbcTemplate.update(sql, args) == 1;
}
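Following that hint about the parameter type, a hypothetical caller that converts the question's String id before delegating to the method above:
// hypothetical caller: the id arrives as a String (e.g. a request parameter)
boolean removed = delete(Integer.valueOf(id));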
@Override
public String deleteXXById(String id) {
    String sql = "DELETE FROM VENUE WHERE id = :id";
    Map<String, Object> paramMap = new HashMap<String, Object>();
    paramMap.put("id", id);
    // named parameters like :id need a NamedParameterJdbcTemplate
    int update = namedParameterJdbcTemplate.update(sql, paramMap);
    String updatecount = "Failed";
    if (update == 0) {
        updatecount = "Failed";
    } else {
        updatecount = "SUCCESS";
    }
    return updatecount;
}

How to call Oracle function or stored procedure using spring persistence framework?

I am using the Spring persistence framework for my project.
I want to call an Oracle function or stored procedure from this framework.
Can anybody suggest how I can achieve this?
Please give a solution for both an Oracle function and a stored procedure.
Thanks.
Assuming you are referring to JdbcTemplate:
jdbcTemplate.execute(
    new CallableStatementCreator() {
        public CallableStatement createCallableStatement(Connection con) throws SQLException {
            CallableStatement cs = con.prepareCall("{call MY_STORED_PROCEDURE(?, ?, ?)}");
            cs.setInt(1, ...); // first argument
            cs.setInt(2, ...); // second argument
            cs.setInt(3, ...); // third argument
            return cs;
        }
    },
    new CallableStatementCallback() {
        public Object doInCallableStatement(CallableStatement cs) throws SQLException {
            cs.execute();
            return null; // Whatever is returned here is returned from the jdbcTemplate.execute method
        }
    }
);
Calling a function is almost identical:
jdbcTemplate.execute(
    new CallableStatementCreator() {
        public CallableStatement createCallableStatement(Connection con) throws SQLException {
            CallableStatement cs = con.prepareCall("{? = call MY_FUNCTION(?, ?, ?)}");
            cs.registerOutParameter(1, Types.INTEGER); // or whatever type your function returns.
            // Set your arguments
            cs.setInt(2, ...); // first argument
            cs.setInt(3, ...); // second argument
            cs.setInt(4, ...); // third argument
            return cs;
        }
    },
    new CallableStatementCallback() {
        public Object doInCallableStatement(CallableStatement cs) throws SQLException {
            cs.execute();
            int result = cs.getInt(1);
            return result; // Whatever is returned here is returned from the jdbcTemplate.execute method
        }
    }
);
A simpler way of calling an Oracle function in Spring is to subclass StoredProcedure, like below:
public class MyStoredProcedure extends StoredProcedure {

    private static final String SQL = "package.function";

    public MyStoredProcedure(DataSource ds) {
        super(ds, SQL);
        declareParameter(new SqlOutParameter("param_out", Types.NUMERIC));
        declareParameter(new SqlParameter("param_in", Types.NUMERIC));
        setFunction(true); // you must set this as it distinguishes it from a sproc
        compile();
    }

    public String execute(Long rdsId) {
        Map<String, Object> in = new HashMap<>();
        in.put("param_in", rdsId);
        Map<String, Object> out = execute(in);
        if (!out.isEmpty())
            return out.get("param_out").toString();
        else
            return null;
    }
}
And call it like this:
@Autowired DataSource ds;
MyStoredProcedure sp = new MyStoredProcedure(ds);
String i = sp.execute(1L);
The Oracle function used here just takes in a numeric parameter and returns a numeric parameter.
In my opinion this is one of the easiest approaches:
public class ServRepository {

    private JdbcTemplate jdbcTemplate;
    private SimpleJdbcCall functionGetServerErrors;

    @Autowired
    public void setDataSource(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
        this.jdbcTemplate.setResultsMapCaseInsensitive(true);
        this.functionGetServerErrors = new SimpleJdbcCall(this.jdbcTemplate)
                .withFunctionName("THIS_IS_YOUR_DB_FUNCTION_NAME")
                .withSchemaName("OPTIONAL_SCHEMA_NAME");
    }

    public String callYourFunction(int parameterOne, int parameterTwo) {
        SqlParameterSource in = new MapSqlParameterSource()
                .addValue("DB_FUNCTION_INCOMING_PARAMETER_ONE", parameterOne)
                .addValue("DB_FUNCTION_INCOMING_PARAMETER_TWO", parameterTwo);
        return functionGetServerErrors.executeFunction(String.class, in);
    }
}
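For completeness, hypothetical usage of the repository above, assuming ServRepository is registered as a Spring bean and the two int values match the function's parameters:
@Autowired
private ServRepository servRepository;

public void printServerErrors() {
    // illustrative argument values only
    String result = servRepository.callYourFunction(1, 2);
    System.out.println(result);
}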
Calling a function using NamedParameterJdbcTemplate:
final String query = "select MY_FUNCTION(:arg1, :arg2, :arg3) from dual";
Map<String, Object> argMap = new HashMap<>();
argMap.put("arg1", "value1");
argMap.put("arg2", 2);
argMap.put("arg3", "value3");
final String result = new NamedParameterJdbcTemplate(dataSource)
.queryForObject(query, argMap, String.class);
Calling a procedure using JdbcTemplate:
final String query = "call MY_PROCEDURE(?, ?, ?)";
final Object[] args = {"arg1", "arg2", "arg3"};
new JdbcTemplate(dataSource).update(query, args);
Calling a function using SimpleJdbcCall:
Map<String, Object> inParameters = new HashMap<>();
inParameters.put("arg1", 55); // arg1 value
inParameters.put("arg2", 20); // arg2 value
MapSqlParameterSource mapSqlParameterSource = new MapSqlParameterSource(inParameters);
BigDecimal result = new SimpleJdbcCall(dataSource)
.withCatalogName("MY_PACKAGE")
.withSchemaName("MY_SCHEMA")
.withFunctionName("MY_FUNCTION")
.executeFunction(BigDecimal.class, mapSqlParameterSource);
Calling a procedure using SimpleJdbcCall:
new SimpleJdbcCall(dataSource)
.withCatalogName("MY_PACKAGE")
.withProcedureName("MY_PROCEDURE")
.execute("arg1", arg2);
