How to use an ORACLE SQL SEQUENCE in a spring batch writer? - spring

Can someone please let me know how to use an Oracle sequence in a Spring Batch writer?
I tried using custseq.nextval in the insert statement, but it's failing.
<bean id="testSSRReader"
class="org.springframework.batch.item.database.JdbcCursorItemReader">
<property name="dataSource" ref="bconnectedDataSource" />
<property name="sql"
value="select CUST_USA_ID , CUST_FIRST_NAME , CUST_LAST_NAME from BL_CUSTOMER fetch first 100 rows only" />
<property name="rowMapper">
<bean class="com.macys.batch.rowmapper.TestSSRRowMapper" />
</property>
</bean>
<bean id="testSSRProcessor" class="com.macys.batch.processor.TestSSRProcessor" />
<bean id="testSSRWriter"
class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="dataSource" ref="ocDataSource" />
<property name="sql">
<value>
<![CDATA[
insert into TESTTABLESSR(CUSTOMER_ID,CUSTOMER_NAME,CITY)
VALUES (custseq.nextval,:firstName,:lastName)
]]>
</value>
</property>
<property name="itemSqlParameterSourceProvider" ref="itemSqlParameterSourceProvider" />
</bean>
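For what it's worth, referencing a sequence inline in the VALUES clause is legal Oracle SQL, so the statement shown above is valid in principle. A minimal sketch of the writer in that style, assuming the custseq sequence exists in the target schema and that itemSqlParameterSourceProvider resolves to a BeanPropertyItemSqlParameterSourceProvider (so that :firstName/:lastName are read from the item's properties):

<bean id="testSSRWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
    <property name="dataSource" ref="ocDataSource" />
    <property name="sql">
        <value>
            insert into TESTTABLESSR (CUSTOMER_ID, CUSTOMER_NAME, CITY)
            values (custseq.nextval, :firstName, :lastName)
        </value>
    </property>
    <!-- maps item properties onto the named parameters; this wiring is an
         assumption, your itemSqlParameterSourceProvider bean may already do it -->
    <property name="itemSqlParameterSourceProvider">
        <bean class="org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider" />
    </property>
</bean>

If it still fails, the exact error message matters: common culprits are the batch user lacking SELECT privilege on the sequence, or the sequence living in another schema (reference it as owner.custseq.nextval in that case).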

Related

Spring batch xml repositoryitemwriter schema placeholder from properties

I need to get the schema name from a properties file for a Spring Batch application, where the schema name differs between dev and prod for an MSSQL database.
The job configuration in XML is as below:
<bean id="dataItemWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="assertUpdates" value="true" />
<property name="itemPreparedStatementSetter">
<bean class="org.test.batch.model.ItemStatementMapper" />
</property>
<property name="sql" >
<value>
<![CDATA[
INSERT INTO dbo.EMPLOYEE
(PROJECT_NAME
,APP_NAME
,EMPLOYEE_NAME)
values (?,?,?)
]]>
</value>
</property>
<property name="dataSource" ref="dataDataSource" />
</bean>
The schema name "dbo" should be retrieved from the properties file so that it can be changed in configuration between DEV and PROD.
I don't see the need to put the value in a CDATA block; there are no special XML characters in your query. Here is an example: https://github.com/spring-projects/spring-batch/blob/8762e3411557aaf887867f8d8594b01127538cb1/spring-batch-core/src/test/resources/org/springframework/batch/core/resource/ListPreparedStatementSetterTests-context.xml#L25. So in your case, it should be something like:
<bean id="dataItemWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="assertUpdates" value="true" />
<property name="itemPreparedStatementSetter">
<bean class="org.test.batch.model.ItemStatementMapper" />
</property>
<property name="sql" >
<value>
INSERT INTO dbo.EMPLOYEE (PROJECT_NAME ,APP_NAME ,EMPLOYEE_NAME) values (?,?,?)
</value>
</property>
<property name="dataSource" ref="dataDataSource" />
</bean>
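As for pulling the schema name from a properties file: plain Spring property placeholders are resolved in bean definition values when the context starts, before the writer ever sees the SQL, so the schema can be externalized like this (a sketch; the db.schema key and batch.properties file name are assumptions, and the context namespace must be declared in the XML header):

<!-- batch.properties for DEV: db.schema=dbo -->
<context:property-placeholder location="classpath:batch.properties" />

<bean id="dataItemWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
    <property name="assertUpdates" value="true" />
    <property name="itemPreparedStatementSetter">
        <bean class="org.test.batch.model.ItemStatementMapper" />
    </property>
    <property name="sql">
        <!-- ${db.schema} is replaced at startup, so DEV and PROD differ only in the properties file -->
        <value>INSERT INTO ${db.schema}.EMPLOYEE (PROJECT_NAME, APP_NAME, EMPLOYEE_NAME) values (?,?,?)</value>
    </property>
    <property name="dataSource" ref="dataDataSource" />
</bean>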

Spring batch ItemProcessor order of processing the items

Here is my spring configuration file.
<batch:job id="empTxnJob">
<batch:step id="stepOne">
<batch:partition partitioner="partitioner" step="worker" handler="partitionHandler" />
</batch:step>
</batch:job>
<bean id="asyncTaskExecutor" class="org.springframework.core.task.SimpleAsyncTaskExecutor" />
<bean id="partitionHandler" class="org.springframework.batch.core.partition.support.TaskExecutorPartitionHandler" scope="step">
<property name="taskExecutor" ref="asyncTaskExecutor" />
<property name="step" ref="worker" />
<property name="gridSize" value="${batch.gridsize}" />
</bean>
<bean id="partitioner" class="com.spring.mybatch.EmpTxnRangePartitioner">
<property name="empTxnDAO" ref="empTxnDAO" />
</bean>
<batch:step id="worker">
<batch:tasklet transaction-manager="transactionManager">
<batch:chunk reader="databaseReader" writer="databaseWriter" commit-interval="25" processor="itemProcessor">
</batch:chunk>
</batch:tasklet>
</batch:step>
<bean name="databaseReader" class="org.springframework.batch.item.database.JdbcCursorItemReader" scope="step">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value>
<![CDATA[
select *
from
emp_txn
where
emp_txn_id >= #{stepExecutionContext['minValue']}
and
emp_txn_id <= #{stepExecutionContext['maxValue']}
]]>
</value>
</property>
<property name="rowMapper">
<bean class="com.spring.mybatch.EmpTxnRowMapper" />
</property>
<property name="verifyCursorPosition" value="false" />
</bean>
<bean id="databaseWriter" class="org.springframework.batch.item.database.JdbcBatchItemWriter">
<property name="dataSource" ref="dataSource" />
<property name="sql">
<value><![CDATA[update emp_txn set txn_status=:txnStatus where emp_txn_id=:empTxnId]]></value>
</property>
<property name="itemSqlParameterSourceProvider">
<bean class="org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider" />
</property>
</bean>
<bean id="itemProcessor" class="org.springframework.batch.item.support.CompositeItemProcessor" scope="step">
<property name="delegates">
<list>
<ref bean="processor1" />
<ref bean="processor2" />
</list>
</property>
</bean>
My custom range partitioner splits the work based on the primary key of the emp_txn records.
Assume that an emp (primary key emp_id) can have multiple emp_txn rows (primary key emp_txn_id) to be processed. With my current setup, it's possible in the ItemProcessor (either processor1 or processor2) that two threads process emp_txn rows for the same employee (i.e., for the same emp_id).
Unfortunately, the back-end logic that processes the emp_txn (in processor2) is not capable of handling transactions for the same emp in parallel. Is there a way in Spring Batch to control the order of such processing?
With the use case you are describing, I think you're partitioning by the wrong thing. I'd partition by emp instead of emp_txn. That would group the emp_txns and you could order them within each partition. It would also prevent the risk of emp_txns being processed out of order based on which thread gets to them first.
To answer your direct question: no. There is no way to order items going through processors in separate threads. Once you break the step up into partitions, each partition works independently. A sketch of the partition-by-emp approach follows.
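As a rough sketch of partitioning by emp instead (EmpRangePartitioner is a hypothetical variant of the custom partitioner that ranges over emp_id rather than emp_txn_id; everything else keeps the original bean names):

<!-- hypothetical: puts emp_id bounds into each partition's step execution context -->
<bean id="partitioner" class="com.spring.mybatch.EmpRangePartitioner">
    <property name="empTxnDAO" ref="empTxnDAO" />
</bean>

<bean name="databaseReader" class="org.springframework.batch.item.database.JdbcCursorItemReader" scope="step">
    <property name="dataSource" ref="dataSource" />
    <property name="sql">
        <value>
            <![CDATA[
            select * from emp_txn
            where emp_id >= #{stepExecutionContext['minValue']}
              and emp_id <= #{stepExecutionContext['maxValue']}
            order by emp_id, emp_txn_id
            ]]>
        </value>
    </property>
    <property name="rowMapper">
        <bean class="com.spring.mybatch.EmpTxnRowMapper" />
    </property>
    <property name="verifyCursorPosition" value="false" />
</bean>

Since each emp_id now falls into exactly one partition, all emp_txn rows for a given employee are read and processed by a single thread, in emp_txn_id order.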

How to customize Spring Batch DelimitedLineTokenizer

I have two file types to insert into the database.
The formats are: aa;bb;cc and aa;bb;cc;dd;ee
This is my FlatFileItemReader:
<bean name="readerContractToAddIntoPRV" class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="comments" value="#" />
<property name="linesToSkip" value="1" />
<property name="strict" value="false" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="fieldSetMapper">
<bean class="net.wl.batchs.fieldSetMapper.LineToCreateIntoPrvFieldSetMapper" />
</property>
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value=";"/>
<property name="names" value="aa,bb,cc,dd,ee" />
</bean>
</property>
</bean>
</property>
</bean>
I want a setup that works for both types of files. For the moment, I have this:
org.springframework.batch.item.file.transform.IncorrectTokenCountException:
Incorrect number of tokens found in record: expected 3 actual 5
Do you have any ideas? Thank you.
Edit: after correction:
<bean name="readerContractToAddIntoPRV" class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="comments" value="#" />
<property name="linesToSkip" value="1" />
<property name="strict" value="false" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper" p:lineTokenizer-ref="multilineFileTokenizer">
<property name="fieldSetMapper">
<bean class="net.wl.batchs.fieldSetMapper.LineToCreateIntoPrvFieldSetMapper" />
</property>
</bean>
</property>
</bean>
<bean id="multilineFileTokenizer" class="org.springframework.batch.item.file.transform.PatternMatchingCompositeLineTokenizer">
<property name="tokenizers">
<map>
<entry key="*;*;*;*;*" value-ref="NSCE_ICCID_MSISDN_LOGIN_PWD"/>
<entry key="*;*;*" value-ref="NSCE_ICCID_MSISDN"/>
<entry key="*" value-ref="headerDefault"/>
</map>
</property>
</bean>
<bean id="parentLineTokenizer" class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer" abstract="true">
<property name="delimiter" value=";"/>
</bean>
<bean id="NSCE_ICCID_MSISDN_LOGIN_PWD" parent="parentLineTokenizer">
<property name="names" value="nsce,iccid,msisdn,login,pwd" />
</bean>
<bean id="NSCE_ICCID_MSISDN" parent="parentLineTokenizer">
<property name="names" value="nsce,iccid,msisdn" />
</bean>
<bean id="headerDefault" parent="parentLineTokenizer">
<property name="names" value="nsce,iccid,msisdn" />
</bean>
The issue isn't your tokenizer. What you'll have to do is use the PatternMatchingCompositeLineMapper (http://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/file/mapping/PatternMatchingCompositeLineMapper.html). This will allow you to create a pattern for each line type you have and associate it with the appropriate LineTokenizer.
You can see this LineMapper in action in our samples here: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/resources/jobs/multilineOrderInputTokenizers.xml
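A minimal sketch of that approach, reusing the tokenizers defined above (the two FieldSetMapper beans, fiveFieldMapper and threeFieldMapper, are hypothetical; each pattern selects both the tokenizer and the mapper for that line type):

<bean name="readerContractToAddIntoPRV" class="org.springframework.batch.item.file.FlatFileItemReader">
    <property name="comments" value="#" />
    <property name="linesToSkip" value="1" />
    <property name="strict" value="false" />
    <property name="lineMapper" ref="compositeLineMapper" />
</bean>

<bean id="compositeLineMapper" class="org.springframework.batch.item.file.mapping.PatternMatchingCompositeLineMapper">
    <property name="tokenizers">
        <map>
            <entry key="*;*;*;*;*" value-ref="NSCE_ICCID_MSISDN_LOGIN_PWD"/>
            <entry key="*;*;*" value-ref="NSCE_ICCID_MSISDN"/>
        </map>
    </property>
    <property name="fieldSetMappers">
        <map>
            <!-- hypothetical mappers, one per record layout -->
            <entry key="*;*;*;*;*" value-ref="fiveFieldMapper"/>
            <entry key="*;*;*" value-ref="threeFieldMapper"/>
        </map>
    </property>
</bean>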

Spring + MongoDB + Quartz = OptimisticLockingFailureException

On my Spring app I have a job with the following setup:
<!-- Spring Quartz Job -->
<bean id="runMeJob" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
<property name="targetObject" ref="com.pixolut.mrb.ob.ss.SsGateway" />
<property name="targetMethod" value="scheduler" />
</bean>
<bean id="simpleTrigger" class="org.springframework.scheduling.quartz.SimpleTriggerBean">
<property name="jobDetail" ref="runMeJob" />
<property name="repeatInterval" value="5000" />
<property name="startDelay" value="1000" />
</bean>
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="jobDetails">
<list>
<ref bean="runMeJob" />
</list>
</property>
<property name="triggers">
<list>
<ref bean="simpleTrigger" />
</list>
</property>
</bean>
The problem is that when I try to save an object using the MongoTemplate save function, I get an OptimisticLockingFailureException.
Is it because Quartz doesn't support Mongo?
This issue was caused by a null property on my model.

Spring Batch - Issue with PageSize in JdbcPagingItemReader

Hi, we are working on a Spring Batch job which processes all the SKUs in the SKU table and sends a request to the inventory system to get the inventory details. The inventory system needs 100 SKU ids at a time, so we have set the pageSize to 100.
In the reader log we see:
SELECT * FROM (SELECT S_ID, S_PRNT_PRD, SQ, ROWNUM as TMP_ROW_NUM FROM
XXX_SKU WHERE SQ >= :min and SQ <= :max ORDER BY SQ ASC) WHERE ROWNUM <= 100
But we observe in the writer that for some requests 100 SKUs are sent and for others only 1 SKU is sent.
public void write(List<? extends XXXPagingBean> pItems) throws XXXSkipItemException {
    if (mLogger.isLoggingDebug()) {
        mLogger.logDebug("XXXInventoryServiceWriter.write() method STARTING, ItemsList size:{0}" + pItems.size());
    }
    ....
    ....
}
pageSize and commit-interval are both set to 100 (are these supposed to be the same?).
Should the sortKey (SEQ_ID) be the same column as the one used in the partitioner?
Bean configurations in XML:
<!-- InventoryService Writer configuration -->
<bean id="inventoryGridService" class="atg.nucleus.spring.NucleusResolverUtil" factory-method="resolveName">
    <constructor-arg value="/com/XXX/gigaspaces/inventorygrid/service/InventoryGridService" />
</bean>

<bean id="inventoryWriter" class="com.XXX.batch.integrations.XXXXXX.XXXXInventoryServiceWriter" scope="step">
    <property name="jdbcTemplate" ref="batchDsTemplate" />
    <property name="inventoryGridService" ref="inventoryGridService" />
</bean>

<bean id="pagingReader" class="org.springframework.batch.item.database.JdbcPagingItemReader" xmlns="http://www.springframework.org/schema/beans" scope="step">
    <property name="dataSource" ref="dataSource" />
    <property name="queryProvider">
        <bean id="productQueryProvider" class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
            <property name="dataSource" ref="dataSource" />
            <property name="selectClause" value="select S_ID, S_PRNT_PRD" />
            <property name="fromClause" value="from XXX_SKU" />
            <property name="sortKey" value="SEQ_ID" />
            <property name="whereClause" value="SEQ_ID >= :min and SEQ_ID <= :max" />
        </bean>
    </property>
    <property name="parameterValues">
        <map>
            <entry key="min" value="#{stepExecutionContext[minValue]}" />
            <entry key="max" value="#{stepExecutionContext[maxValue]}" />
        </map>
    </property>
    <property name="pageSize" value="100" />
    <property name="rowMapper">
        <bean class="com.XXX.batch.integrations.endeca.XXXPagingRowMapper" />
    </property>
</bean>
Please suggest.
Remove your whereClause from the productQueryProvider bean definition and get rid of your parameterValues, and it should work. The PagingQueryProvider takes care of paging automatically for you; there's no need to do that manually yourself.
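A sketch of the trimmed reader following that advice (original bean names kept; the factory-generated query pages by the sortKey on its own):

<bean id="pagingReader" class="org.springframework.batch.item.database.JdbcPagingItemReader" scope="step">
    <property name="dataSource" ref="dataSource" />
    <property name="queryProvider">
        <bean id="productQueryProvider" class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
            <property name="dataSource" ref="dataSource" />
            <property name="selectClause" value="select S_ID, S_PRNT_PRD" />
            <property name="fromClause" value="from XXX_SKU" />
            <!-- no whereClause and no parameterValues: the provider builds the
                 paging predicate from the sort key itself -->
            <property name="sortKey" value="SEQ_ID" />
        </bean>
    </property>
    <property name="pageSize" value="100" />
    <property name="rowMapper">
        <bean class="com.XXX.batch.integrations.endeca.XXXPagingRowMapper" />
    </property>
</bean>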
