How to partition a large file using Spring Batch

I want to read a large text file using Spring Batch, and I want to use the partitioning logic that Spring Batch provides. The partitioners that are already available do not fit my use case. I want to read the file through a FlatFileItemReader using partitions.
Please help.

You can configure a ThreadPoolTaskExecutor and tweak the various properties according to your needs:
<bean name="batchTaskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor" >
<property name="maxPoolSize" value="6"/>
<property name="corePoolSize" value="4"/>
<property name="threadNamePrefix" value="batchitem"/>
<property name="threadGroupName" value="BATCH"/>
</bean>
Then, when you configure the tasklet inside the step that does the actual chunk processing, add the task-executor attribute referencing the configured executor. For example:
<batch:tasklet task-executor="batchTaskExecutor" transaction-manager="transactionManager" allow-start-if-complete="true">
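If the built-in partitioners do not fit, a custom Partitioner can split the file into line ranges, one ExecutionContext per partition. The following is only a minimal sketch; the class name and context keys are assumptions for illustration:

import java.util.HashMap;
import java.util.Map;

import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;

// Splits a file of totalLines lines into gridSize roughly equal line ranges.
public class LineRangePartitioner implements Partitioner {

    private final long totalLines; // assumed to be counted in an earlier step

    public LineRangePartitioner(long totalLines) {
        this.totalLines = totalLines;
    }

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        Map<String, ExecutionContext> partitions = new HashMap<String, ExecutionContext>();
        long linesPerPartition = (totalLines + gridSize - 1) / gridSize;
        for (int i = 0; i < gridSize; i++) {
            ExecutionContext context = new ExecutionContext();
            context.putLong("startLine", i * linesPerPartition);
            context.putLong("endLine", Math.min((i + 1) * linesPerPartition, totalLines));
            partitions.put("partition" + i, context);
        }
        return partitions;
    }
}

Each partition's step-scoped FlatFileItemReader could then bind #{stepExecutionContext['startLine']} and #{stepExecutionContext['endLine']} to properties such as linesToSkip and maxItemCount (for example linesToSkip = startLine and maxItemCount = endLine - startLine).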

Related

How to define a PropertyPlaceholderConfigurer local to a specific bean?

I've been using org.springframework.beans.factory.config.PropertyPlaceholderConfigurer and in my experience ("citation needed" LOL) it sets the property values globally.
Is there a way to specify different PropertyPlaceholderConfigurer instances for different beans within the same application context xml?
My current code is similar to
<bean id="a" class="X">
<property name="foo" value="bar"/>
<property name="many" value="more"/>
</bean>
<bean id="b" class="X">
<property name="foo" value="baz"/>
<property name="number_of_properties" value="a zillion"/>
</bean>
I would like to do something like (pseudo-code below):
<bean id="a" class="X">
... parse the contents of "a.properties" here ...
</bean>
<bean id="b" class="X">
... parse the contents of "b.properties" here ...
</bean>
The above is non-working pseudo code to illustrate the concept; the point being, I want a different properties file to feed each bean.
WHY?
I want to have those specific properties in separate properties files and not in the XML.
I think the following link can be helpful to you.
Reference Link
There, the @Value annotation is used to bind properties from the file to the bean class where you intend to use them, either a single value with @Value("${my.property.name}") or the whole file as a java.util.Properties variable via a SpEL expression.
You can define multiple Properties beans (one per file) as below:
<bean id="myProperties"
class="org.springframework.beans.factory.config.PropertiesFactoryBean">
<property name="locations">
<list>
<value>classpath*:my.properties</value>
</list>
</property>
</bean>
Then use the bean's id as a reference in your own bean to pull that properties file in, as in the sketch below.
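For illustration only, assuming the myProperties bean defined above, the whole Properties object can be injected with a SpEL expression:

import java.util.Properties;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class MyBean {

    // SpEL reference to the "myProperties" PropertiesFactoryBean defined above
    @Value("#{myProperties}")
    private Properties myProperties;

    public String getFoo() {
        return myProperties.getProperty("foo");
    }
}

A second bean can reference a different Properties bean (for example one backed by b.properties) in the same way.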
It can also be handy to combine this with a property placeholder bean.
Refer to the Importance of Unresolvable Placeholder link for detailed information on its usage.
Hope this was helpful.

How to avoid hardcoded names in DelimitedLineTokenizer names?

I am using DelimitedLineTokenizer to read from a txt file using FlatFileItemReader. However, is there a way to avoid hardcoding the "names" property of the fields? Can we use a bean instead?
<bean id="employeeReader" class="org.springframework.batch.item.file.FlatFileItemReader"
scope="step">
<property name="resource" value="#{jobParameters['input.file.name']}" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="names" value="empId,empName,empAge" />
</bean>
</property>
<property name="fieldSetMapper">
<bean
class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="targetType" value="com.example.Employee" />
</bean>
</property>
</bean>
</property>
</bean>
Currently there is not because the result of the LineTokenizer's work is a FieldSet. A FieldSet is like a ResultSet for a file. So just like in a ResultSet, we need to reference each "column" by something which in this case is the name. The LineTokenizer has no insight into where the data is ending up (what the object looks like) so we have no way to introspect it. If you wanted to take a more dynamic approach, you'd want to implement your own LineMapper that combines the two functions together allowing for that type of introspection.
As always, pull requests are appreciated!
In addition to Michael Minella's answer, here's what you can do :
You can use a value such as #{jobParameters['key']}, #{jobExecutionContext['key']} or #{stepExecutionContext['key']} in the <property name="names"> tag.
This means that you can have a step or a listener that does your business logic and saves the result in the ExecutionContext at any time:
stepExecution.getJobExecution().getExecutionContext().put(key, value);
Keep in mind, though, that the names property of a DelimitedLineTokenizer needs a String (well, not really, but close enough), not a bean.
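As a rough sketch, assuming a listener registered on the step and a hypothetical columnNames key, such a listener could look like this:

import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.BeforeStep;

public class ColumnNamesListener {

    @BeforeStep
    public void storeColumnNames(StepExecution stepExecution) {
        // whatever business logic determines the column order goes here
        String names = "empId,empName,empAge";
        stepExecution.getJobExecution().getExecutionContext().put("columnNames", names);
    }
}

A step-scoped reader can then use value="#{jobExecutionContext['columnNames']}" for the names property instead of the hardcoded list.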

Ehcache statistics

I would like to see all statistics for Ehcache while the server is running.
In the documentation I have found objects such as "StatisticsGateway" and "SampledCache". I'm using Ehcache 2.9.
Using StatisticsGateway I get incomplete statistics. With the SampledCache object I get more statistics, but nowhere is it described how to retrieve that object.
For example, getting the StatisticsGateway object is as follows:
Cache cache = cacheManager.getCache("name");
StatisticsGateway statistic = cache.getStatistics();
statistic.cacheHitCount() etc.
How to get the SampledCache object?
Thanks in advance!
Late answer :) It may help someone else.
You can use jconsole.exe from your java/bin directory; JConsole can show you the statistics.
You may need to add JMX support to see the statistics in JConsole:
<!-- JMX for Ehcache -->
<bean id="managementService" class="net.sf.ehcache.management.ManagementService"
      init-method="init" destroy-method="dispose">
    <constructor-arg ref="ehcache" />
    <constructor-arg ref="mbeanServer" />
    <constructor-arg index="2" value="true" />
    <constructor-arg index="3" value="true" />
    <constructor-arg index="4" value="true" />
    <constructor-arg index="5" value="true" />
</bean>
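If you prefer to do the same registration in code, here is a minimal sketch; obtaining the CacheManager via getInstance() is an assumption about how your application is set up:

import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;

import net.sf.ehcache.CacheManager;
import net.sf.ehcache.management.ManagementService;

public class EhcacheJmxSetup {

    public static void registerEhcacheMBeans() {
        CacheManager cacheManager = CacheManager.getInstance();
        MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
        // registers the CacheManager, cache, configuration and statistics MBeans
        ManagementService.registerMBeans(cacheManager, mBeanServer, true, true, true, true);
    }
}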
SampledCache acts as a decorator object. Basically, you create an instance of SampledCache and pass a Cache instance as the backing cache. The backing cache is the one for which you need stats, in your case the cache instance. Something like:
SampledCache sampledCache = new SampledCache(cache);
You can call methods on sampledCache to get the desired stats. I created a simple example here: http://www.ashishpaliwal.com/blog/2015/01/using-ehcache-sampledcache/

Spring 3.2 and Quartz Scheduler

I have a Spring application to maintain that has the Quartz Scheduler configured in an applicationContext-quartz.xml file. A SchedulerFactoryBean is defined with a list of 4 triggers.
One of the triggers I have to modify is a CronTrigger with a simple schedule that runs on the 15th of the month at 3 AM. I need to take some special holidays into account. I'm aware I can use the Calendar class. My question really is: how do I configure it in the XML file? I only want one of the triggers to use it.
Thanks
If those special holidays can be expressed into a single cron expression you shouldn't have problems.
If those special holidays can't be expressed into a single cron expression and you don't want to modify the following:
<bean id="quartzScheduler" class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="triggers">
<list>
<ref bean="cronTrigger1" />
<ref bean="cronTrigger2" />
<ref bean="cronTrigger3" />
<ref bean="cronTrigger4" />
</list>
</property>
I think you can't do what you want because in a CronTriggerBean:
<bean id="cronTrigger1" ="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="quartzSchedulerSpecialHolidays" />
<property name="cronExpression" value="abracadabra" />
</bean>
you can associate only one jobDetail with one cronExpression.
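Since the question mentions the Quartz Calendar class, here is a rough sketch of how that approach might look; the calendar name and the example date are assumptions:

import java.util.GregorianCalendar;

import org.quartz.Scheduler;
import org.quartz.impl.calendar.HolidayCalendar;

public class HolidayCalendarSetup {

    public void registerHolidayCalendar(Scheduler scheduler) throws Exception {
        HolidayCalendar holidays = new HolidayCalendar();
        // exclude 25 December 2015 (month is 0-based in GregorianCalendar)
        holidays.addExcludedDate(new GregorianCalendar(2015, 11, 25).getTime());
        // register under a name; replace any existing calendar and update affected triggers
        scheduler.addCalendar("holidayCalendar", holidays, true, true);
    }
}

Only the trigger that should honour the holidays then sets its calendarName property to holidayCalendar, leaving the other three triggers untouched. SchedulerFactoryBean also exposes a calendars map property for registering calendars from the configuration.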

Spring data jpa add filter/interceptor

I have used Hibernate before and successfully added a filter that would intercept saves, so that entities implementing a certain interface would have something logged.
Is it possible to do something similar in the new Spring Data? I have just started using it.
Yes, you can always add filters/interceptors with Spring Data.
Following is an example:
<bean id="customizableTraceInterceptor" class="
org.springframework.aop.interceptor.CustomizableTraceInterceptor">
<property name="enterMessage" value="Entering $[methodName]($[arguments])"/>
<property name="exitMessage" value="Leaving $[methodName](): $[returnValue]"/>
</bean>
<aop:config>
    <aop:advisor advice-ref="customizableTraceInterceptor"
                 pointcut="execution(public * org.springframework.data.jpa.repository.JpaRepository+.*(..))"/>
</aop:config>
reference: http://static.springsource.org/spring-data/data-jpa/docs/current/reference/html/
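If the goal is specifically the behaviour described in the question (log saves of entities implementing a marker interface), a rough sketch using an AspectJ aspect might look like this; the Auditable interface and the class name are assumptions:

import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Assumed marker interface from the question's description
interface Auditable {}

@Aspect
public class AuditLoggingAspect {

    private static final Logger log = LoggerFactory.getLogger(AuditLoggingAspect.class);

    // intercepts single-argument save(..) calls on any Spring Data JPA repository
    @Before("execution(* org.springframework.data.jpa.repository.JpaRepository+.save(..)) && args(entity)")
    public void logSave(Object entity) {
        if (entity instanceof Auditable) {
            log.info("Saving auditable entity: {}", entity);
        }
    }
}

The aspect needs to be registered as a bean, with <aop:aspectj-autoproxy/> (or @EnableAspectJAutoProxy) enabled.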
