CronTriggerFactoryBean doesn't work with new version - spring

I use Quartz to schedule my jobs (in a Maven project using Spring).
I updated Quartz to version 2.3.0 and replaced CronTriggerBean and JobDetailBean with CronTriggerFactoryBean and JobDetailFactoryBean, but with this configuration the job is no longer fired at the times given by the cron expression, the way it was with the original configuration (CronTriggerBean).
Do I need to implement something else?
quartz-context.xml
<bean id="jobImportFi01QuartzTrigger"
class="org.springframework.scheduling.quartz.CronTriggerFactoryBean">
<property name="group" value="xxx" />
<property name="jobDetail" ref="jobImportFi01Quartz" />
<property name="cronExpression" value="${jobImportFi01.cron.expression}" />
<property name="misfireInstructionName"
value="MISFIRE_INSTRUCTION_DO_NOTHING" />
</bean>
<bean id="jobImportFi01Quartz"
class="org.springframework.scheduling.quartz.JobDetailFactoryBean">
<property name="group" value="xxx" />
<property name="jobClass"
value="com.batch.job.timdataimport.quartz.ImportJobDetail" />
<property name="description" value="Fi01Import" />
<property name="jobDataAsMap">
<map>
<entry key="jobName" value="jobImportFi01" />
</map>
</property>
</bean>
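One thing worth checking (the scheduler bean is not shown above, so this is an assumption): with the factory-bean style, the trigger produced by CronTriggerFactoryBean must still be registered with a SchedulerFactoryBean, otherwise nothing fires. A minimal sketch:
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<!-- register the trigger so the scheduler actually fires it -->
<property name="triggers">
<list>
<ref bean="jobImportFi01QuartzTrigger" />
</list>
</property>
</bean>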

This is what we use with Quartz 2.3.0 and it works OK:
<bean id="job1" class="org.quartz.impl.JobDetailImpl">
<property name="jobClass" value="com.quartzdesk.test.quartz.v2.TestJob"/>
<property name="group" value="quartzdesk-test"/>
<property name="name" value="Job1"/>
<property name="description"
value="Simple test job."/>
<property name="durability" value="true"/>
<property name="jobDataMap">
<bean class="org.quartz.JobDataMap">
<constructor-arg>
<util:map>
<entry key="jobKey01" value="value01"/>
</util:map>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="job1Trigger"
class="org.quartz.impl.triggers.CronTriggerImpl">
<property name="name" value="Job1Trigger"/>
<property name="group" value="quartzdesk-test"/>
<property name="jobName" value="Job1"/>
<property name="jobGroup" value="quartzdesk-test"/>
<property name="description" value="Cron trigger that fires every 15 minutes."/>
<property name="cronExpression" value="0 1/15 * * * ?"/>
<property name="startTime" value="2016-01-01"/>
<property name="calendarName" value="annualCalendar"/>
<property name="jobDataMap">
<bean class="org.quartz.JobDataMap">
<constructor-arg>
<util:map>
<entry key="jobTriggerKey01" value="value01"/>
</util:map>
</constructor-arg>
</bean>
</property>
</bean>
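Note that these plain Quartz JobDetailImpl and CronTriggerImpl beans likewise have to be handed to the scheduler (for example via SchedulerFactoryBean's jobDetails and triggers properties) before they will fire; defining them as Spring beans alone does not schedule anything.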

Related

Ignite persistence performance hit and metrics

I am trying out native persistence in Apache Ignite. My setup is currently a local, single-node cluster. I enabled persistence by adding this property to my data region:
<property name="persistenceEnabled" value="true"/>
My full data region configuration is as follows:
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="dr.local.input.trade"/>
<property name="persistenceEnabled" value="true"/>
<property name="metricsEnabled" value="true"/>
<property name="initialSize" value="#{200 * 1024 * 1024}"/>
<property name="maxSize" value="#{500 * 1024 * 1024}"/>
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
Now the entries are being persisted, i.e. if I shut down Ignite and restart it, my data comes back into the cache.
However, I am seeing a significant performance hit: around 35% higher put-operation latency compared to a non-persisted data region. I have referred to the Ignite persistence tuning page and singled out the following properties and their values:
Property                   Value
WAL mode                   LOG_ONLY
walCompactionLevel         3
walCompactionEnabled       true
writeThrottlingEnabled     true
checkpointBufferSize       512 MB
checkpointFrequency        5 minutes
Is there anything more that I can tune? Is the performance hit I mentioned above typical, or can it be lowered further?
I also tried looking at the persistence-related JMX metrics using JConsole, under org.apache.368239c8.ignitelocal."Persistent Store". All the metrics there show as 0. The data is definitely persisted; I can see it in the Ignite work dir and WAL dir. Am I looking at the wrong metrics? Please help.
Attaching the entire Ignite config below.
<?xml version="1.0" encoding="UTF-8"?>
<!--
Generated by Chef for ignite1.intranet.com
-->
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xmlns="http://www.springframework.org/schema/beans"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
<property name="searchSystemEnvironment" value="true"/>
</bean>
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Set to true to enable distributed class loading for examples, default is false. -->
<property name="sslContextFactory">
<bean class="org.apache.ignite.ssl.SslContextFactory">
<property name="keyStoreFilePath" value="/home/sysSvcDevOps/ssl/ignite1.keystore.jks"/>
<property name="keyStorePassword" value="KeyStore443"/>
<property name="keyStoreType" value="jks"/>
<property name="trustStoreFilePath" value="/home/sysSvcDevOps/ssl/cacerts/java.cacerts.jks"/>
<property name="trustStorePassword" value="changeit"/>
<property name="trustStoreType" value="jks"/>
</bean>
</property>
<property name="igniteInstanceName" value=".dev"/>
<property name="consistentId" value="ignite1.dev"/>
<property name="workDirectory" value="/apps/Svc/dev/Ignite/IgniteData/persistentstore/work"/>
<property name="rebalanceThreadPoolSize" value="8"/>
<property name="publicThreadPoolSize" value="32"/>
<property name="systemThreadPoolSize" value="64"/>
<property name="queryThreadPoolSize" value="64"/>
<property name="failureDetectionTimeout" value="30000"/>
<property name="authenticationEnabled" value="true"/>
<property name="metricsUpdateFrequency" value="30000"/>
<property name="peerClassLoadingEnabled" value="false"/>
<property name="clientMode" value="false"/>
<!-- Enable task execution events for examples. -->
<property name="includeEventTypes">
<list>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_STARTED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_STOPPED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_NODES_LEFT"/>
</list>
</property>
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="walSegmentSize" value="1073741824"/>
<property name="walSegments" value="20"/>
<property name="maxWalArchiveSize" value="10737418240"/>
<property name="walCompactionEnabled" value="true"/>
<property name="walCompactionLevel" value="4"/>
<property name="checkpointFrequency" value="300000"/>
<property name="checkpointThreads" value="16"/>
<property name="checkpointReadLockTimeout" value="60000"/>
<property name="lockWaitTime" value="45000"/>
<property name="checkpointWriteOrder" value="RANDOM"/>
<property name="pageSize" value="4096"/>
<property name="writeThrottlingEnabled" value="true"/>
<!-- wal storage paths -->
<property name="walPath" value="/apps/Svc/dev/Ignite/IgniteData"/>
<property name="walArchivePath" value="/apps/Svc/dev/Ignite/IgniteDataArchive"/>
<property name="storagePath" value="/apps/Svc/dev/Ignite/IgniteData/archive"/>
<property name="dataRegionConfigurations">
<list>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="dr.dev.referencedata"/>
<property name="persistenceEnabled" value="true"/>
<property name="initialSize" value="1073741824"/>
<property name="maxSize" value="4294969673"/>
<property name="checkpointPageBufferSize" value="1073741824"/>
</bean>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="dr.dev.input"/>
<property name="persistenceEnabled" value="true"/>
<property name="metricsEnabled" value="true"/>
<property name="checkpointPageBufferSize" value="#{4 * 1024 * 1024 * 1024}"/>
<property name="initialSize" value="12884901888"/>
<property name="maxSize" value="81604378624"/>
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="dr.dev.input.exception"/>
<property name="persistenceEnabled" value="true"/>
<property name="metricsEnabled" value="true"/>
<property name="checkpointPageBufferSize" value="#{4 * 1024 * 1024 * 1024}"/>
<property name="initialSize" value="4294967296"/>
<property name="maxSize" value="21474836480"/>
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="dr.dev.output"/>
<property name="initialSize" value="1073741824"/>
<property name="persistenceEnabled" value="true"/>
<property name="metricsEnabled" value="true"/>
<property name="checkpointPageBufferSize" value="#{2 * 1024 * 1024 * 1024}"/>
<property name="maxSize" value="2147483648"/>
</bean>
</list>
</property>
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="default_region"/>
<property name="persistenceEnabled" value="true"/>
<property name="initialSize" value="268435456"/>
<property name="maxSize" value="268435456"/>
</bean>
</property>
</bean>
</property>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi">
<property name="zkConnectionString" value="zk1.intranet.com:22001,zk2.intranet.com:22001"/>
<property name="zkRootPath" value="/ignite"/>
<property name="sessionTimeout" value="120000"/>
<property name="joinTimeout" value="10000"/>
</bean>
</property>
<property name="communicationSpi">
<bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
<property name="socketWriteTimeout" value="60000"/>
</bean>
</property>
<property name="cacheConfiguration">
<list>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="referenceDataCacheTemplate*"/>
<property name="cacheMode" value="REPLICATED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.referencedata"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="sqlIndexMaxInlineSize" value="203"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="inputMetadataCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="readFromBackup" value="false"/>
<property name="sqlIndexMaxInlineSize" value="211"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.ModifiedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="5"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="inputReconCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="readFromBackup" value="false"/>
<property name="sqlIndexMaxInlineSize" value="211"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="4"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="inputExceptionsCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input.exception"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="readFromBackup" value="false"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="15"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="outputDataCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.output"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="sqlSchema" value=""/>
<property name="statisticsEnabled" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="450"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="reconAuditDataCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.referencedata"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="sqlSchema" value=""/>
<property name="statisticsEnabled" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="fileDataCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="queryParallelism" value="4"/>
<property name="eagerTtl" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="5"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="shortLivedReferenceDataTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input.exception"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="managementEnabled" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="64"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="2"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
</list>
</property>
<property name="sqlSchemas">
<list>
<value>dataInput</value>
</list>
</property>
</bean>
</beans>
Speaking of the possible performance drop on writes: in comparison to pure in-memory mode, the following disk interactions happen on updates.
In addition to the page modification in RAM, Ignite needs to provide consistency guarantees depending on your WAL mode: unless the WAL is disabled, every update must be written to a WAL file. No data is flushed to the partition files yet; the modification happens only in memory, plus a WAL record is written.
Once there are too many dirty pages in RAM, or a timeout occurs, Ignite starts a checkpointing process that flushes the dirty pages to the partition files on disk.
If the WAL becomes too big, Ignite might perform segment rotation, copying segments to the WAL archive to free up space for new WAL updates.
As you can see, there are at least three major disk-related operations, meaning that it's crucial to have really fast disks for the /wal, /walarchive and /db mounted folders. Again, it all depends on your use case, but in general it's strongly recommended to have the fastest available disks for WAL-related activity.
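For example, the storage, WAL and WAL-archive locations can be pointed at separate mounts via DataStorageConfiguration (the mount points below are placeholders, assuming dedicated fast disks for the WAL):
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<!-- placeholder mount points; put /wal and /walarchive on the fastest disks -->
<property name="storagePath" value="/mnt/db"/>
<property name="walPath" value="/mnt/wal"/>
<property name="walArchivePath" value="/mnt/walarchive"/>
</bean>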
Possible performance drop on reads.
Again, it depends on the scenario, but if all your data fits in memory (as it did before you turned persistence on), you will not see any performance difference on reads.
It should be noted that after a restart there is no data in RAM, so Ignite must preload it first, i.e. do a warm-up.
But if you have more data than your configured data region size, page replacement will take place, rotating data to and from disk. Worst-case scenario: say you have a 10 GB data region and an 11 GB dataset, and you want to scan your data twice in alphabetical order.
Imagine you have just restarted, so there is no data in RAM yet. Ignite starts reading data from disk and populating data pages in memory. Suppose that after the letter W the in-memory data set becomes full and page rotation is required to load the remaining W-Z data. In that case the oldest pages need to be evicted: say, the A-D chunk has to go back to disk so that W-Z can be loaded instead. Your in-memory data set is now something like W-Z, E-V. If you run the same scan query again, the whole data set will need to be replaced in the same way.
Enable persistence metrics.
Check that you have the following property in your data region configuration:
<property name="metricsEnabled" value="true"/>
Also, there is no need for
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
Page eviction applies only to non-persistent data regions.
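Putting both points together, a persistent data region along the lines of the one in the question might look like this (a sketch of the corrected region, with metrics on and the eviction mode removed):
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<!-- same region as in the question, minus pageEvictionMode -->
<property name="name" value="dr.local.input.trade"/>
<property name="persistenceEnabled" value="true"/>
<property name="metricsEnabled" value="true"/>
<property name="initialSize" value="#{200 * 1024 * 1024}"/>
<property name="maxSize" value="#{500 * 1024 * 1024}"/>
</bean>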

Not able to ref dataSource from other bean configuration in Spring

<bean id="hikariConfig" class="com.zaxxer.hikari.HikariConfig">
<property name="poolName" value="${models.DS_POOL_NAME}" />
</property>
</bean>
<bean id="DBPlaceholder" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
<property name="ignoreUnresolvablePlaceholders" value="true"/>
<property name="properties">
<bean class="org.apache.commons.configuration2.ConfigurationConverter" factory-method="getProperties">
<constructor-arg>
<bean id="DatabaseConfigurator" class="org.apache.commons.configuration2.DatabaseConfiguration">
<property name="dataSource" ref="dataSource" />
<property name="table" value="sample" />
<property name="keyColumn" value="PROPERTY" />
<property name="valueColumn" value="VALUE" />
<property name="configurationNameColumn" value="GROUP_NAME" />
<property name="configurationName" value="new" />
</bean>
</constructor-arg>
</bean>
</property>
</bean>
When we reference dataSource in the DBPlaceholder bean, ${models.DS_POOL_NAME} shows an error, because this value comes from the database-backed properties.
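One possible way around this (an assumption, since the dataSource bean itself is not shown): give the DatabaseConfiguration its own small bootstrap DataSource, so the placeholder configurer does not depend on the pool it is meant to configure, e.g. a plain DriverManagerDataSource:
<!-- hypothetical bootstrap pool used only by the placeholder configurer -->
<bean id="bootstrapDataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="${jdbc.driver}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.user}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
and reference it from the DatabaseConfiguration instead of dataSource.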

How to custom Spring Batch DelimitedLineTokenizer

I have two file types to insert into the database.
The formats are: aa;bb;cc and aa;bb;cc;dd;ee
This is my FlatFileItemReader:
<bean name="readerContractToAddIntoPRV" class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="comments" value="#" />
<property name="linesToSkip" value="1" />
<property name="strict" value="false" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="fieldSetMapper">
<bean class="net.wl.batchs.fieldSetMapper.LineToCreateIntoPrvFieldSetMapper" />
</property>
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value=";"/>
<property name="names" value="aa,bb,cc,dd,ee" />
</bean>
</property>
</bean>
</property>
</bean>
I want a setup that works for both types of files.
For the moment, I get this:
org.springframework.batch.item.file.transform.IncorrectTokenCountException:
Incorrect number of tokens found in record: expected 3 actual 5
Do you have any ideas?
Thank you.
Edit: after correction:
<bean name="readerContractToAddIntoPRV" class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="comments" value="#" />
<property name="linesToSkip" value="1" />
<property name="strict" value="false" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper" p:lineTokenizer-ref="multilineFileTokenizer">
<property name="fieldSetMapper">
<bean class="net.wl.batchs.fieldSetMapper.LineToCreateIntoPrvFieldSetMapper" />
</property>
</bean>
</property>
</bean>
<bean id="multilineFileTokenizer" class="org.springframework.batch.item.file.transform.PatternMatchingCompositeLineTokenizer">
<property name="tokenizers">
<map>
<entry key="*;*;*;*;*" value-ref="NSCE_ICCID_MSISDN_LOGIN_PWD"/>
<entry key="*;*;*" value-ref="NSCE_ICCID_MSISDN"/>
<entry key="*" value-ref="headerDefault"/>
</map>
</property>
</bean>
<bean id="parentLineTokenizer" class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer" abstract="true">
<property name="delimiter" value=";"/>
</bean>
<bean id="NSCE_ICCID_MSISDN_LOGIN_PWD" parent="parentLineTokenizer">
<property name="names" value="nsce,iccid,msisdn,login,pwd" />
</bean>
<bean id="NSCE_ICCID_MSISDN" parent="parentLineTokenizer">
<property name="names" value="nsce,iccid,msisdn" />
</bean>
<bean id="headerDefault" parent="parentLineTokenizer">
<property name="names" value="nsce,iccid,msisdn" />
</bean>
The issue isn't your tokenizer. What you'll have to do is use the PatternMatchingCompositeLineMapper (http://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/file/mapping/PatternMatchingCompositeLineMapper.html). This will allow you to create a pattern for each line type you have and associate it with the appropriate LineTokenizer.
You can see this LineMapper in action in our samples here: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/resources/jobs/multilineOrderInputTokenizers.xml
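For illustration, a sketch of that composite mapper wired with the two tokenizers from the edit above (the fieldSetMapper bean names here are hypothetical):
<bean id="compositeLineMapper" class="org.springframework.batch.item.file.mapping.PatternMatchingCompositeLineMapper">
<property name="tokenizers">
<map>
<entry key="*;*;*;*;*" value-ref="NSCE_ICCID_MSISDN_LOGIN_PWD"/>
<entry key="*;*;*" value-ref="NSCE_ICCID_MSISDN"/>
</map>
</property>
<property name="fieldSetMappers">
<map>
<!-- fiveFieldSetMapper / threeFieldSetMapper are hypothetical bean names -->
<entry key="*;*;*;*;*" value-ref="fiveFieldSetMapper"/>
<entry key="*;*;*" value-ref="threeFieldSetMapper"/>
</map>
</property>
</bean>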

spring batch admin showing jobs as not launchable

I have a Spring MVC webapp with Spring Batch built into it. I am having some issues getting my Spring Batch jobs to be launchable in the Spring Batch Admin console. This is what I see when I go to the jobs page:
All of my jobs are coming up as launchable=false. I was wondering how I can fix this. I read some documentation about why this happens, and it said that I need to use an AutomaticJobRegistrar.
I tried this but it didn't change anything. I've put my Spring Batch job configuration below. I'd appreciate it if someone could tell me what is missing.
Thanks
<beans profile="pre,prod">
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean"
parent="abstractCustDbJdbcDao">
<property name="transactionManager" ref="custDbTransactionManager" />
<property name="databaseType" value="db2" />
<property name="tablePrefix" value="REPMAN.BATCH_" />
</bean>
<bean id="jobExplorer"
class="org.springframework.batch.core.explore.support.JobExplorerFactoryBean"
parent="abstractCustDbJdbcDao" />
<bean class="org.springframework.batch.core.configuration.support.JobRegistryBeanPostProcessor">
<property name="jobRegistry" ref="jobRegistry" />
</bean>
<bean id="jobLoader" class="org.springframework.batch.core.configuration.support.AutomaticJobRegistrar">
<property name="applicationContextFactories">
<bean class="org.springframework.batch.core.configuration.support.ClasspathXmlApplicationContextsFactoryBean">
<property name="resources" value="classpath*:/META-INF/spring/jobs/*.xml" />
</bean>
</property>
<property name="jobLoader">
<bean class="org.springframework.batch.core.configuration.support.DefaultJobLoader">
<property name="jobRegistry" ref="jobRegistry" />
</bean>
</property>
</bean>
<bean id="jobRegistry"
class="org.springframework.batch.core.configuration.support.MapJobRegistry" />
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="jobDetails">
<list>
<ref bean="dailyTranCountJobDetail" />
<ref bean="bulletinBarMsgUpdateJobDetail" />
<ref bean="updateLovCacheJobDetail" />
</list>
</property>
<property name="triggers">
<list>
<ref bean="dailyTranCountCronTrigger" />
<ref bean="bulletinBarMsgUpdateCronTrigger" />
<ref bean="updateLovCacheCronTrigger" />
</list>
</property>
</bean>
<!-- scheduling properties -->
<util:properties id="batchProps" location="classpath:batch.properties" />
<context:property-placeholder properties-ref="batchProps" />
<!-- triggers -->
<bean id="dailyTranCountCronTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="dailyTranCountJobDetail" />
<property name="cronExpression" value="#{batchProps['cron.dailyTranCounts']}" />
</bean>
<bean id="bulletinBarMsgUpdateCronTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="bulletinBarMsgUpdateJobDetail" />
<property name="cronExpression" value="#{batchProps['cron.bulletinBarUpdateMsg']}" />
</bean>
<bean id="updateLovCacheCronTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="updateLovCacheJobDetail" />
<property name="cronExpression" value="#{batchProps['cron.updateLovCache']}" />
</bean>
<!-- job detail -->
<bean id="dailyTranCountJobDetail" class="org.springframework.scheduling.quartz.JobDetailBean">
<property name="jobClass" value="com.myer.reporting.batch.JobLauncherDetails" />
<property name="group" value="quartz-batch" />
<property name="jobDataAsMap">
<map>
<entry key="jobName" value="job-daily-tran-counts" />
<entry key="jobLocator" value-ref="jobRegistry" />
<entry key="jobLauncher" value-ref="jobLauncher" />
</map>
</property>
</bean>
<bean id="bulletinBarMsgUpdateJobDetail" class="org.springframework.scheduling.quartz.JobDetailBean">
<property name="jobClass" value="com.myer.reporting.batch.JobLauncherDetails" />
<property name="group" value="quartz-batch" />
<property name="jobDataAsMap">
<map>
<entry key="jobName" value="job-bulletin-bar-msg-update" />
<entry key="jobLocator" value-ref="jobRegistry" />
<entry key="jobLauncher" value-ref="jobLauncher" />
</map>
</property>
</bean>
<bean id="updateLovCacheJobDetail" class="org.springframework.scheduling.quartz.JobDetailBean">
<property name="jobClass" value="com.myer.reporting.batch.JobLauncherDetails" />
<property name="group" value="quartz-batch" />
<property name="jobDataAsMap">
<map>
<entry key="jobName" value="job-update-lov-cache" />
<entry key="jobLocator" value-ref="jobRegistry" />
<entry key="jobLauncher" value-ref="jobLauncher" />
</map>
</property>
</bean>
</beans>
There are a few things this could be:
Where is the XML file you reference above located? It needs to be in the META-INF/spring/batch/jobs directory in your WAR file (that's where Spring Batch Admin will look).
Don't configure common components in your XML file. That includes the jobLauncher, jobRepository, jobExplorer, jobLoader, and jobRegistry. That being said, I don't see an actual job defined in your XML file. The XML file needs one of those ;)
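For example, a minimal job definition to place in META-INF/spring/batch/jobs (the tasklet bean name is hypothetical):
<job id="job-daily-tran-counts" xmlns="http://www.springframework.org/schema/batch">
<step id="countStep">
<!-- dailyTranCountTasklet is a hypothetical tasklet bean defined elsewhere -->
<tasklet ref="dailyTranCountTasklet"/>
</step>
</job>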
You can read more about adding your own job definitions to Spring Batch Admin: http://docs.spring.io/spring-batch-admin/reference/jobs.html#Add_your_Own_Jobs_For_Launching_in_the_UI

Why Spring DataSourceTransactionManager suppress the concurrent number of ActiveMQ consumer

I've run into a strange problem.
When I configure a DataSourceTransactionManager in Spring XML, the number of concurrent ActiveMQ consumers is suppressed no matter how I change the "maxConcurrentConsumers" property value. I have 5 queues, and the total number of concurrent consumers across all 5 queues always stays at 8.
If I remove the DataSourceTransactionManager bean, each queue's concurrent consumers reach the maximum of 5 declared in "maxConcurrentConsumers".
The DataSourceTransactionManager works on the dataSource, so I cannot understand why it affects ActiveMQ.
version:
Spring 3.2.5.RELEASE
ActiveMq 5.9.0
application.xml
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${jdbc.driver}" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.user}" />
<property name="password" value="${jdbc.password}" />
</bean>
<!-- once I add this, the total number of ActiveMQ consumers stays at 8 -->
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<!-- activemq consumer connection -->
<bean id="consumerConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
destroy-method="stop">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL">
<value>tcp://localhost:61616</value>
</property>
</bean>
</property>
<property name="maxConnections" value="5"></property>
</bean>
<!-- I have 5 queues -->
<bean id="test_1" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg index="0" value="test_1" />
</bean>
<bean id="test_2" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg index="0" value="test_2" />
</bean>
<bean id="test_3" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg index="0" value="test_3" />
</bean>
<bean id="test_4" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg index="0" value="test_4" />
</bean>
<bean id="test_5" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg index="0" value="test_5" />
</bean>
<!-- consumer listener container -->
<bean id="testOneMessageListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="consumerConnectionFactory"></property>
<property name="concurrentConsumers" value="1" />
<property name="maxConcurrentConsumers" value="5" />
<property name="destination" ref="test_1"></property>
<property name="messageListener" ref="demoBusinessListener"></property>
</bean>
<bean id="testTwoMessageListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="consumerConnectionFactory"></property>
<property name="concurrentConsumers" value="1" />
<property name="maxConcurrentConsumers" value="5" />
<property name="destination" ref="test_2"></property>
<property name="messageListener" ref="demoBusinessListener"></property>
</bean>
<bean id="testThreeMessageListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="consumerConnectionFactory"></property>
<property name="concurrentConsumers" value="1" />
<property name="maxConcurrentConsumers" value="5" />
<property name="destination" ref="test_3"></property>
<property name="messageListener" ref="demoBusinessListener"></property>
</bean>
<bean id="testFourMessageListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="consumerConnectionFactory"></property>
<property name="concurrentConsumers" value="1" />
<property name="maxConcurrentConsumers" value="5" />
<property name="destination" ref="test_4"></property>
<property name="messageListener" ref="demoBusinessListener"></property>
</bean>
<bean id="testFiveMessageListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="consumerConnectionFactory"></property>
<property name="concurrentConsumers" value="1" />
<property name="maxConcurrentConsumers" value="5" />
<property name="destination" ref="test_5"></property>
<property name="messageListener" ref="demoBusinessListener"></property>
</bean>
Can someone help me?
After some testing, I found a way to resolve this problem:
when I change the dataSource "maxActive" parameter to a number greater than the sum of all MQ listeners' maxConcurrentConsumers (here 5 queues x 5 consumers = 25), it works fine.
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="${jdbc.driver}" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.user}" />
<property name="password" value="${jdbc.password}" />
<property name="maxActive" value="120" />
</bean>
It seems the maximum number of ActiveMQ listener threads is limited by the DataSource's maxActive parameter. Notably, commons-dbcp's BasicDataSource defaults maxActive to 8, which matches the cap of 8 total consumers observed above.
