I've created a partitioned persistent Ignite cache with SQL DDL. By default the backups setting is 0; now I want to change it to 2 backups (we have 3 nodes). On the Ignite server nodes I edited the configuration XML file to look like this:
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
.
.
</bean>
.
.
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="ourCache"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="2"/>
<property name="groupName" value="grp" />
<property name="partitionLossPolicy" value="READ_ONLY_SAFE"/>
</bean>
</list>
</property>
Unfortunately, when I look at the configuration in Ignite Visor, I see that "Affinity Backups" is still 0. I've also checked the "cache -a" statistics, where you can usually see the backup records per node - still 0.
How can I add backups for existing caches? The baseline is used in production and there are >100M records already.
The only way to change the backup factor (or most other cache configuration properties) is to create a new cache, copy all the data over from the old cache (if needed), and destroy the old cache.
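A minimal sketch of that copy-and-destroy approach in Java, assuming the existing cache is named ourCache; the new cache name ourCacheBk2 is made up for illustration, and key/value types are left generic since the table was created via SQL DDL:

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class CopyCacheWithBackups {
    public static void main(String[] args) {
        // Join the cluster as a client node (default configuration assumed).
        Ignition.setClientMode(true);
        Ignite ignite = Ignition.start();

        IgniteCache<Object, Object> oldCache = ignite.cache("ourCache");

        // New cache configured with 2 backups.
        CacheConfiguration<Object, Object> cfg = new CacheConfiguration<>("ourCacheBk2");
        cfg.setCacheMode(CacheMode.PARTITIONED);
        cfg.setBackups(2);
        cfg.setGroupName("grp");
        cfg.setPartitionLossPolicy(PartitionLossPolicy.READ_ONLY_SAFE);

        IgniteCache<Object, Object> newCache = ignite.getOrCreateCache(cfg);

        // Copy every entry; with >100M records an IgniteDataStreamer would be
        // much faster, but a plain loop shows the idea.
        for (Cache.Entry<Object, Object> e : oldCache)
            newCache.put(e.getKey(), e.getValue());

        // Once the copy has been verified, drop the old cache.
        oldCache.destroy();
    }
}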
I am working with TIBCO BusinessWorks 5.3, so we normally provide a query timeout in the SQL Direct/JDBC Query activity. But for the Ignite cache that timeout does not seem to work.
This is installed on a Linux server. I tried to add a setTimeout property in the config XML under the cacheConfiguration property node.
I have tried two different configurations:
1.
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!--some properties-->
<property name="setTimeout" value="60" />
</bean>
</list>
</property>
2.
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!--some properties-->
</bean>
<bean class="org.apache.ignite.cache.query.SqlFieldsQuery">
<property name="setTimeout" value="60" />
</bean>
</list>
</property>
The following error message is thrown:
org.springframework.beans.NotWritablePropertyException: Invalid property setTimeout of bean class [org.apache.ignite.configuration.CacheConfiguration]: Bean property setTimeout is not writable or has an invalid setter method.
Currently you may use the SqlQuery/SqlFieldsQuery API to set individual timeouts for queries: https://apacheignite-sql.readme.io/docs/query-cancellation
It's a known issue that there is no option to configure a default timeout for a query; here is a link for your reference (there is a PR that was active a month ago): https://issues.apache.org/jira/browse/IGNITE-7285
The JDBC query timeout is implemented, but not yet documented/released: https://issues.apache.org/jira/browse/IGNITE-5438
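For reference, a minimal sketch of a per-query timeout via SqlFieldsQuery; the cache name and SQL text below are placeholders:

import java.util.List;
import java.util.concurrent.TimeUnit;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class QueryTimeoutExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Object, Object> cache = ignite.cache("ourCache");

        // The query is cancelled if it runs longer than 60 seconds.
        SqlFieldsQuery qry = new SqlFieldsQuery("SELECT COUNT(*) FROM SOME_TABLE");
        qry.setTimeout(60, TimeUnit.SECONDS);

        List<List<?>> rows = cache.query(qry).getAll();
        System.out.println(rows);
    }
}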
Part of my Spring (4.3.23) / Hibernate (5.0.12) application "A" uses a data source that is exposed by a second application "B" (the data source is an in-memory database). Both A and B are deployed in Tomcat, and I don't control the start order. Once both A and B have started, both behave as expected; however, if A starts before B, an error is thrown during initialisation when Hibernate tries to query the data source:
org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator - HHH000342: Could not obtain connection to query metadata : Cannot create PoolableConnectionFactory (Connection is broken: "java.net.ConnectException: Connection refused: 127.0.1.1:5521" [90067-199])
Is there any way I can suppress this error, delay this part of initialisation, or tell Hibernate that the data source may not be immediately available?
Here are the relevant parts of my configuration:
<bean id="memDataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName" value="java:comp/env/jdbc/memdb" />
</bean>
<bean
id="memSessionFactory"
class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
<property name="dataSource" ref="memDataSource" />
<property name="packagesToScan" value="com.company" />
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.H2Dialect</prop>
</props>
</property>
</bean>
<bean id="memTransactionManager"class="org.springframework.orm.hibernate5.HibernateTransactionManager">
<property name="sessionFactory" ref="memSessionFactory" />
<qualifier value="memTransactions"/>
</bean>
<tx:annotation-driven transaction-manager="memTransactionManager" />
I would wind up scripting the deployment overall: bring up Tomcat with application A in the webapps directory, and periodically ping for successful deployment of application A before copying/moving application B into the webapps directory for deployment.
This entire solution would likely work best if you used a staging directory to move your WAR files onto the server, and then let the script clear the webapps directory and move/copy the new WAR(s) into webapps for a fresh deployment.
Side note: I've found that deployment of applications seems to happen in a consistent order, but I'm not sure whether it is alphabetical, oldest first, or something else.
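A rough sketch of that polling-then-deploying step in Java (the health URL and file paths are made up for illustration; a shell script with curl would do the same job):

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class StagedDeploy {
    public static void main(String[] args) throws Exception {
        // Wait until application A responds before deploying application B.
        URL healthUrl = new URL("http://localhost:8080/appA/");
        while (!isUp(healthUrl)) {
            Thread.sleep(5000);
        }

        // Move B's WAR from the staging directory into Tomcat's webapps directory.
        Path staged = Paths.get("/opt/staging/appB.war");
        Path webapps = Paths.get("/opt/tomcat/webapps/appB.war");
        Files.move(staged, webapps, StandardCopyOption.REPLACE_EXISTING);
    }

    private static boolean isUp(URL url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false;
        }
    }
}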
I am looking to load a number of values into my server configuration XML from a properties file.
However, after adding the placeholders I start getting "property cannot be resolved" errors. Preferably I would like to use Jasypt, which has loaded fine, but it has the same issue: the property cannot be resolved.
Sample placeholder:
<bean id="placeholderConfig" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="location" value="ignite.properties"/>
</bean>
Sample Bean:
<property name="sslContextFactory">
<bean class="org.apache.ignite.ssl.SslContextFactory">
<property name="keyStoreFilePath" value="ignite.jks"/>
<property name="keyStorePassword" value="${some.password}"/>
<property name="keyStoreType" value="JKS"/>
<property name="protocol" value="TLSv1.2"/>
<property name="trustManagers">
<bean class="org.apache.ignite.ssl.SslContextFactory" factory-method="getDisabledTrustManager"/>
</property>
</bean>
</property>
Is this possible? Is there a library I should have added? It otherwise runs fine if I do not use the placeholders.
The configuration is parsed by Spring and Ignite has nothing to do with it. I believe there are two possible reasons:
Incorrect file path. Note that if the file is on the classpath, the location should be classpath:ignite.properties.
Incorrect property name.
I have a problem with load-balancing Quartz jobs. When there are two instances running, only one of them handles all the jobs; the second one is idle. When I terminate the first instance, the second one starts to handle jobs until the first instance is started again.
I expected there to be some kind of load balancing that dispatches jobs between the two instances.
I am using Quartz version 1.8.6.
This is the relevant part of applicationContext.xml:
<bean id="firstJobDetail"
class="org.springframework.scheduling.quartz.JobDetailFactoryBean">
<property name="jobClass" value="com.mycompany.quartz.job.FirstJob" />
<property name="durability" value="true" />
</bean>
<bean id="firstTrigger" class="com.mycompany.quartz.PersistableCronTriggerFactoryBean">
<property name="jobDetail" ref="firstJobDetail" />
<property name="cronExpression" value="0/10 * * * * ?" />
</bean>
<bean id="quartzScheduler"
class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="configLocation" value="classpath:META-INF/quartz.properties" />
<property name="dataSource" ref="dataSource" />
<property name="transactionManager" ref="transactionManager" />
<!-- This name is persisted as SCHED_NAME in db. for local testing could
change to unique name to avoid collision with dev server -->
<property name="schedulerName" value="quartzScheduler" />
<!-- Will update database cron triggers to what is in this jobs file on
each deploy. Replaces all previous trigger and job data that was in the database.
YMMV -->
<property name="overwriteExistingJobs" value="true" />
<property name="autoStartup" value="true" />
<property name="applicationContextSchedulerContextKey" value="applicationContext" />
<property name="jobFactory">
<bean class="com.mycompany.quartz.AutowiringSpringBeanJobFactory" />
</property>
<!-- NOTE: Must add both the jobDetail and trigger to the scheduler! -->
<property name="jobDetails">
<list>
<ref bean="firstJobDetail" />
</list>
</property>
<property name="triggers">
<list>
<ref bean="firstTrigger" />
</list>
</property>
</bean>
And this is the quartz.properties file:
# Spring uses LocalDataSourceJobStore extension of JobStoreCMT
org.quartz.jobStore.useProperties=true
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
# Change this to match your DB vendor
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
# Needed to manage cluster instances
org.quartz.scheduler.instanceId=AUTO
org.quartz.scheduler.instanceName=MY_JOB_SCHEDULER
org.quartz.scheduler.rmi.export = false
org.quartz.scheduler.rmi.proxy = false
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
According to the documentation, you need to set org.quartz.jobStore.clusterCheckinInterval.
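The simplest change is adding a line such as org.quartz.jobStore.clusterCheckinInterval = 20000 to the quartz.properties shown above (the 20000 ms value is only an example). If you would rather keep it in Spring, here is a sketch of the programmatic equivalent on SchedulerFactoryBean:

import java.util.Properties;
import org.springframework.scheduling.quartz.SchedulerFactoryBean;

public class SchedulerConfig {
    public SchedulerFactoryBean quartzScheduler() {
        SchedulerFactoryBean factory = new SchedulerFactoryBean();

        Properties props = new Properties();
        props.setProperty("org.quartz.jobStore.isClustered", "true");
        // How often (in ms) each node checks in with the cluster; also used to
        // detect failed instances. 20000 is just an example value.
        props.setProperty("org.quartz.jobStore.clusterCheckinInterval", "20000");

        factory.setQuartzProperties(props);
        return factory;
    }
}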
I know it is too late to answer your question, but recently I came across the same issue and found your post online.
Whenever there are two instances running, one of them tries to put a lock on the job. Since your trigger fires every 10 seconds (0/10 * * * * ?), which is a very short interval, instance 2 is unable to acquire the lock on the job.
Increase the period to one minute (0 0/1 * * * ?). You will see that both instances process the jobs.
Please let me know if you face any other issues.
Documentation says:
Load-balancing occurs automatically, with each node of the cluster firing jobs as quickly as it can. [...] Only one node will fire the job for each firing. [...] It won't necessarily be the same node each time - it will more or less be random which node runs it. The load balancing mechanism is near-random for busy schedulers (lots of triggers) but favors the same node for non-busy (e.g. few triggers) schedulers.
My project has several processes that are executed each day. The problem I found is that after a job execution, when I execute the same process again (with different job parameters, of course), Spring Batch generates a new job instance BUT the variable values remain in memory for the new execution.
How is this possible? Don't new job instances create new Java instances? Is it a configuration problem?
My JobLoader configuration:
<bean id="jobLoader" class="org.springframework.batch.core.configuration.support.AutomaticJobRegistrar">
<property name="applicationContextFactories">
<bean class="org.springframework.batch.core.configuration.support.ClasspathXmlApplicationContextsFactoryBean">
<property name="resources" value="classpath*:/META-INF/spring/batch/jobs/*.xml" />
</bean>
</property>
<property name="jobLoader">
<bean class="org.springframework.batch.core.configuration.support.DefaultJobLoader">
<property name="jobRegistry" ref="jobRegistry" />
</bean>
</property>
</bean>
Thanks,