Set heartbeatIntervalSeconds using Spring XML

I am using spring-data-cassandra v1.3.2 in my project.
Is it possible to set heartbeatIntervalSeconds using the Spring XML configuration file?
I am getting 4 lines of heartbeat DEBUG logs every 30 seconds in my application logs and I am not sure how to avoid them.

Unfortunately, no.
After reviewing the SD Cassandra CassandraCqlClusterParser class, it is apparent that you can specify both "local" and "remote" connection pooling options; however, neither handler handles all of the Cassandra Java driver "pooling options" appropriately (such as heartbeatIntervalSeconds).
Several other options appear to be missing as well: idleTimeoutSeconds, initializationExecutor, poolTimeoutMillis, and protocolVersion.
Equally unfortunate, it appears the SD Cassandra PoolingOptionsFactoryBean does not support these "pooling options" either.
However, not all is lost.
While your SD Cassandra application may resolve its configuration primarily from XML, that does not preclude you from using a combination of Java config and XML.
For instance, you could use a Spring Java config class to configure your cluster and express your PoolingOptions in Java config...
@Configuration
@ImportResource("/class/path/to/cassandra/config.xml")
class CassandraConfig {

    @Bean
    PoolingOptions poolingOptions() {
        PoolingOptions poolingOptions = new PoolingOptions();
        poolingOptions.setHeartbeatIntervalSeconds(30);
        poolingOptions.setIdleTimeoutSeconds(300);
        poolingOptions.setMaxConnectionsPerHost(HostDistance.LOCAL, 50);
        poolingOptions.set...
        return poolingOptions;
    }

    @Bean
    CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints("..");
        cluster.setPort(1234);
        cluster.setPoolingOptions(poolingOptions());
        cluster.set...
        return cluster;
    }
}
Hope this helps.
As an FYI, you may want to upgrade to the "current" Spring Data Cassandra version, 1.4.1.RELEASE.

Sadly, the answer is no. It is not possible to configure the heartbeat interval using XML configuration. Only the following local/remote properties can be configured in PoolingOptions:
min-simultaneous-requests
max-simultaneous-requests
core-connections
max-connections
If you switch to Java-based configuration, then you're able to configure PoolingOptions by extending AbstractClusterConfiguration:
@Configuration
public class MyConfig extends AbstractClusterConfiguration {

    @Override
    protected PoolingOptions getPoolingOptions() {
        PoolingOptions poolingOptions = new PoolingOptions();
        poolingOptions.setHeartbeatIntervalSeconds(10);
        return poolingOptions;
    }
}

Related

Reload consumer properties with spring kafka

We have developed live reloading of config properties for Spring Boot applications. I have a spring-kafka consumer and I wanted to leverage the live reloading: if I change a consumer property, I should be able to restart the container without rebooting the application. I used:
KafkaListenerEndpointRegistry.stop()
KafkaListenerEndpointRegistry.start()
I thought the above actually creates a new container, but that is not the case. So I wanted to find out: if I have to start a container with new config properties, how do I do that?
@Bean
@ConfigurationProperties(prefix = "container.config.properties")
@ConditionalOnMissingBean
@RefreshScope
ContainerConfigProperties containerConfigProperties() {
    return new ContainerConfigProperties();
}

@Bean
@ConditionalOnMissingBean
@ConditionalOnBean(value = {ContainerConfigProperties.class})
@RefreshScope
<K, V> ConcurrentKafkaListenerContainerFactory<K, ValueDeserializerContainer<V>> kafkaListenerContainerFactory(
        final ConsumerFactory<K, ValueDeserializerContainer<V>> consumerFactory,
        final ContainerConfigProperties containerConfigProperties,
        final Optional<IAMIdentity> iamIdentity) {
    val factory = new ConcurrentKafkaListenerContainerFactory<K, ValueDeserializerContainer<V>>();
    factory.setBatchListener(true);
    factory.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
    factory.setConsumerFactory(consumerFactory);
    factory.getContainerProperties().setAckMode(containerConfigProperties.getAckMode());
    factory.setConcurrency(containerConfigProperties.getConcurrency());
    factory.getContainerProperties().setConsumerRebalanceListener(simpleConsumerRebalanceListener());
    // update Kafka consumer properties; the default is taken from the config file
    iamIdentity.ifPresent(identity -> consumerFactory.updateConfigs(addIAMIdentity(identity)));
    log.info("kafkaListenerContainerFactory");
    return factory;
}
Exactly which properties are you changing? The child containers are indeed recreated when stopping/starting the parent container, so any ContainerProperties changes will be picked up.
If you are talking about Kafka consumer properties, you either need to reconfigure the consumer factory or set the changed properties via ContainerProperties.kafkaConsumerProperties to override the consumer factory settings.
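For the second approach, a minimal sketch (placed at the end of the kafkaListenerContainerFactory method above; getMaxPollRecords() is a hypothetical getter on ContainerConfigProperties, shown only for illustration):
// Override selected consumer properties at the container level so that a later stop()/start()
// of the container picks up refreshed values without rebuilding the consumer factory.
// getMaxPollRecords() is a hypothetical accessor used purely for illustration.
Properties consumerOverrides = new Properties();
consumerOverrides.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG,
        String.valueOf(containerConfigProperties.getMaxPollRecords()));
factory.getContainerProperties().setKafkaConsumerProperties(consumerOverrides);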
EDIT
Something like this might work:
@Bean
@RefreshScope
Object containerReconfigurer(KafkaListenerEndpointRegistry registry) {
    registry.getListenerContainers().forEach(container -> {
        container.stop();
        // reconfigure container
        container.start();
    });
    return null;
}

Cannot connect to redis using spring and lettuce

I am struggling to find what could be wrong here; I need help.
I am using spring-data-redis 2.4.1.
RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration();
redisStandaloneConfiguration.setHostName(hostname);
redisStandaloneConfiguration.setPort(6379);
redisStandaloneConfiguration.setPassword("password");
I then create a LettuceClientConfigurationBuilder and specify the clientName.
I then use the LettuceClientConfiguration and the RedisStandaloneConfiguration to create the LettuceConnectionFactory.
However, when we call getConnection() on the connection factory, we get
WRONGPASS invalid username-password pair
The same username-password pair works with redis-cli at the command prompt.
Is there something wrong in the way I am using this in my Java application?
Any pointer/hint towards solving this would be greatly appreciated.
Spring Boot configures a LettuceConnectionFactory for you; you can specify the connection params in the application.properties file.
spring.redis.database=0
spring.redis.host=localhost
spring.redis.port=6379
spring.redis.password=yourPassword
spring.redis.timeout=60000
If you want to do it programmatically, set spring.redis.password in application.properties and try this:
@Configuration
class AppConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        return new LettuceConnectionFactory(new RedisStandaloneConfiguration("server", 6379));
    }
}
I had mistaken the username for the clientName set on LettuceClientConfigurationBuilder, but the username has to be specified on the RedisStandaloneConfiguration.
This works for me; also please note that ACL support was introduced only after lettuce 2.4.1, so any prior version will not work.
redisStandaloneConfiguration.setUsername(connectionFactoryConfigs.getUserName());
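For context, a minimal sketch of the corrected setup (host, credentials, and client name are placeholder values):
// Sketch of the corrected setup; host, credentials, and client name are placeholders.
RedisStandaloneConfiguration standaloneConfig = new RedisStandaloneConfiguration();
standaloneConfig.setHostName("redis-host");
standaloneConfig.setPort(6379);
standaloneConfig.setUsername("appUser");   // the ACL username belongs here, not on the client configuration
standaloneConfig.setPassword("password");

LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
        .clientName("my-client")           // clientName is unrelated to the ACL username
        .build();

LettuceConnectionFactory connectionFactory = new LettuceConnectionFactory(standaloneConfig, clientConfig);
connectionFactory.afterPropertiesSet();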

Unable to connect to Locator via GFSH

I have started a GemFire server and Locator via Spring Boot, and when I try to connect to the Locator from GFSH, I get the following issue:
gfsh> connect
Connecting to Locator at [host=localhost, port=10334] ..
Connection refused: connect
Below, is the Spring (Java) configuration:
@Configuration
@ComponentScan
@EnableGemfireRepositories(basePackages = "com.gemfire.demo")
@CacheServerApplication(locators = "localhost[10334]")
@EnableManager
public class GemfireConfiguration {

    @Bean
    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("name", "SpringDataGemFireApplication");
        gemfireProperties.setProperty("mcast-port", "0");
        gemfireProperties.setProperty("log-level", "info");
        return gemfireProperties;
    }

    @Bean
    @Autowired
    CacheFactoryBean gemfireCache() {
        CacheFactoryBean gemfireCache = new CacheFactoryBean();
        gemfireCache.setClose(true);
        gemfireCache.setProperties(gemfireProperties());
        return gemfireCache;
    }

    @Bean(name = "employee")
    @Autowired
    LocalRegionFactoryBean<String, Employee> getEmployee(final GemFireCache cache) {
        LocalRegionFactoryBean<String, Employee> employeeRegion = new LocalRegionFactoryBean<String, Employee>();
        employeeRegion.setCache(cache);
        employeeRegion.setClose(false);
        employeeRegion.setName("employee");
        employeeRegion.setPersistent(false);
        employeeRegion.setDataPolicy(DataPolicy.PRELOADED);
        return employeeRegion;
    }
}
Ref: Spring Data Gemfire locator
As per John's advice, I have enabled the Manager, though I am still unable to connect.
You are not able to connect to the Locator (using Gfsh) because you don't have a Locator (service, either standalone or embedded) running with just the Spring (Java) config shown above.
Note that the @CacheServerApplication(locators = "localhost[10334]") annotation, specifically with the locators attribute as you have configured it above, does NOT start an embedded Locator. It simply allows this Spring Boot configured and bootstrapped Apache Geode or Pivotal GemFire peer Cache node to join an existing distributed system (cluster) with an "existing" Locator running on localhost, listening on port 10334.
For instance, you could have started a Locator using Gfsh (e.g. start locator --name=X ...), then started your Spring Boot application with the Spring (Java) config shown above and you would see the Spring Boot app as part of the cluster formed by the Gfsh started Locator.
There is, however, a shortcut (convenience) for configuring and starting an "embedded" Locator, but to do so you need to use the @EnableLocator annotation.
Therefore, to configure and start an (embedded) Locator service in the same Spring Boot application as the CacheServer (and Manager), you must also include the @EnableLocator annotation, like so:
@SpringBootApplication
@CacheServerApplication
@EnableLocator
@EnableManager(start = true)
public class GemFireServerApplication {
    ...
}
I have plenty of examples of this here, for instance here, and talk about this generally here, etc.
As a side note, your whole configuration (class) is confused and it is clear you don't quite understand what you are doing. For instance, declaring the gemfireProperties and gemfireCache beans in JavaConfig is redundant and unnecessary since you are using the #CacheServerApplication annotation. Your whole configuration could be simplified to:
@CacheServerApplication(
    name = "SpringDataGemFireApplication",
    locators = "localhost[10334]",
    logLevel = "info"
)
@EnableLocator
@EnableManager(start = true)
@EnableGemfireRepositories(basePackages = "com.gemfire.demo")
@ComponentScan
public class GemfireConfiguration {

    @Bean(name = "employee")
    LocalRegionFactoryBean<String, Employee> getEmployee(GemFireCache cache) {
        LocalRegionFactoryBean<String, Employee> employeeRegion =
            new LocalRegionFactoryBean<String, Employee>();
        employeeRegion.setCache(cache);
        employeeRegion.setClose(false);
        employeeRegion.setName("employee");
        employeeRegion.setPersistent(false);
        employeeRegion.setDataPolicy(DataPolicy.PRELOADED);
        return employeeRegion;
    }
}
Two things:
1) First, I would be highly careful about using classpath component scanning (@ComponentScan). I am not a fan of this configuration approach, especially in production, where things should be explicit.
2) I would encourage you to consider using the type-safe basePackageClasses attribute on the @EnableGemfireRepositories annotation rather than the basePackages attribute. With basePackageClasses, you only need to refer to a single interface/class in the desired package (e.g. com.gemfire.demo) rather than every interface/class; the referenced interface/class serves as a pointer identifying the package to scan from, including all sub-packages. It is type-safe, and if the interfaces/classes in that package are relocated, the attribute is still valid after the refactoring, as shown in the sketch below.
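For example, a minimal sketch of that change, where EmployeeRepository is an assumed repository interface living in the com.gemfire.demo package:
// Type-safe alternative to basePackages; EmployeeRepository is a hypothetical
// repository interface located in the com.gemfire.demo package.
@EnableGemfireRepositories(basePackageClasses = EmployeeRepository.class)
public class GemfireConfiguration {
    // ...
}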
Anyway...
Hope this helps.
-j

Camel: use datasource configured by spring-boot

I have a project in which I'm using spring-boot-starter-jdbc, and it automatically configures a DataSource for me.
Now I added camel-spring-boot to the project and I was able to successfully create routes from beans of type RouteBuilder.
But when I use the SQL component of Camel, it cannot find the DataSource. Is there any simple way to add the Spring-configured DataSource to the CamelContext? In the Camel project samples they use Spring XML for the DataSource configuration, but I'm looking for a way with Java config. This is what I tried:
@Configuration
public class SqlRouteBuilder extends RouteBuilder {

    @Bean
    public SqlComponent sqlComponent(DataSource dataSource) {
        SqlComponent sqlComponent = new SqlComponent();
        sqlComponent.setDataSource(dataSource);
        return sqlComponent;
    }

    @Override
    public void configure() throws Exception {
        from("sql:SELECT * FROM tasks WHERE STATUS NOT LIKE 'completed'")
            .to("mock:sql");
    }
}
I am posting this because, although the answer is in the comments, you might not notice it, and in my case such a configuration was necessary to get the process running.
The use of the SQL component should look like this:
from("timer://dbQueryTimer?period=10s")
.routeId("DATABASE_QUERY_TIMER_ROUTE")
.to("sql:SELECT * FROM event_queue?dataSource=#dataSource")
.process(xchg -> {
List<Map<String, Object>> row = xchg.getIn().getBody(List.class);
row.stream()
.map((x) -> {
EventQueue eventQueue = new EventQueue();
eventQueue.setId((Long)x.get("id"));
eventQueue.setData((String)x.get("data"));
return eventQueue;
}).collect(Collectors.toList());
})
.log(LoggingLevel.INFO,"******Database query executed - body:${body}******");
Note the use of ?dataSource=#dataSource. The dataSource name points to the DataSource object configured by Spring; it can be changed to another one, so you can use different DataSources in different routes.
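For instance, a hypothetical second DataSource bean named reportingDataSource (and a report_queue table) could be referenced from another route like this:
// Hypothetical example: reference a differently named DataSource bean from the registry.
from("timer://reportQueryTimer?period=30s")
    .to("sql:SELECT * FROM report_queue?dataSource=#reportingDataSource");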
Here is the sample/example code (Java DSL). For this I used
Spring boot
H2 embedded Database
Camel
On startup, Spring Boot creates the table and loads the data. Then the Camel route runs a "select" to pull the data.
Here is the code:
public void configure() throws Exception {
    from("timer://timer1?period=1000")
        .setBody(constant("select * from Employee"))
        .to("jdbc:dataSource")
        .split().simple("${body}")
        .log("process row ${body}");
}
Full example on GitHub.

How to modify Tomcat 8 acceptCount in Spring Boot

How do I modify the default Tomcat thread count using Spring Boot?
When I used Spring MVC, I could find the Tomcat installation and modify conf/server.xml to change maxProcessors and acceptCount, but in Spring Boot I can't do that.
In org.apache.catalina.connector, I can't find the properties.
Try checking everything you can modify via properties: http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#common-application-properties
server.tomcat.max-threads = 0 # number of threads in protocol handler
Otherwise you will have to get your hands dirty with programmatic configuration - http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-tomcat - by providing your own TomcatEmbeddedServletContainerFactory.
acceptCount is not supported in the properties file, so you can use the following code to modify it:
@Bean
public TomcatEmbeddedServletContainerFactory tomcatEmbeddedServletContainerFactory() {
    TomcatEmbeddedServletContainerFactory tomcatFactory = new TomcatEmbeddedServletContainerFactory();
    tomcatFactory.addConnectorCustomizers(new TomcatConnectorCustomizer() {
        @Override
        public void customize(Connector connector) {
            // Tomcat's default NIO connector
            Http11NioProtocol handler = (Http11NioProtocol) connector.getProtocolHandler();
            // acceptCount is the backlog; the default value is 100, change it to whatever value you want here
            handler.setBacklog(100);
        }
    });
    return tomcatFactory;
}
In current Spring Boot it should be possible through the server.tomcat.accept-count application property; see: https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#server-properties
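For example, in application.properties (the value shown is only an illustration):
# Maximum queue length for incoming connection requests when all possible request processing threads are in use
server.tomcat.accept-count=200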
