How to configure Hazelcast host:port in Spring Boot?

I'm using Hazelcast in my project and want to move the Hazelcast host:port information into environment variables. Before that I had the default configuration, which is:
<hazelcast-client xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://www.hazelcast.com/schema/client-config
                      http://www.hazelcast.com/schema/client-config/hazelcast-client-config-3.8.xsd"
                  xmlns="http://www.hazelcast.com/schema/client-config">
    <network>
        <connection-timeout>3000</connection-timeout>
        <connection-attempt-period>1000</connection-attempt-period>
        <connection-attempt-limit>259200</connection-attempt-limit>
    </network>
</hazelcast-client>
and I've found that it is possible to add a <cluster-members> tag inside <network> to provide a custom <address> for the Hazelcast instances. I've modified my hazelcast.xml file to:
<network>
    <cluster-members>
        <address>${HAZELCAST_URL}</address>
    </cluster-members>
    ...
But whenever I start my app it shows:
2017-11-10 17:55:45 [service,,,] WARN c.h.c.s.i.ClusterListenerSupport hz.client_0 [dev] [3.8.5] Exception during initial connection to ${HAZELCAST_URL}:5701, exception java.lang.IllegalArgumentException: Can't resolve address: ${HAZELCAST_URL}:5701
2017-11-10 17:55:45 [service,,,] WARN c.h.c.s.i.ClusterListenerSupport hz.client_0 [dev] [3.8.5] Exception during initial connection to ${HAZELCAST_URL}:5702, exception java.lang.IllegalArgumentException: Can't resolve address: ${HAZELCAST_URL}:5702
So it still tries the default ports and the variable is not resolved. Is there a way to configure this?

You can pass java.util.Properties into the client config builder. All you need to do is build it from Spring's environment.
@Bean
public ClientConfig clientConfig(Environment environment) throws Exception {
    Properties properties = new Properties();
    String HAZELCAST_URL = "HAZELCAST_URL";
    properties.put(HAZELCAST_URL, environment.getProperty(HAZELCAST_URL));
    XmlClientConfigBuilder xmlClientConfigBuilder = new XmlClientConfigBuilder("hazelcast-client.xml");
    xmlClientConfigBuilder.setProperties(properties);
    return xmlClientConfigBuilder.build();
}
@Bean
public HazelcastInstance hazelcastInstance(ClientConfig clientConfig) {
    return HazelcastClient.newHazelcastClient(clientConfig);
}
Note, there are more elegant ways to do this; the above is just one solution, keeping it as simple as possible.
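For example, one such alternative is to skip the XML placeholder entirely and build the client's network settings programmatically. This is only a sketch; it assumes HAZELCAST_URL holds a host or host:port value and reuses the timeouts from the question's XML:
@Bean
public ClientConfig clientConfig(Environment environment) {
    ClientConfig clientConfig = new ClientConfig();
    clientConfig.getNetworkConfig()
            .addAddress(environment.getProperty("HAZELCAST_URL", "localhost:5701")) // fallback address is illustrative
            .setConnectionTimeout(3000)
            .setConnectionAttemptPeriod(1000)
            .setConnectionAttemptLimit(259200);
    return clientConfig;
}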

Related

Spring Boot @RequestScope and Hibernate schema-based multi-tenancy

I'm working on a schema-based multi-tenant app in which I want to resolve the tenant identifier using a @RequestScope bean. My understanding is that @RequestScope injects proxies for the request-scoped beans wherever they are referenced (e.g. in other singleton beans). However, this is not working in the @Component that implements CurrentTenantIdentifierResolver, and I get the following error when I start my service:
Caused by: org.springframework.beans.factory.support.ScopeNotActiveException: Error creating bean with name 'scopedTarget.userContext': Scope 'request' is not active for the current thread;
Caused by: java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet: In this case, use RequestContextListener or RequestContextFilter to expose the current request.
Following are the relevant pieces of code.
@Component
public class CurrentTenant implements CurrentTenantIdentifierResolver {
    @Autowired
    private UserContext userContext;

    @Override
    public String resolveCurrentTenantIdentifier() {
        return Optional.of(userContext)
                .map(u -> u.getDomain())
                .get();
    }
}
@Component
@RequestScope
public class UserContext {
    private UUID id;
    private String domain;
    // getters/setters omitted
}
My questions:
Isn't the proxy for the @RequestScope bean injected (by default)? Do I need to do anything more?
Is Hibernate/Spring trying to establish a connection to the DB at startup (even when there is no tenant available)?
Hibernate properties:
HashMap<String, Object> properties = new HashMap<>();
properties.put("hibernate.dialect", env.getProperty("hibernate.dialect"));
properties.remove(AvailableSettings.DEFAULT_SCHEMA);
properties.put(AvailableSettings.MULTI_TENANT, MultiTenancyStrategy.SCHEMA);
properties.put(AvailableSettings.MULTI_TENANT_IDENTIFIER_RESOLVER, tenantResolver);
properties.put(AvailableSettings.MULTI_TENANT_CONNECTION_PROVIDER, connectionProvider);
For the time being, I'm preventing the NullPointerException by checking whether we are in the RequestContext. However, a connection still gets established to the master database (although I've explicitly specified the dialect and am not specifying hbm2ddl.auto). Since this connection is not associated with any schema, I'd like to avoid making it, so that it does not look for tables that it won't find anyway.
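For reference, a minimal sketch of such a guard in the resolver, assuming a fallback schema name of "public" (that name is not from the question):
@Override
public String resolveCurrentTenantIdentifier() {
    // Outside an actual web request (e.g. during startup) the request-scoped bean is unavailable,
    // so fall back to a default schema instead of failing with ScopeNotActiveException.
    if (RequestContextHolder.getRequestAttributes() == null) {
        return "public"; // hypothetical default/master schema
    }
    return Optional.of(userContext)
            .map(UserContext::getDomain)
            .orElse("public");
}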
What seems to be happening is that when an HTTP request is received, Hibernate tries to resolve the current tenant identifier even before my @RequestScope bean is created (and even before my @RestController method is called). If I provide the default connection to the database, I then get the following error. If I don't provide a connection, it throws an exception and aborts.
2021-09-26 11:55:44.882 WARN 19759 --- [nio-8082-exec-2] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 0, SQLState: 42P01
2021-09-26 11:55:44.882 ERROR 19759 --- [nio-8082-exec-2] o.h.engine.jdbc.spi.SqlExceptionHelper : ERROR: relation "employees" does not exist
Position: 301
2021-09-26 11:55:44.884 ERROR 19759 --- [nio-8082-exec-2] o.t.n.controller.EmployeeController : Exception: could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not extract ResultSet

Spring RabbitMQ AMQP connection factory - how to override connection properties

I am trying to use SimpleMessageListenerContainer and the default connection factory. I don't want to configure it through Spring's default spring.rabbitmq.* properties; I would like to set the connection properties at run time, when the connection factory is injected. But my container tries to connect to localhost. Any help is much appreciated.
My code example is like this:
@Bean
public SimpleMessageListenerContainer queueListenerContainer(AbstractConnectionFactory connectionFactory,
                                                             MessageListenerAdapter listenerAdapter) {
    connectionFactory.setHost(Arrays.toString(rabbitMqConfig.getSubscriberHosts()));
    connectionFactory.setVirtualHost("hydra.services");
    connectionFactory.setPort(rabbitMqConfig.getSubscriberPort());
    connectionFactory.setUsername(rabbitMqConfig.getSubscriberUsername());
    connectionFactory.setPassword(rabbitMqConfig.getSubscriberPassword());
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setQueueNames(rabbitMqConfig.getSubscriberQueueName());
    container.setConnectionFactory(connectionFactory);
    // container.setQueueNames("SampleQueue"); /* This is just for testing! */
    container.setMessageListener(listenerAdapter);
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    container.setDeclarationRetries(5); // defaults to 3; we can tweak this and move it to a property
    container.setPrefetchCount(100); // tell the broker how many messages to send to each consumer in a single request
    return container;
}
But the container still tries to connect to localhost.
Logs:
2019-03-17 09:35:43,335 INFO [main] org.springframework.amqp.rabbit.connection.AbstractConnectionFactory: Attempting to connect to: [localhost:5672]
2019-03-17 09:35:45,499 INFO [main] org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer: Broker not available; cannot force queue declarations during start
2019-03-17 09:35:45,778 INFO [queueListenerContainer-1] org.springframework.amqp.rabbit.connection.AbstractConnectionFactory: Attempting to connect to: [localhost:5672]
2019-03-17 09:35:48,365 INFO [main] org.springframework.boot.StartupInfoLogger: Started DftpEppScrubberApplication in 162.706 seconds (JVM running for 164.364)
Edit 2
This way I am getting an UnknownHostException. I initially thought it was a firewall issue, but I checked the connection and everything seems to be right. I am not sure what the problem is here!
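One likely culprit is the Arrays.toString(...) call in the bean above: it renders the host array as a string such as "[host1, host2]", which is not a resolvable hostname and would explain the UnknownHostException. A hedged sketch of an alternative is to declare the connection factory bean yourself rather than mutating the auto-configured one; the rabbitMqConfig getters are taken from the question, and using only the first subscriber host is an assumption:
@Bean
public CachingConnectionFactory connectionFactory() {
    // Assumes a single subscriber host; adjust if getSubscriberHosts() returns several.
    CachingConnectionFactory connectionFactory =
            new CachingConnectionFactory(rabbitMqConfig.getSubscriberHosts()[0],
                                         rabbitMqConfig.getSubscriberPort());
    connectionFactory.setVirtualHost("hydra.services");
    connectionFactory.setUsername(rabbitMqConfig.getSubscriberUsername());
    connectionFactory.setPassword(rabbitMqConfig.getSubscriberPassword());
    return connectionFactory;
}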

How to stop the spring-boot-cassandra default config from loading the Cassandra connection instance

I have added the Cassandra starter dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-cassandra</artifactId>
    <version>2.0.0.RELEASE</version>
</dependency>
but the default configuration does not work for me:
org.springframework.beans.factory.BeanCreationException:
Error creating bean with name 'cassandraSession'
defined in class path resource
[org/springframework/boot/autoconfigure/data/cassandra/
CassandraDataAutoConfiguration.class]:
Invocation of init method failed; nested exception is
com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed (tried: localhost/0:0:0:0:0:0:0:1:9042
(com.datastax.driver.core.exceptions.TransportException:
[localhost/0:0:0:0:0:0:0:1:9042] Cannot connect), localhost/127.0.0.1:9042
(com.datastax.driver.core.exceptions.TransportException:
[localhost/127.0.0.1:9042] Cannot connect))
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed (tried: localhost/0:0:0:0:0:0:0:1:9042
(com.datastax.driver.core.exceptions.TransportException:
[localhost/0:0:0:0:0:0:0:1:9042]
Cannot connect), localhost/127.0.0.1:9042
(com.datastax.driver.core.exceptions.TransportException:
[localhost/127.0.0.1:9042] Cannot connect))
I would like the Spring application not to load the Cassandra connection instance (like cassandraSession) when I haven't configured 'spring.data.cassandra.*'. How can I do that?
You need to exclude CassandraDataAutoConfiguration to disable Spring Boot's Cassandra auto-configuration, e.g.:
@SpringBootApplication
@EnableAutoConfiguration(exclude = { CassandraDataAutoConfiguration.class })
public class Application {
}
Then define your own Cassandra configuration, e.g.:
@Configuration
@EnableReactiveCassandraRepositories
public class CassandraConfig extends AbstractReactiveCassandraConfiguration {
}
I ended up defining my own custom cluster bean:
@Configuration
@EnableReactiveCassandraRepositories
public class CassandraConfig extends AbstractReactiveCassandraConfiguration {
    // read contact points from config
    @Value("${spring.data.cassandra.contact-points}")
    private String contactPoints;

    @Override
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean bean = super.cluster();
        bean.setContactPoints(contactPoints);
        return bean;
    }
}
https://github.com/spring-projects/spring-data-cassandra/blob/f3115017d4a04e105d4046f6fd716ac308ecd7aa/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/config/AbstractClusterConfiguration.java#L88
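The contact points read by that @Value placeholder can then come from ordinary Spring configuration; for example (the host names are illustrative only):
# application.properties (illustrative values)
spring.data.cassandra.contact-points=cassandra-node-1,cassandra-node-2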

Spring Boot with Embedded Mongo : Cannot assign requested address: JVM_Bind

I am trying to set up a JUnit test for a Spring Boot app with embedded Mongo and Kafka:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE,
        classes = {AccountingApplication.class})
@DataMongoTest
public class BaseEmbeddedTest {

    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true);

    @Autowired
    private MongoTemplate mongoTemplate;

    @Test
    public void emptyTest() {
    }
}
src/test/resources/application.yml:
spring:
  data:
    mongodb:
      port: 0
  kafka:
    bootstrap-servers: ${spring.embedded.kafka.brokers}
PROBLEM
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [de.flapdoodle.embed.mongo.config.IMongodConfig]: Factory method 'embeddedMongoConfiguration' threw exception; nested exception is java.net.BindException: Cannot assign requested address: JVM_Bind
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:189)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
... 140 more
Caused by: java.net.BindException: Cannot assign requested address: JVM_Bind
at java.net.DualStackPlainSocketImpl.bind0(Native Method)
at java.net.DualStackPlainSocketImpl.socketBind(DualStackPlainSocketImpl.java:106)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:190)
at java.net.ServerSocket.bind(ServerSocket.java:375)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at de.flapdoodle.embed.process.runtime.Network.getFreeServerPort(Network.java:80)
at org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongoAutoConfiguration.embeddedMongoConfiguration(EmbeddedMongoAutoConfiguration.java:147)
What am I doing wrong here?
Versions:
dependencyManagementPluginVersion = '1.0.3.RELEASE'
springBootVersion = '1.5.6.RELEASE'
springCloudVersion = 'Dalston.SR2'
projectVersion = '0.0.1-SNAPSHOT'
javaVersion = 1.8
kotlinVersion = '1.1.4'
This annotation, @DataMongoTest, causes Spring Boot to create an embedded Mongo instance. The exception message tells us that the embedded Mongo instance cannot start because there is already a process running on the port it is trying to use.
The embedded Mongo instance is configured by EmbeddedMongoAutoConfiguration and the strategy applied by Spring Boot - for port allocation - is as follows:
if configured Mongo port > 0 then
    use the configured port
else
    assign a random port
end
So, I suspect that your test context is configured with a non-zero value for spring.data.mongodb.port. I know you posted your application.yml, which implies that you are - correctly - assigning a zero value to spring.data.mongodb.port, but if you put a breakpoint inside the EmbeddedMongoAutoConfiguration constructor and peek inside the properties parameter, I think you'll see that the actual value in use by that configuration class is not zero. If the port value passed to EmbeddedMongoAutoConfiguration is actually zero but you are still getting the JVM_Bind error, then that implies that the call Network.getFreeServerPort(this.getHost()) is not returning a free port, and that seems unlikely.
To fix this issue: as long as you configure your test context with spring.data.mongodb.port=0, the embedded Mongo instance will be assigned a random port, and this random port will be made known to the other parts of your Spring context (such as your MongoTemplate) that need to talk to that Mongo instance.
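One way to make sure the test context really sees spring.data.mongodb.port=0, regardless of which application.yml wins, is to set it directly on the test annotation. A sketch based on the test class from the question:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE,
        classes = {AccountingApplication.class},
        properties = {"spring.data.mongodb.port=0"}) // force a random port for embedded Mongo
@DataMongoTest
public class BaseEmbeddedTest {
}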

Hazelcast in Spring Boot ignoring my NetworkConfig

I'm using Hazelcast as my main data store, backed by JPA to a database. I'm trying to get it not to use multicast to find other instances in our development environment, since we're working on different classes, etc., and pointing at our own databases, but Hazelcast is still connecting up. I know it's calling my HazelcastConfiguration class, but it's also using the hazelcast-defaults.xml in the jar file and creating a cluster.
@Bean(name = "hazelcastInstance")
public HazelcastInstance getHazelcastInstance(Config config) {
    return new HazelcastInstanceFactory(config).getHazelcastInstance();
}

@Bean(name = "hazelCastConfig")
public Config config() {
    MapConfig userMapConfig = buildUserMapConfig();
    ...
    Config config = new Config();
    config.setNetworkConfig(buildNetworkConfig());
    return config;
}

private NetworkConfig buildNetworkConfig() {
    NetworkConfig networkConfig = new NetworkConfig();
    JoinConfig join = new JoinConfig();
    MulticastConfig multicastConfig = new MulticastConfig();
    multicastConfig.setEnabled(false);
    join.setMulticastConfig(multicastConfig);
    TcpIpConfig tcpIpConfig = new TcpIpConfig();
    tcpIpConfig.setEnabled(false);
    join.setTcpIpConfig(tcpIpConfig);
    networkConfig.setJoin(join);
    return networkConfig;
}
Now I can see that these are being called, and it has to be using my configuration because the entities get persisted to my database, but I also get this at startup:
2017-06-20 14:41:24.311 INFO 3741 --- [ main] c.h.i.cluster.impl.MulticastJoiner : [10.10.0.125]:5702 [dev] [3.8.1] Trying to join to discovered node: [10.10.0.127]:5702
2017-06-20 14:41:34.870 INFO 3741 --- [ached.thread-14] c.hazelcast.nio.tcp.InitConnectionTask : [10.10.0.125]:5702 [dev] [3.8.1] Connecting to /10.10.0.127:5702, timeout: 0, bind-any: true
2017-06-20 14:41:34.972 INFO 3741 --- [ached.thread-14] c.h.nio.tcp.TcpIpConnectionManager : [10.10.0.125]:5702 [dev] [3.8.1] Established socket connection between /10.10.0.125:54917 and /10.10.0.127:5702
2017-06-20 14:41:41.181 INFO 3741 --- [thread-Acceptor] c.h.nio.tcp.SocketAcceptorThread : [10.10.0.125]:5702 [dev] [3.8.1] Accepting socket connection from /10.10.0.146:60449
2017-06-20 14:41:41.183 INFO 3741 --- [ached.thread-21] c.h.nio.tcp.TcpIpConnectionManager : [10.10.0.125]:5702 [dev] [3.8.1] Established socket connection between /10.10.0.125:5702 and /10.10.0.146:60449
2017-06-20 14:41:41.185 INFO 3741 --- [ration.thread-0] com.hazelcast.system : [10.10.0.125]:5702 [dev] [3.8.1] Cluster version set to 3.8
2017-06-20 14:41:41.187 INFO 3741 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [10.10.0.125]:5702 [dev] [3.8.1]
Members [3] {
Member [10.10.0.127]:5702 - e02dd47f-7bac-42d6-abf9-eeb62bdb1884
Member [10.10.0.146]:5702 - 9239d33e-3b60-4bf5-ad81-da14524197ca
Member [10.10.0.125]:5702 - 847d0008-6540-438d-bea6-7d8b19b8141a this
}
Anyone got ideas?
An easy way to configure a TCP/IP cluster is to use a hazelcast.xml config file:
<hazelcast>
    ...
    <network>
        <port auto-increment="true">5701</port> <!-- check if this is valid for the use case -->
        <join>
            <multicast enabled="false">
            </multicast>
            <tcp-ip enabled="true">
                <hostname>machine1</hostname>
                <hostname>machine2</hostname>
                <hostname>machine3:5799</hostname>
                <interface>192.168.1.0-7</interface> <!-- set values as per your env -->
                <interface>192.168.1.21</interface>
            </tcp-ip>
        </join>
        ...
    </network>
    ...
</hazelcast>
As the configuration above shows, while the enabled attribute of multicast is set to false, tcp-ip has to be set to true. For the non-multicast option, all or a subset of the cluster members' hostnames and/or IP addresses must be listed. Note that not all of the cluster members have to be listed there, but at least one of them has to be active in the cluster when a new member joins. The tcp-ip tag also accepts an attribute called conn-timeout-seconds; the default value is 5. Increasing this value is recommended if you have many IPs listed and members cannot properly build up the cluster.
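For comparison, a hedged Java equivalent of the join settings in the XML above, written in the style of the question's buildNetworkConfig() (the member addresses are placeholders):
private NetworkConfig buildNetworkConfig() {
    NetworkConfig networkConfig = new NetworkConfig();
    JoinConfig join = networkConfig.getJoin();
    join.getMulticastConfig().setEnabled(false);  // multicast off
    join.getTcpIpConfig()
            .setEnabled(true)                     // tcp-ip must be enabled explicitly
            .addMember("machine1")
            .addMember("machine2:5799");
    return networkConfig;
}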
Loading Configuration File
Add a hazelcast.xml file to the src/main/resources folder and Spring Boot will auto-configure Hazelcast for you. You can optionally configure the location of the hazelcast.xml file in your properties or YAML file using the spring.hazelcast.config configuration property.
# application.yml
spring:
  hazelcast:
    config: classpath:[path To]/hazelcast.xml

# application.properties
spring.hazelcast.config=classpath:[path To]/hazelcast.xml
The problem I was having was with Apache Camel and its HazelcastComponent, which doesn't automatically pick up your Hazelcast instance. When I configured the HazelcastComponent like this, it started using the correct HazelcastInstance:
@Bean(name = "hazelcast")
HazelcastComponent hazelcastComponent(HazelcastInstance hazelcastInstance) {
    HazelcastComponent hazelcastComponent = new HazelcastComponent();
    hazelcastComponent.setHazelcastInstance(hazelcastInstance);
    return hazelcastComponent;
}
