Whenever I restart the Hazelcast server without restarting the client in my Spring Boot application, I get the following error:
03-01-2018 16:44:17.966 [http-nio-8080-exec-7] ERROR o.a.c.c.C.[.[.[.[dispatcherServlet].log - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.hazelcast.client.HazelcastClientNotActiveException: Partition does not have owner. partitionId : 203] with root cause
java.io.IOException: Partition does not have owner. partitionId : 203
at com.hazelcast.client.spi.impl.ClientSmartInvocationServiceImpl.invokeOnPartitionOwner(ClientSmartInvocationServiceImpl.java:43)
at com.hazelcast.client.spi.impl.ClientInvocation.invokeOnSelection(ClientInvocation.java:142)
at com.hazelcast.client.spi.impl.ClientInvocation.invoke(ClientInvocation.java:122)
at com.hazelcast.client.spi.ClientProxy.invokeOnPartition(ClientProxy.java:152)
at com.hazelcast.client.spi.ClientProxy.invoke(ClientProxy.java:147)
at com.hazelcast.client.proxy.ClientMapProxy.getInternal(ClientMapProxy.java:245)
at com.hazelcast.client.proxy.ClientMapProxy.get(ClientMapProxy.java:240)
at com.hazelcast.spring.cache.HazelcastCache.lookup(HazelcastCache.java:139)
at com.hazelcast.spring.cache.HazelcastCache.get(HazelcastCache.java:57)
at org.springframework.cache.interceptor.AbstractCacheInvoker.doGet(AbstractCacheInvoker.java:71)
If I enable hot-restart, the issue is solved. But is there a way to resume the client application without restarting it while hot-restart is disabled?
The Hazelcast client tries to reconnect to the cluster if the connection drops. It uses the ClientNetworkConfig.connectionAttemptLimit and ClientNetworkConfig.connectionAttemptPeriod elements to configure how it retries: connectionAttemptLimit defines the number of attempts after a disconnection, and connectionAttemptPeriod defines the period between two retries in milliseconds. Please see the usage example below:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setConnectionAttemptLimit(5);
clientConfig.getNetworkConfig().setConnectionAttemptPeriod(5000);
Starting with Hazelcast 3.9, you can use the reconnect-mode property to configure how the client reconnects to the cluster after it disconnects. It has three options:
OFF disables reconnection.
ON enables reconnection in a blocking manner: all waiting invocations are blocked until a cluster connection is established or fails.
ASYNC enables reconnection in a non-blocking manner: all waiting invocations receive a HazelcastClientOfflineException.
The default value is ON. You can see a configuration example below:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getConnectionStrategyConfig()
.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ON);
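If you prefer the non-blocking behaviour, a minimal sketch of the ASYNC variant could look like the following (the map name and key are placeholders, and the catch block simply illustrates the fail-fast behaviour described above):
ClientConfig asyncConfig = new ClientConfig();
asyncConfig.getConnectionStrategyConfig()
        .setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);
HazelcastInstance client = HazelcastClient.newHazelcastClient(asyncConfig);
try {
    Object value = client.getMap("myCache").get("someKey"); // placeholder map and key
} catch (HazelcastClientOfflineException e) {
    // cluster temporarily unreachable while reconnecting asynchronously; fall back or skip the cache
}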
By using these configuration elements, you can resume your client without restarting it.
I am using Azure Service Bus (ASB) to enqueue/dequeue messages between components. Component A consumes messages from queue A and produces messages to queue B: whenever it consumes a message, it produces one. Both queues are on the same Azure Service Bus, just with different queue names (A and B).
My problem is that once the component has been idle for more than about 10~15 minutes and then tries to consume/produce, it throws
javax.jms.IllegalStateException: The MessageProducer was closed due to an unrecoverable error.
and
Caused by: javax.jms.JMSException: The link 'G5S1:40611071:qpid-jms:sender:ID:bbc3fc62-4377-4aeb-bb80-117d74e780de:1:47:1:queueB' is force detached. Code: publisher(link578). Details: AmqpMessagePublisher.IdleTimerExpired: Idle timeout: 00:10:00. [condition = amqp:link:detach-forced]
From the stack trace and the observed behavior, the problem occurs when it tries to produce a message to queueB. For both the consumer and the producer I am using the same cachingConnectionFactory() bean in the MessageGateway beans (not sure whether that matters).
My guess is that when it consumes, it re-establishes the connection for queueA, and when it then tries to re-establish the connection for queueB, something goes wrong.
Does anyone have any idea? If you need more information, please let me know.
Did you try disabling producer caching? Something similar to:
CachingConnectionFactory connectionFactory = (CachingConnectionFactory) jmsTemplate.getConnectionFactory();
connectionFactory.setCacheProducers(false);
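If you construct the CachingConnectionFactory yourself, producer caching can also be switched off where the bean is defined, so stale producers are never reused after the broker's idle timeout. A minimal sketch, assuming a targetConnectionFactory bean for the underlying Qpid JMS connection factory (the names here are illustrative, not taken from your code):
@Bean
public CachingConnectionFactory cachingConnectionFactory(ConnectionFactory targetConnectionFactory) {
    CachingConnectionFactory factory = new CachingConnectionFactory(targetConnectionFactory);
    factory.setCacheProducers(false); // do not reuse MessageProducers that the broker may force-detach
    return factory;
}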
I have a remote Java client which looks up a JMS connection factory on WildFly 10, and everything works as expected. It is just a test program: a simple JMS chat system. When I start more than one instance of the chat client, the following message sometimes appears:
WARN: AMQ212051: Invalid concurrent session usage. Sessions are not supposed to be used by more than one thread concurrently.
Followed by a trace.
Now I want to fix this warning, and for that I apparently need a pooled connection factory. But the pooled connection factory isn't available remotely (and from what I have read, it shouldn't be available remotely).
What can I do to fix this warning when I want to start multiple JMS chat clients locally?
I know that the error won't appear when I just use different machines.
This is the working non-pooled remote code (it works, but with the warning):
final Properties properties = new Properties();
properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
properties.put(Context.PROVIDER_URL, "http-remoting://127.0.0.1:8080");

try {
    context = new InitialContext(properties);
    ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
    jmsContext = connectionFactory.createContext("quickstartUser", "quickstartPwd1!");
} catch (NamingException e) {
    e.printStackTrace();
}
The problem isn't caused by not using a pooled connection factory, and it won't be solved by using one. The problem is that your application uses the same JMS session concurrently from multiple threads (as the WARN message indicates). The stack trace that is logged will show you which class and method trigger the WARN message.
You need to ensure that your application does not use the same JMS session concurrently between multiple threads. You can do this by giving each thread its own JMS session or by setting up concurrency controls around the session so that only one thread at a time can access it.
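As a rough illustration of the first option, each chat client thread can create (and close) its own JMSContext from the shared ConnectionFactory instead of reusing one context across threads. This is only a sketch; the queue name is a placeholder:
Runnable chatWorker = () -> {
    // each thread gets its own JMSContext, and therefore its own underlying session
    try (JMSContext threadContext = connectionFactory.createContext("quickstartUser", "quickstartPwd1!")) {
        JMSProducer producer = threadContext.createProducer();
        producer.send(threadContext.createQueue("jms/queue/chat"), "hello"); // placeholder queue name
    }
};
new Thread(chatWorker).start();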
We are using Redis from a Spring Boot app and we are getting a flood of alerts like the one below:
Exception occurred while querying cache : class org.springframework.data.redis.RedisConnectionFailureException Message: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the poolCause: redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the poolMessage:Could not get a resource from the pool",
Is it because there are no connections available on the Redis server, or is there some other reason?
How do I find the maximum number of connections available? How do I find how many are free?
Could not get a resource from the pool
You have run out of connections in the Jedis pool on the client side. Possible fixes:
Return connections to the pool properly (pool.returnResource()) if you are not already doing so; don't hold them when they are not needed, and don't disconnect repeatedly. Make sure your commons-pool version is at least 1.6. (See the sketch after this list.)
Increase pool size.
JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxTotal(...);
Increase the time to wait when there are no connections available.
poolConfig.setBlockWhenExhausted(true); // block instead of failing when the pool is exhausted (commons-pool2 naming)
poolConfig.setMaxWaitMillis(...);       // how long to block before giving up
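Putting the first two fixes together, here is a minimal sketch assuming Jedis 2.x or later, where close() returns a pooled connection to the pool (host, port, pool size and key are placeholders):
JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setMaxTotal(32);              // placeholder pool size
poolConfig.setBlockWhenExhausted(true);  // wait for a free connection instead of failing immediately
JedisPool pool = new JedisPool(poolConfig, "localhost", 6379);
try (Jedis jedis = pool.getResource()) {  // borrow a connection from the pool
    String value = jedis.get("some-key"); // placeholder key
} // try-with-resources returns the connection to the pool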
Update:
For server-side limitations see here: https://stackoverflow.com/a/51539038/78569
Is there a programmatic (properties-based) way of disabling RabbitAutoConfiguration in Spring Boot (1.2.2)?
Looks like spring.rabbitmq.dynamic=false disables just the AmqpAdmin but not the connection factory etc.
We want a model where application properties might be sourced from Spring Cloud Config (which includes the control bus) or via -D JVM args; this decision is made at deployment time.
When properties are sourced from -D JVM args, we disable the Spring Cloud Config client, but Rabbit keeps throwing exceptions such as:
[org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer] - [Consumer raised exception, processing can restart if the connection factory supports it. Exception summary: org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused: connect]
First you need to exclude RabbitAutoConfiguration from your app:
@EnableAutoConfiguration(exclude = RabbitAutoConfiguration.class)
Then you can import it conditionally, based on some property, like this:
@Configuration
@ConditionalOnProperty(name = "myproperty", havingValue = "valuetocheck", matchIfMissing = false)
@Import(RabbitAutoConfiguration.class)
class RabbitOnConditionalConfiguration {
}
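With this in place, RabbitAutoConfiguration is only imported when the property matches: using the illustrative property name above, starting the app with -Dmyproperty=valuetocheck enables Rabbit, while omitting it leaves Rabbit unconfigured (assuming the application declares no Rabbit beans of its own), so the connection-refused exceptions from the listener container stop appearing.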
I have a problem that I don't know how to solve, and researching on the net has not helped me much. In the GlassFish 4.0 asadmin console I declare a serializable connection pool and its corresponding resource:
create-jdbc-connection-pool --datasourceclassname oracle.jdbc.xa.client.OracleXADataSource --maxpoolsize 8 --isolationlevel serializable --restype javax.sql.XADataSource --property Password=A_DB:User=A_DB:URL="jdbc\:oracle\:thin\:@localhost\:1521\:orcl" ATestPool
create-jdbc-resource --connectionpoolid ATestPool jdbc/ATest
Then, inside a stateless bean, I look up the data source via JNDI as follows:
InitialContext ic = new InitialContext();
jndiDataSource = (DataSource) ic.lookup("jdbc/ATest");
and I obtain connections as follows:
jndiDataSource.getConnection();
Connections are properly obtained and released via finally clauses in each method where they are needed.
However, pairing a serializable connection pool with an XA data source does not seem to work: getting the first connection throws the following pair of exceptions, in the order shown below:
JTS5041: The resource manager is doing work outside a global transaction
oracle.jdbc.xa.OracleXAException
at oracle.jdbc.xa.OracleXAResource.checkError(OracleXAResource.java:1110)
RAR5029:Unexpected exception while registering component
javax.transaction.SystemException
at com.sun.jts.jta.TransactionImpl.enlistResource(TransactionImpl.java:224)
with the following
RAR7132: Unable to enlist the resource in transaction. Returned resource to pool. Pool name: [ ATestPool ]]]
RAR5117 : Failed to obtain/create connection from connection pool [ ATestPool ]. Reason : com.sun.appserv.connectors.internal.api.PoolingException: javax.transaction.SystemException]]
RAR5114 : Error allocating connection : [Error in allocating a connection. Cause: javax.transaction.SystemException]]].
Now, if the connection pool is recreated without --isolationlevel serializable, the application works fine without any changes to the code. Also, if one keeps the isolation parameter and uses a non-XA data source, as in
--datasourceclassname oracle.jdbc.pool.OracleDataSource
--restype javax.sql.DataSource
then again the application works without any changes to the code.
I was wondering if anyone could explain what could be wrong in the above setup and how to actually make serializable isolation work with XA data sources. Thanks.
I think you need to enable useNativeXA.
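If that helps, the property can usually be added to the existing pool via GlassFish's dotted-name syntax, for example asadmin set resources.jdbc-connection-pool.ATestPool.property.useNativeXA=true, or appended to the --property list when recreating the pool; the property name here is taken from the answer above, so please verify the exact spelling and value against the Oracle driver documentation.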