Apache Ignite Limiting Connection to Persistent Store - spring

Apache Ignite is currently creating multiple connections to the persistent store (MongoDB). Is there a way to configure the caches to reuse a single Mongo connection instead of spawning new ones?
I have looked through the documentation and APIs but could not find a way to configure how connections to the store are made.
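For illustration, a minimal sketch of the setup in question, assuming a custom CacheStore that reuses a single shared MongoClient (the SharedMongo and PersonStore class names, database, and collection are placeholders):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.ReplaceOptions;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.bson.Document;

import javax.cache.Cache;

// Single MongoClient held in one place; the driver manages its own pool internally.
final class SharedMongo {
    static final MongoClient CLIENT = MongoClients.create("mongodb://localhost:27017");
}

public class PersonStore extends CacheStoreAdapter<Long, String> {
    private final MongoCollection<Document> col =
            SharedMongo.CLIENT.getDatabase("appdb").getCollection("persons");

    @Override public String load(Long key) {
        // Read-through: look the entry up in Mongo on a cache miss.
        Document d = col.find(new Document("_id", key)).first();
        return d == null ? null : d.getString("value");
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
        // Write-through: upsert the entry into Mongo.
        col.replaceOne(new Document("_id", e.getKey()),
                new Document("_id", e.getKey()).append("value", e.getValue()),
                new ReplaceOptions().upsert(true));
    }

    @Override public void delete(Object key) {
        col.deleteOne(new Document("_id", key));
    }
}
```

Note that the MongoDB Java driver pools connections internally per MongoClient instance, so sharing one client across stores is usually the main lever for limiting the overall connection count.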

Related

Access to Redis Cluster via Single Endpoint

I have an application using Redis. The system is implemented in Java with Spring and uses the Jedis package to connect to Redis, configured as follows:
jedis.pool.host=redisServer-IP
So the application connects to the Redis server at redisServer-IP and works fine. However, because of memory limits on a single server and the need for HA, I have to use a Redis cluster, which I set up with docker-compose following the instructions linked here.
The Redis cluster is working fine with three masters and three replicas.
I just need to understand: can the Redis Cluster work with a single endpoint, since I can only set one endpoint in the jedis.pool.host configuration above, or do I need a proxy in front of the cluster?
NOTE: I cannot make any changes to my application.
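For context, a minimal sketch of the two client setups involved (hosts and ports are placeholders): a plain JedisPool talks to a single standalone node, while JedisCluster can be seeded from one endpoint and discovers the rest of the topology itself.

```java
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPool;

public class RedisClients {
    public static void main(String[] args) {
        // Current setup: a plain pool against one standalone server (jedis.pool.host).
        try (JedisPool pool = new JedisPool("redisServer-IP", 6379);
             Jedis jedis = pool.getResource()) {
            jedis.set("k", "v");
        }

        // Cluster-aware client: seeded with a single endpoint, it discovers
        // the remaining masters and replicas from the cluster topology.
        try (JedisCluster cluster = new JedisCluster(new HostAndPort("redisServer-IP", 7000))) {
            cluster.set("k", "v");
        }
    }
}
```

A non-cluster Jedis client does not follow the cluster's MOVED redirections on its own, which is why the single-endpoint question arises when the application cannot be changed.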

Connection pooling with HikariCP, springboot and kubernetes

I am using HikariCP for connection pooling in my reactive Spring Boot application running in a Kubernetes cluster. There will be lots of blocking calls and multiple database queries, so a larger number of database connections would ideally help, provided CPU cores are available.
Giving all the CPU cores to one Kubernetes container would waste resources, since the spike in requests will not always be there. So I am trying to explore how to use the Kubernetes autoscaler so that new application containers can be spun up as the number of requests increases. Two concerns:
1- I tried the Hikari setting com.zaxxer.hikari.blockUntilFilled=true to fill the connection pool during application startup. But when the autoscaler adds instances under increasing load, this will delay responses because creating the pool's connections takes time. Is it better to rely on Hikari's on-demand connection creation as demand spikes, rather than creating all the connections at once during startup?
2- Each Kubernetes container is a new instance of the application, so how do we manage the total number of database connections created?
I did a sample load test with JMeter and saw improved performance (and no timeouts etc.) under a large number of requests when using a fixed number of active database connections. There were many thread-interrupted exceptions when no fixed pool size was provided and connections were created dynamically as requests increased.
Any insights will help.
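For reference, a minimal sketch of the two pool strategies being weighed (the JDBC URL and sizes are placeholders; blockUntilFilled is read by Hikari as a JVM system property together with a positive initializationFailTimeout):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolConfig {
    public static HikariDataSource dataSource() {
        // Option A (as discussed above): pre-fill the pool at startup.
        System.setProperty("com.zaxxer.hikari.blockUntilFilled", "true");

        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:postgresql://db:5432/app"); // placeholder URL
        cfg.setMaximumPoolSize(20);                      // placeholder sizing
        cfg.setMinimumIdle(20);                          // fixed-size pool: min == max
        cfg.setInitializationFailTimeout(60_000);        // wait up to 60 s while filling

        // Option B: let Hikari grow the pool on demand instead,
        // e.g. cfg.setMinimumIdle(5) with maximumPoolSize left at 20.
        return new HikariDataSource(cfg);
    }
}
```

With the autoscaler in play, the total load on the database is roughly maximumPoolSize multiplied by the number of replicas, so the per-instance cap usually has to be budgeted against the database's connection limit.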

How to use Apache Ignite as a layer between Spring Boot app and MongoDB?

I have a Spring Boot application that uses MongoDB. My plan is to store data in a distributed caching system before it gets inserted into Mongo. If the database goes down, the cache will queue the writes and send them to the DB once it is back up. So the plan is to put the caching layer between the application and Mongo.
Can you suggest some ideas on how to implement this using Apache Ignite?
Take a look at the write-behind cache store mode. It retries writing to the underlying database if an insertion fails. Let me know how it works for you.
You can also implement a custom CacheStore for an Ignite cache and enable write-through for it. If the connection is lost, you can collect entries in a buffer while retrying to re-establish the connection.
See more: https://apacheignite.readme.io/docs/3rd-party-store
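A minimal sketch of the write-behind wiring described above; MongoCacheStore is a placeholder stub standing in for a real store implementation against Mongo, and the cache name and flush settings are assumptions:

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindExample {

    /** Placeholder store: a real implementation would read/write MongoDB here. */
    public static class MongoCacheStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) { return null; }                          // read from Mongo
        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) { } // upsert into Mongo
        @Override public void delete(Object key) { }                                     // remove from Mongo
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("personCache");

        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MongoCacheStore.class));
        ccfg.setWriteThrough(true);               // write-behind still requires write-through
        ccfg.setWriteBehindEnabled(true);         // buffer writes, flush to the store asynchronously
        ccfg.setWriteBehindFlushFrequency(5_000); // flush every 5 s (placeholder)
        ccfg.setWriteBehindFlushSize(1024);       // ...or once 1024 entries are buffered
        ccfg.setReadThrough(true);                // load cache misses from the store

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(ccfg);
            cache.put(1L, "value");               // lands in the cache now, in Mongo later
        }
    }
}
```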

spring security redis token store in clustered redis

I am trying to deploy a spring-security server with Redis as the token store.
In order to have some redundancy in Redis, we want to deploy it as a cluster.
The problem is that Jedis, which Spring Security uses as the underlying library, doesn't support pipelining in cluster mode, yet Spring Security relies on pipelining.
My question is how I can solve this situation. More precisely:
1- Should I use another mode of deployment for Redis? What actually works?
2- Can I somehow force Spring Security to use Redisson for connecting to Redis?
Please advise.
If you want redundancy, use replication (master/slave) not cluster.
If you have more data than RAM on a machine, use cluster.
If you have more data than RAM on a machine and want redundancy, use cluster with replication.
Jedis supports replication with sentinel, so give that a go unless you have a lot of data. Some more info on usage here: https://github.com/xetorthio/jedis/issues/725
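If replication with Sentinel is the route taken, a minimal Spring Data Redis wiring might look like the sketch below (the master name and sentinel addresses are placeholders):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class RedisSentinelConfig {

    @Bean
    public JedisConnectionFactory redisConnectionFactory() {
        // "mymaster" and the sentinel addresses are placeholders for the real deployment.
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master("mymaster")
                .sentinel("sentinel-1", 26379)
                .sentinel("sentinel-2", 26379)
                .sentinel("sentinel-3", 26379);
        return new JedisConnectionFactory(sentinelConfig);
    }
}
```

Since the whole data set lives on a single master under replication, pipelining behaves just as it does against a standalone instance.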

communication between spring instances behind a load balancer

I have a few instances of Spring apps running behind a load balancer. I am using EHCache as a caching system on each of these instances.
Let's say I receive a request that is refreshing a part of the cache on one instance. I need a way to tell the other instances to refresh their cache (or to replicate it).
I'm more interested in a solution based on Spring and not just cache replication, because there are other scenarios similar to this one that require the same solution.
How can I achieve this?
There is no simple Spring solution for this; it depends on the requirements. You can use any kind of pub/sub, such as a JMS topic, to notify your nodes. The catch with that approach is that you cannot guarantee consistency: the other nodes can still read the old data for a while. In my current project we use Redis. We configured it as a cache with Spring Data Redis, and there's no need to notify the other nodes since the cache is shared. In non-cache scenarios we also use Redis as a pub/sub service.
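A minimal sketch of the shared-cache setup described above, using Spring Data Redis (the Lettuce driver, host, and port are assumptions):

```java
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
@EnableCaching
public class SharedCacheConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Placeholder host/port: all app instances behind the load balancer point here,
        // so an update made by one instance is immediately visible to the others.
        return new LettuceConnectionFactory(new RedisStandaloneConfiguration("redis-host", 6379));
    }

    @Bean
    public CacheManager cacheManager(LettuceConnectionFactory connectionFactory) {
        // @Cacheable / @CacheEvict on any instance now read and write the same Redis-backed cache.
        return RedisCacheManager.create(connectionFactory);
    }
}
```

For the non-cache scenarios mentioned, Spring Data Redis also offers RedisMessageListenerContainer for pub/sub, which would play the role of the JMS topic.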
