I have multiple Spring Boot based microservices that connect to a DB2 database (the master DB). We want to keep an identical replica of the master, called the slave DB2 DB. Every month the master DB undergoes maintenance for 5-10 hours; during this window we want all our apps to connect to the slave DB automatically, and afterwards they should switch back to the master without manual intervention.
Is this possible to achieve in Spring Boot? I thought of using Spring Cloud Hystrix, but is that the correct architectural pattern? Is there a better approach?
It's possible to do this at the infrastructure level; your apps don't need to know that there was a failover.
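One concrete infrastructure-side option with DB2 is the JCC driver's automatic client reroute, where the driver itself retries against a configured alternate server when the primary is unreachable. A minimal sketch, assuming the IBM JCC driver is on the classpath; the hosts, ports, database name and credentials are placeholders, and the exact failback-to-primary behaviour should be verified against your driver/HADR documentation:

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.ibm.db2.jcc.DB2SimpleDataSource;

@Configuration
public class Db2FailoverConfig {

    @Bean
    public DataSource dataSource() {
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setServerName("master-host");     // placeholder primary host
        ds.setPortNumber(50000);
        ds.setDatabaseName("MYDB");          // placeholder database
        ds.setUser("appuser");
        ds.setPassword("secret");
        ds.setDriverType(4);                 // type 4 = pure-Java JDBC
        // When the primary is down, the driver reroutes connections here:
        ds.setClientRerouteAlternateServerName("replica-host");
        ds.setClientRerouteAlternatePortNumber("50000");
        return ds;
    }
}
```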
If you want to solve this on the application side, you can use Spring Cloud CircuitBreaker (Hystrix is deprecated, but Spring Cloud CircuitBreaker works with Resilience4J).
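For the application-side route, here is a minimal sketch with Spring Cloud CircuitBreaker and the Resilience4J starter. `OrderRepository`, `Order` and the breaker id `"masterDb"` are hypothetical names; the two repositories are assumed to be wired to the master and replica DataSources respectively:

```java
import org.springframework.cloud.client.circuitbreaker.CircuitBreaker;
import org.springframework.cloud.client.circuitbreaker.CircuitBreakerFactory;
import org.springframework.stereotype.Service;

@Service
public class OrderQueryService {

    private final OrderRepository masterRepo;   // assumed: repository on the master DataSource
    private final OrderRepository replicaRepo;  // assumed: repository on the replica DataSource
    private final CircuitBreaker breaker;

    public OrderQueryService(OrderRepository masterRepo,
                             OrderRepository replicaRepo,
                             CircuitBreakerFactory<?, ?> factory) {
        this.masterRepo = masterRepo;
        this.replicaRepo = replicaRepo;
        this.breaker = factory.create("masterDb");
    }

    public Order findOrder(long id) {
        // While the master is healthy the primary supplier runs. Once calls fail
        // and the breaker opens, the fallback reads from the replica; when the
        // master recovers, the breaker half-opens and traffic flows back without
        // manual intervention.
        return breaker.run(() -> masterRepo.findById(id),
                           throwable -> replicaRepo.findById(id));
    }
}
```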
I am developing a high-performance service in Spring Boot. It is deployed to OpenShift and runs in multiple pods.
Now I need some configuration that is stored in a database and read by all pods. The data can be changed through a web app.
I would like to do some performance tuning on the database part. What is the best approach?
Migrate to an H2 database running in a single pod that the other pods connect to?
Or some kind of Redis caching?
Is there any best practice or recommendation for this?
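For reference, the Redis-caching option usually means putting Spring's cache abstraction in front of the existing database read rather than moving the data. A minimal sketch, assuming `@EnableCaching` plus spring-boot-starter-data-redis are configured and `ConfigRepository` and its methods stand in for whatever already reads the config table:

```java
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ConfigService {

    private final ConfigRepository repository; // assumed: existing repository over the config table

    public ConfigService(ConfigRepository repository) {
        this.repository = repository;
    }

    // First read per key hits the database; subsequent reads are served from the
    // shared Redis cache, so all pods see the same value without extra DB calls.
    @Cacheable("appConfig")
    public String getValue(String key) {
        return repository.findValueByKey(key);
    }

    // The web app that edits configuration evicts the entry, so every pod
    // re-reads the fresh value on its next access.
    @CacheEvict(value = "appConfig", key = "#key")
    public void updateValue(String key, String value) {
        repository.saveValue(key, value);
    }
}
```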
We have Spring Cloud Data Flow running in Kubernetes to orchestrate Spring Batch jobs. For each new incoming file, Spring Cloud Data Flow spins up a new Spring Batch task.
Spring Batch accesses the database through a connection pool that holds (by default) 10 connections. That limits the number of jobs we can run at the same time, which goes against scalability principles. The only solutions we've found so far are:
Reduce the Spring Batch connection pool (see the sketch after this list): we cannot reduce it too much, since we use multithreading.
Increase the max number of connections in the database: it does not scale.
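For context on the first option, the pool cap is just the datasource configuration each Batch task starts with. A minimal sketch with HikariCP, equivalent to setting spring.datasource.hikari.maximum-pool-size; the connection details are placeholders and the cap of 3 is an arbitrary example:

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

@Configuration
public class BatchDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:db2://db-host:50000/BATCHDB"); // placeholder URL
        config.setUsername("batch");
        config.setPassword("secret");
        // Default is 10; every concurrently running task holds this many
        // connections, so the cap trades per-task parallelism for job count.
        config.setMaximumPoolSize(3);
        return new HikariDataSource(config);
    }
}
```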
We were wondering whether there is any way of delegating the interaction with the Spring Batch database tables to Spring Cloud Data Flow through an API.
Thanks.
I am trying to deploy a Spring Security server with Redis as the token store.
In order to have some redundancy in Redis, we want to deploy it as a cluster.
The problem is that Jedis, the library Spring Security uses underneath, doesn't support pipelining in cluster mode, but Spring Security uses pipelining.
My question is how I can solve this situation. More precisely:
1- Should I use another deployment mode for Redis? What actually works?
2- Can I somehow force Spring Security to use Redisson for connecting to Redis?
Please advise.
If you want redundancy, use replication (master/slave) not cluster.
If you have more data than RAM on a machine, use cluster.
If you have more data than RAM on a machine and want redundancy, use cluster with replication.
Jedis supports replication with sentinel, so give that a go unless you have a lot of data. Some more info on usage here: https://github.com/xetorthio/jedis/issues/725
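If you go the Sentinel route with Spring, here is a minimal sketch of wiring Jedis against a Sentinel-monitored master; the master group name "mymaster" and the sentinel hosts/ports are placeholders:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class RedisSentinelConfig {

    @Bean
    public JedisConnectionFactory redisConnectionFactory() {
        // The sentinels track the current master of the "mymaster" group and
        // hand the client a fresh master address after a failover.
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master("mymaster")
                .sentinel("sentinel-1", 26379)
                .sentinel("sentinel-2", 26379)
                .sentinel("sentinel-3", 26379);
        return new JedisConnectionFactory(sentinelConfig);
    }
}
```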
I want to create a distributed cluster in Spring XD.
I am able to create a cluster with a single admin, one ZooKeeper instance, and one instance each of Redis and HSQLDB.
But when I try to do that with multiple instances of ZooKeeper, HSQLDB, and Redis, I am not able to configure it correctly.
You should only have a single instance of ZooKeeper, HSQLDB and Redis. All xd-admins should be configured to connect to the same instance of each of these services, and so should the xd-containers.
As Thomas mentioned, the idea is that you deploy your (multiple) instances of admin and containers, and all of them connect to the same ZooKeeper, Redis, HSQLDB and RabbitMQ.
Why do you want to start multiple instances of these applications?
ZooKeeper provides the topology of the cluster and manages deployments. It also tracks when nodes go up and down, avoiding a single point of failure when you have many xd-admin instances (one is the leader and the others replicate; another becomes leader if the current one fails).
Or are you talking about running those instances in parallel to avoid a SPOF? In that case, you should try to dedicate an entire VM to each of those applications.
Is it possible to use transactions when Neo4j is used as a standalone server? I am calling methods on my Spring repositories, and each of them is probably executed as a separate transaction, but I would like to merge them into one. Is it possible to do this?
SDN doesn't yet support remote transactions (which only work with the transactional endpoint and Cypher).
So the option you have for speeding your operation up is to move the processing of the SDN entities into the server and expose a domain-level REST API to your clients (either with Jersey or SD-REST).
see: http://inserpio.wordpress.com/2014/04/30/extending-the-neo4j-server-with-spring-data-neo4j/
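For illustration, a minimal sketch of such a server-side extension with Jersey (a Neo4j unmanaged extension, 2.x-era API); the resource path, class name and the work inside the transaction are hypothetical:

```java
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;

@Path("/orders")
public class OrderResource {

    private final GraphDatabaseService database;

    // Neo4j injects the embedded database into unmanaged extensions via @Context.
    public OrderResource(@Context GraphDatabaseService database) {
        this.database = database;
    }

    @POST
    @Path("/import")
    public Response importOrders() {
        // Everything below runs inside one server-side transaction, so the
        // transaction-per-repository-call boundaries of remote SDN disappear.
        try (Transaction tx = database.beginTx()) {
            // ... multiple domain operations here, committed atomically ...
            tx.success();
        }
        return Response.ok().build();
    }
}
```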