Keyspaces in the same Cassandra cluster with different passwords - spring-boot

I am connecting to two different keyspaces with different credentials. These keyspaces are in the same cluster.
Currently I need to create two different Cluster beans in my Spring Boot app to achieve this, because the credentials are set on the Cluster; they are not set on the Session object.
For this scenario, is it right to have two separate Cluster beans? Can I avoid creating two different Cluster beans?

In Cassandra there is no such thing as a password on a keyspace. A password is set for a user that has some roles, and the role is then granted particular access to the keyspace - modify, read, etc. So to have different access rights to different keyspaces, you need different users, and to connect as different users from the same application, you need a separate Cluster object for every user (except when you're using DSE with the DSE Java driver, which supports so-called proxy users).
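As a rough illustration of that two-Cluster approach with the DataStax Java driver 3.x (the contact point, user names, and keyspace names below are placeholders, not taken from the question):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CassandraConfig {

    // One Cluster per user, because credentials live on the Cluster, not the Session.
    @Bean(destroyMethod = "close")
    public Cluster clusterKeyspaceA() {
        return Cluster.builder()
                .addContactPoint("127.0.0.1")              // placeholder contact point
                .withCredentials("user_a", "password_a")   // user with access to keyspace_a
                .build();
    }

    @Bean(destroyMethod = "close")
    public Cluster clusterKeyspaceB() {
        return Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withCredentials("user_b", "password_b")   // user with access to keyspace_b
                .build();
    }

    @Bean(destroyMethod = "close")
    public Session sessionKeyspaceA(Cluster clusterKeyspaceA) {
        return clusterKeyspaceA.connect("keyspace_a");
    }

    @Bean(destroyMethod = "close")
    public Session sessionKeyspaceB(Cluster clusterKeyspaceB) {
        return clusterKeyspaceB.connect("keyspace_b");
    }
}
```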

You can have a single Spring Cassandra Cluster bean and create two separate Session beans from it, setting the corresponding keyspace when creating each Session bean. A sample implementation can be found in the Spring Data docs.
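If one user actually has rights on both keyspaces, a minimal sketch of that single-Cluster / two-Session arrangement could look like this (names are placeholders):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SingleClusterConfig {

    // Single Cluster bean; both Sessions share its credentials.
    @Bean(destroyMethod = "close")
    public Cluster cluster() {
        return Cluster.builder()
                .addContactPoint("127.0.0.1")                 // placeholder
                .withCredentials("shared_user", "password")   // one user with rights on both keyspaces
                .build();
    }

    @Bean(destroyMethod = "close")
    public Session sessionA(Cluster cluster) {
        return cluster.connect("keyspace_a");   // Session bound to keyspace_a
    }

    @Bean(destroyMethod = "close")
    public Session sessionB(Cluster cluster) {
        return cluster.connect("keyspace_b");   // Session bound to keyspace_b
    }
}
```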

Related

How can Spring JPA use two different database connections for read and write operations?

In our backend we use AWS Aurora, which has two instances: one for read+write access and one for read-only access.
We use Spring Data JPA to manage our entities in the database. Would it be possible for Spring to use the read node for selecting data and the read/write node only for write operations?
How could we configure that? The read instance has a different hostname, but at the moment it does nothing.
Thanks.
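The thread doesn't show a configuration, but one commonly used option for this kind of read/write split is Spring's AbstractRoutingDataSource, routing on the read-only flag of the current transaction. The sketch below assumes two plain DataSources pointing at the Aurora writer and reader endpoints (all names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.transaction.support.TransactionSynchronizationManager;

// Routes to the reader DataSource when the current transaction is marked read-only.
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TransactionSynchronizationManager.isCurrentTransactionReadOnly()
                ? "reader" : "writer";
    }

    // Builds the routing data source from a writer and a reader DataSource
    // (e.g. one per Aurora endpoint).
    public static DataSource build(DataSource writer, DataSource reader) {
        ReadWriteRoutingDataSource routing = new ReadWriteRoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put("writer", writer);
        targets.put("reader", reader);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(writer);
        routing.afterPropertiesSet();
        // LazyConnectionDataSourceProxy defers fetching a connection until it is
        // actually needed, so the read-only flag is already known when routing happens.
        return new LazyConnectionDataSourceProxy(routing);
    }
}
```

Read-only service methods would then be annotated with @Transactional(readOnly = true) so their queries are routed to the reader.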

What should be the TransactionIdPrefix for multiple Spring Boot consumer/producer apps which are connected to Kafka (3 brokers)?

I have multiple Spring Boot applications connected to Kafka (clustered with 3 brokers), and I have also integrated transaction synchronization (ChainedKafkaTransactionManager). I want to know whether I should give the same TransactionIdPrefix value in the Kafka config for all of the applications, or a different one for each.
I tried giving a randomly generated TransactionIdPrefix to each application, but sometimes, in a multi-threaded environment, the listener methods read old data from the database (JPA repositories).
Is this a problem caused by the different TransactionIdPrefix values?
It depends; if they are multiple instances of the same app and the transactions are started by consumers, the prefix must be the same, so that zombie fencing is handled properly when partitions move from one instance to another after a rebalance.
If the transactions are started by producers, the prefix must be unique in each instance.
If they are different applications they should have different prefixes, regardless of what starts the transaction.
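For reference, a rough sketch of where the prefix is set in Spring Kafka (the property name app.kafka.tx-prefix and the broker addresses are placeholders):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaTxConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory(
            // e.g. "orders-app-tx-": same value on every instance of the same app
            // when transactions are started by consumers; unique per instance
            // when transactions are started by producers.
            @Value("${app.kafka.tx-prefix}") String txPrefix) {

        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        DefaultKafkaProducerFactory<String, String> factory =
                new DefaultKafkaProducerFactory<>(props);
        // Enables transactions; the actual transactional.id is this prefix plus a
        // suffix managed by Spring Kafka.
        factory.setTransactionIdPrefix(txPrefix);
        return factory;
    }
}
```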

How to connect to several Cassandra clusters at the same time with Spring Boot Starter Data Cassandra

I need to connect to different Cassandra clusters depending on the input data. I have an idea of how to achieve that by manually creating a CassandraTemplate for each cluster, but what about spring-boot-starter-data-cassandra? Does it allow the same behavior?
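A rough sketch of the manual approach mentioned in the question, one CassandraTemplate per cluster (driver 3.x style; contact points and keyspace names are placeholders):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.core.CassandraTemplate;

@Configuration
public class MultiClusterConfig {

    @Bean(destroyMethod = "close")
    public Session sessionClusterOne() {
        Cluster cluster = Cluster.builder()
                .addContactPoint("cluster-one.example.com")   // placeholder
                .build();
        return cluster.connect("keyspace_one");
    }

    @Bean(destroyMethod = "close")
    public Session sessionClusterTwo() {
        Cluster cluster = Cluster.builder()
                .addContactPoint("cluster-two.example.com")   // placeholder
                .build();
        return cluster.connect("keyspace_two");
    }

    @Bean
    public CassandraTemplate templateClusterOne(Session sessionClusterOne) {
        return new CassandraTemplate(sessionClusterOne);
    }

    @Bean
    public CassandraTemplate templateClusterTwo(Session sessionClusterTwo) {
        return new CassandraTemplate(sessionClusterTwo);
    }
}
```

Note that spring-boot-starter-data-cassandra's auto-configuration targets a single cluster/session, so additional connections would typically still be declared manually along these lines.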

Spring Security Redis token store in clustered Redis

I am trying to deploy a Spring Security server with Redis as the token store.
In order to have some redundancy in Redis, we want to deploy it as a cluster.
The problem is that Jedis, which Spring Security uses as its underlying library, doesn't support pipelining in cluster mode, but Spring Security uses pipelining.
My question is how I can solve this situation. More precisely:
1- Should I use another deployment mode for Redis? What actually works?
2- Can I somehow force Spring Security to use Redisson for connecting to Redis?
Please advise.
If you want redundancy, use replication (master/slave), not cluster.
If you have more data than RAM on a single machine, use cluster.
If you have more data than RAM on a single machine and want redundancy, use cluster with replication.
Jedis supports replication with Sentinel, so give that a go unless you have a lot of data. Some more info on usage here: https://github.com/xetorthio/jedis/issues/725
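A minimal sketch of pointing Spring Data Redis at a Sentinel-managed master/replica setup (master name, hosts, and ports are placeholders):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class RedisSentinelConfig {

    @Bean
    public RedisConnectionFactory redisConnectionFactory() {
        // One logical master ("mymaster") monitored by several sentinels; Jedis
        // follows the sentinels to the current master, so pipelining keeps
        // working as it does in standalone mode.
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master("mymaster")
                .sentinel("sentinel-1.example.com", 26379)   // placeholder hosts
                .sentinel("sentinel-2.example.com", 26379)
                .sentinel("sentinel-3.example.com", 26379);
        return new JedisConnectionFactory(sentinelConfig);
    }
}
```

The token store can then be built on top of this connection factory (for example, spring-security-oauth2's RedisTokenStore takes a RedisConnectionFactory).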

How to configure Redis, HSQLDB, ZooKeeper, and multiple admins in Spring XD

I want to create a distributed cluster in Spring XD.
I am able to create a cluster with a single admin, one ZooKeeper instance, one Redis instance, and HSQLDB.
But when I try to do that with multiple instances of ZooKeeper, HSQLDB, and Redis, I'm not able to configure it correctly.
You should only have a single instance of ZooKeeper, HSQLDB, and Redis. All xd-admins should be configured to connect to the same instance of each of these services, and so should the xd-containers.
Like Thomas has mentioned, the idea is that you have your (multiple) instances of admin and containers deployed, and they all connect to the same ZooKeeper, Redis, HSQLDB & RabbitMQ.
Why do you want to start multiple instances of these applications?
ZooKeeper provides the topology of the cluster and manages deployments. It also notes when nodes go up and down, avoiding single points of failure when you have many xd-admin instances (one is the leader and the others replicate; they will become leader if the current one fails).
Or are you talking about making those instances parallel to avoid a SPOF? In that case, you should try to dedicate an entire VM to each of those applications.
