What is the max number of nodes in RethinkDB?

I've read in RethinkDB's docs that we can have anywhere from one to sixteen nodes, but I don't know whether that is just a way of speaking or a real limit.
I launched 20 VirtualBox VMs to create a cluster and had trouble keeping all nodes in the cluster online at the same time; 3 or 4 nodes lose connectivity. This would make sense with the 16-node limit, but I haven't found similar limits for other NoSQL databases.
Is 16 a real maximum number of nodes per cluster in RethinkDB?
Thanks!

Short answer is: There is no hard limit.
The docs say 16 machines because that is what we have tested so far.
Some tests have been run with 64 nodes and while it doesn't scale as much as it should, it still works.
RethinkDB is aiming for a smooth experience with 100 servers and 100,000 tables -- see https://github.com/rethinkdb/rethinkdb/issues/1861 to track progress.
Also, if you run 20 VMs on the same machine, the host may not have enough resources to run the cluster, which would explain the timeouts.
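If you want to verify which nodes have actually joined, here is a quick sketch using the official Python driver and the rethinkdb.server_status system table (the host and port are placeholders for any reachable node in your cluster):
# Sketch: list the servers currently connected to the cluster.
# Assumes the RethinkDB Python driver (pip install rethinkdb) and a node
# reachable on localhost:28015; adjust host/port for your setup.
from rethinkdb import RethinkDB

r = RethinkDB()
conn = r.connect(host="localhost", port=28015)
servers = list(r.db("rethinkdb").table("server_status").run(conn))
print("servers connected:", len(servers))
for s in servers:
    print(s["name"], s["network"]["hostname"])
conn.close()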

Related

Are there any ways to speed Greenplum up?

I have 3 VMs (one master and two segment hosts, running the open-source version), each with 32 cores / 64 threads and 251 GB of memory.
There is one big table, which has nearly 70 fields and one hundred million records.
Part of the definition is as follows:
with (appendonly=true,compresslevel=5)
distributed by(record_id) partition by range(dt_date)
(partition p201012 start ('2021-01-01'::date) end ('2021-01-31'::date) every ('1 days'::interval))
The cluster has 30 primary segments and 30 mirror segments.
Both insertion (<2000 records/s) and selection (about 25 s) are too slow, since we get one hundred million records a day and anything over one second is not acceptable.
So my questions are: is anyone using GPDB? Are there any ways to speed it up?
Thank you!
My initial thought is that you have way too many primaries/mirrors (30) for that few CPUs on the system. The general rule of thumb is 3-4 CPUs/vCPUs per segment (i.e., Postgres database). Your system should be reconfigured to only have 2 primaries per host to better utilize the CPU and memory on the hosts; 1 might even be better with that small amount of memory on the segment hosts.
As it stands now, you are swamping the system with too many databases trying to utilize too few system resources.
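As a rough way to check that ratio on a running cluster, here is a sketch (connection details are placeholders; assumes the psycopg2 driver and the 64 threads per host from the question) that counts primary segments per host from gp_segment_configuration:
# Sketch: compare primaries-per-host against the 3-4 vCPU per segment rule of thumb.
# Assumes psycopg2; connection details below are placeholders.
import psycopg2

VCPUS_PER_HOST = 64  # 32 cores / 64 threads per segment host, from the question

conn = psycopg2.connect(host="gp-master", dbname="postgres", user="gpadmin")
cur = conn.cursor()
cur.execute("""
    SELECT hostname, count(*) AS primaries
    FROM gp_segment_configuration
    WHERE role = 'p' AND content >= 0  -- primary segments, excluding the master
    GROUP BY hostname
""")
for hostname, primaries in cur.fetchall():
    print(f"{hostname}: {primaries} primaries, "
          f"{VCPUS_PER_HOST / primaries:.1f} vCPUs per primary")
conn.close()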
Jim

Is one large AWS instance better than several smaller instances for ScyllaDB?

I have one ScyllaDB cluster with 9 nodes and RF=3 using Amazon AWS i3en.xlarge instances.
I'm curious if 3 i3en.3xlarge are much better than 9 i3en.xlarge.
Full disclosure - I work on the ScyllaDB project.
Theoretically, Scylla's shard-per-core architecture means that 16 4xlarges or 4 16xlarges should perform fundamentally the same. Each vCPU performs as an independent shared-nothing shard doing its own thing. So, how those shards are configured is irrelevant.
However, in the real world, there are good reasons for scaling up, rather than scaling out. For example:
Larger nodes have better network guarantees from AWS.
Larger nodes have fewer noisy neighbor problems.
Managing a few nodes is generally easier than managing many nodes.
Generally speaking, our users have had better experiences with larger nodes. But the choice is yours.
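To make the shard math concrete, here is a small sketch (the vCPU counts are the published AWS specs for these instance types):
# Sketch: total shard count is the same either way, since Scylla runs roughly
# one shard per vCPU. vCPU figures are the published AWS instance specs.
layouts = {
    "9 x i3en.xlarge":  (9, 4),    # 9 nodes, 4 vCPUs each
    "3 x i3en.3xlarge": (3, 12),   # 3 nodes, 12 vCPUs each
}
for name, (nodes, vcpus) in layouts.items():
    print(f"{name}: {nodes * vcpus} vCPUs -> roughly {nodes * vcpus} shards")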

Total number of sessions per user for an Oracle cluster of 4 nodes

We have Oracle 11g Enterprise 64-bit, and it is a cluster of 4 nodes.
There is a user with a limit of 96 SESSIONS_PER_USER. We thought that the total limit of sessions for this user would be 4 nodes * 96 = 384 sessions. But in reality it is no more than about 180 sessions. After approximately 180 sessions are opened, we get errors:
ORA-12850: Could not allocate slaves on all specified instances: 4 needed, 3 allocated
ORA-12801: error signaled in parallel query server P004, instance 3599
ORA-02391: exceeded simultaneous SESSIONS_PER_USER limit
The question is: why is the total limit only about 180 sessions? Why is it not 4 * 96?
We would greatly appreciate your answer.
Although I can't find it documented, a quick test implies you are correct that the maximum total number of sessions is equal to SESSIONS_PER_USER * Number of Nodes. However, that will only be true if the sessions are balanced evenly across the nodes. Each instance still enforces that limit.
Check the service you are connecting to, and if that service is available on all nodes. Run these commands to look at the preferred nodes and the actual running nodes. It's possible that there was a failure, a service migrated to one node, and never migrated back.
# Preferred nodes:
srvctl config service -d $your_db_name
# Running nodes:
srvctl status service -d $your_db_name
Or possibly the connections are hard-wired to a specific instance. This is usually a mistake, but sometimes it is necessary for things like running the PL/SQL debuggers. Run this query to see where your parallel sessions are spawning:
select inst_id, gv$session.* from gv$session;
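If you prefer to script that check, here is a sketch using the cx_Oracle driver (the connection string and username are placeholders) that groups the session count by instance for the user in question:
# Sketch: count one user's sessions on each RAC instance via gv$session.
# Assumes the cx_Oracle driver; connection string and username are placeholders.
import cx_Oracle

conn = cx_Oracle.connect("monitor_user", "password", "db-scan/service_name")
cur = conn.cursor()
cur.execute("""
    SELECT inst_id, COUNT(*)
    FROM gv$session
    WHERE username = :u
    GROUP BY inst_id
    ORDER BY inst_id
""", u="APP_USER")
for inst_id, sessions in cur:
    print(f"instance {inst_id}: {sessions} sessions (SESSIONS_PER_USER limit is 96)")
conn.close()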
Also check the parameter PARALLEL_FORCE_LOCAL and make sure it is not set to true:
select value from gv$parameter where name = 'parallel_force_local';
Or perhaps there's an issue with counting the number of sessions. The number of sessions is frequently more than the requested degree of parallelism. For example, if the query sorts or hashes, Oracle will double the number of parallel sessions: one set to produce the rows and one set to consume the rows. Are you sure of the number of parallel sessions being requested?
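As a quick illustration (the numbers are made up for the example):
# Sketch, illustrative numbers only: producer + consumer slave sets double the count.
dop = 48                 # requested degree of parallelism
slave_sets = 2           # producers + consumers when the plan sorts or hashes
print(dop * slave_sets)  # 96 parallel sessions for a single query, before the coordinator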
Also, in my tests, when I ran a parallel query without enough SESSIONS_PER_USER, it simply downgraded my query. I'm not sure why your database is throwing an error. (Perhaps you've got parallel queuing and a timeout set?)
Lastly, it looks like you are using an extremely high degree of parallelism. Are you sure that you need hundreds of parallel processes?
Chances are there are a lot of other potential issues I haven't thought of. Parallelism and RAC are complicated.

Cassandra partition size and performance?

I was playing around with the cassandra-stress tool on my own laptop (8 cores, 16 GB) with Cassandra 2.2.3 installed out of the box with its stock configuration. I was doing exactly what is described here:
http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
And measuring its insert performance.
My observations were:
using the code from https://gist.github.com/tjake/fb166a659e8fe4c8d4a3 without any modifications I had ~7000 inserts/sec.
when modifying line 35 in the code above (cluster: fixed(1000)) to "cluster: fixed(100)", i.e. configuring my test data distribution to have 100 clustering keys instead of 1000, the performance jumped up to ~11000 inserts/sec
when configuring it to have 5000 clustering keys per partition, the performance dropped to just 700 inserts/sec
The documentation says, however, that Cassandra can support up to 2 billion rows per partition. I don't need that many, but I still don't get how just 5000 records per partition can slow the writes down 10 times. Or am I missing something?
Supporting is a little different from "best performing". You can have very wide partitions, but the rule of thumb is to try to keep them under 100 MB, for miscellaneous performance reasons. Some operations can be performed more efficiently when the entirety of the partition can be stored in memory.
As an example (an old one -- this is a complete non-issue post 2.0, where everything is single-pass): in some versions, when the partition size is >64 MB, compaction becomes a two-pass process, which halves compaction throughput. It still worked with huge partitions; I've seen many multi-GB ones that worked just fine. But systems with huge partitions were difficult to work with operationally (managing compactions/repairs/GCs).
I would say target the 100 MB rule of thumb initially and test from there to find your own optimum. Things will always behave differently based on the use case; to get the most out of a node, the best you can do is run benchmarks as close as possible to what you're going to do (true of all systems). This seems like something you're already doing, so you're definitely on the right path.
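As a rough illustration of that 100 MB target (the average row sizes here are assumptions; measure your own schema):
# Sketch: how many rows of a given average size fit under the ~100 MB
# partition rule of thumb. Row sizes are assumptions; measure your own schema.
TARGET_BYTES = 100 * 1024**2  # ~100 MB rule-of-thumb ceiling

for avg_row_bytes in (100, 1_000, 10_000):
    max_rows = TARGET_BYTES // avg_row_bytes
    print(f"~{avg_row_bytes} B/row -> roughly {max_rows:,} rows per partition")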

Set up Elasticsearch for production

I want to know what configuration setup would be ideal for my case. I have 4 servers (nodes) each with 128 GB RAM. I'll have all 4 nodes under one cluster.
The total number of indexes would be 10, each receiving 1,500,000 documents per day.
Since I'll have 4 servers (nodes), I'll set master: true and data: true on all of them, so that if one node goes down, another becomes master. Every index will have 5 shards.
I want to know which config parameters should I alter in order to gain maximum potential from elastic.
Also tell me how much memory is enough for my usage, since I'll have very frequent select queries in production (maybe 1000 requests per second).
I need a detailed suggestion.
I'm not sure anyone can give you a definitive answer to exactly how to configure your servers, since it is very dependent on your data structure, mapping, and specific queries.
You should read this great article series by Elastic regarding production environments.
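As a rough starting point only (not a sizing recommendation), here is a sketch of the shard and ingest math for the setup described, assuming 1 replica per primary shard:
# Sketch: shard and ingest math for the setup in the question.
# The replica count is an assumption (1 replica per primary).
nodes = 4
indexes = 10
primary_shards_per_index = 5
replicas_per_primary = 1           # assumed
docs_per_index_per_day = 1_500_000

total_shards = indexes * primary_shards_per_index * (1 + replicas_per_primary)
print(f"total shards: {total_shards} (~{total_shards / nodes:.0f} per node)")
print(f"documents ingested per day: {indexes * docs_per_index_per_day:,}")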
