I have launched 3 Amazon EC2 instances and set up DataStax Cassandra as follows:
1. Region - US EAST:
cassandra.yaml configuration:
a. listen_address: private IP of this instance
b. broadcast_address: public IP of this instance
c. seeds: 50.XX.XX.X1, 50.XX.XX.X2 (public IPs of node 1 and node 2; see the sketch after the region list)
cassandra-rackdc.properties configuration:
dc=DC1
rack=RAC1
dc_suffix=US_EAST_1
2. Region - US WEST:
I followed the same procedure as above.
3. Region - EU IRELAND:
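Spelled out as a sketch, the cassandra.yaml settings described in 1(a)-(c) amount to roughly the following on each node (the IPs are the question's placeholders):
listen_address: <private_ip_of_this_instance>
broadcast_address: <public_ip_of_this_instance>
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "50.XX.XX.X1, 50.XX.XX.X2"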
The result of the above configuration:
All the nodes work fine individually, but when I run
$nodetool status
on each of the three nodes, it lists only the local node.
I was trying to achieve the following:
1. Launch 3 Cassandra nodes in three different regions, say US-EAST, US-WEST, and EU-IRELAND,
with the following configuration/methodology:
a. Ec2MultiRegionSnitch
b. Replication strategy: SimpleStrategy
c. Replication factor: 3
d. Read and write consistency level: QUORUM.
I want to attain only one thing: if any two of the regions (or any two of the nodes) are down, I can survive with the remaining one node.
My questions are:
Where did I make a mistake, and how can I meet my requirements?
Any help or input is much appreciated.
Thanks.
This is what worked for me with Cassandra 3.0:
endpoint_snitch: Ec2MultiRegionSnitch
listen_address: <leave_blank>
broadcast_address: <public_ip_of_server>
rpc_address: 0.0.0.0
broadcast_rpc_address: <public_ip_of_server>
- seeds: "one_ip_from_other_DC"
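Laid out in cassandra.yaml, those settings sit roughly as follows; this is only a sketch, using the answer's own placeholders, with the seeds list pointing at one node from the other DC as the answer suggests:
endpoint_snitch: Ec2MultiRegionSnitch
listen_address:                            # left blank
broadcast_address: <public_ip_of_server>
rpc_address: 0.0.0.0
broadcast_rpc_address: <public_ip_of_server>
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "<one_ip_from_other_DC>"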
Finally, I found the resolution to my issue. Since I am using SimpleStrategy as the replication strategy, I do not need to configure cassandra-rackdc.properties.
Once I removed the cassandra-rackdc.properties file from all nodes, everything worked as expected.
Thanks
I have tried the following configuration in the elasticsearch.yml file:
network.host: aa.bbb.ccc.dd (that being my IPv4 address)
http.port: 9200
When I try to run elasticsearch.bat on my Windows machine, the response is as follows:
the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
I am really not sure what to configure for the cluster initialization. The default values are discovery.seed_hosts: ["host1", "host2"] and cluster.initial_master_nodes: ["node-1", "node-2"].
In short, if you are running Elasticsearch locally (single node), or with just a single node in the cloud, then use the config below in your elasticsearch.yml to make it work and avoid the production check; more info about this setting is in this SO answer:
discovery.type: single-node
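With the settings from the question, the relevant part of elasticsearch.yml would then look roughly like this (the address is the question's placeholder):
network.host: aa.bbb.ccc.dd
http.port: 9200
discovery.type: single-node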
This is the configuration I used, since I had only one machine running the Elasticsearch db (1 node only).
node.data: true
network.host: 0.0.0.0
discovery.seed_hosts: []
cluster.initial_master_nodes: []
Elasticsearch 7 requires information to form a cluster. This is provided by the following two properties in elasticsearch.yml:
cluster.initial_master_nodes: the initial set of nodes whose votes are considered in the master election process.
discovery.seed_hosts: the set of nodes which are master-eligible. This should contain the names of all master-eligible nodes.
So, for example, if you are forming a cluster with three master-eligible nodes n0, n1, and n2, your config will look something like this:
cluster.initial_master_nodes: ["n0", "n1", "n2"]
discovery.seed_hosts: ["n0", "n1", "n2"]
Note: cluster.initial_master_nodes is used by Elasticsearch only once, the very first time the cluster forms.
For more detailed information, read this guide.
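Putting the above together, a minimal elasticsearch.yml sketch for node n0 might look like this; cluster.name and network.host are illustrative assumptions, not part of the answer above:
cluster.name: my-cluster                         # assumed cluster name
node.name: n0
network.host: 0.0.0.0                            # assumed bind address
discovery.seed_hosts: ["n0", "n1", "n2"]
cluster.initial_master_nodes: ["n0", "n1", "n2"]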
I also faced the same issue with Elasticsearch version 7.6.2. The solution to the above-mentioned problem is to either add discovery.seed_hosts: 127.0.0.1:9300 or set discovery.type: single-node in the elasticsearch.yml file to avoid the production-use error.
Click here for discovery and cluster formation settings.
I have provided the detailed answer here.
I am adding my answer from a Docker container perspective. I initially tried running 3 Elasticsearch nodes in the same cluster, then tried running only 1 and faced the same issue. To resolve it, I deleted the Docker volumes. Please note, my Docker Elasticsearch nodes had no data, so there was no data loss due to the volume deletion.
https://discuss.elastic.co/t/how-my-config-file-should-be-on-publish-mode-with-a-single-node/189034
To all those SymmetricDS nerds over there, this one's for you all.
Right, so we have a main db, DB-01. We have 3 instances of our application running, namely R1, R2, R3. Each instance has its own in-memory db, namely D1, D2, D3, which it (the application) accesses respectively. We are using SymmetricDS to do one-way sync from DB-01 to D1, D2, D3. So, there is a server node, corporate C0, pointing to DB-01, and 3 client nodes, stores S1, S2, S3, pointing to D1, D2, D3 respectively.
All is working fine.
But now, we would like to introduce High Availability and thereby FAILOVER into this topology, i.e., at any time there will be 2 server nodes running, say Master and Slave, both accessing the same DB-01. If the Master server goes down, clients should automatically connect to the Slave node and continue operation.
What configuration changes might be required to accomplish this? Are there any examples or documentation that I can reproduce to understand this concept?
We do this via clustering, with 2 SymmetricDS services running on 2 app servers pointing to the High Availability (HA) connections. Then all you need is the HA connections to fail over as normal, and SymmetricDS clustering does the rest.
Link for the user manual on clustering.
https://www.symmetricds.org/doc/3.13/html/user-guide.html#_clustering
EDIT: Let me put some configs on here for you. Service 1:
engine.name=<SDS_SERVICE_1>
db.driver=net.sourceforge.jtds.jdbc.Driver
db.url=jdbc:jtds:sqlserver://<HA_connection1>:1433/<DB>;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880
db.user=***********
db.password=***********
registration.url=http://<IP>:7004/sync/<SDS_MAIN>
sync.url=http://<IP>:7004/sync/<SDS_SERVICE_1>
group.id=<GID>
external.id=100
auto.registration=true
initial.load.create.first=true
sync.table.prefix=sym
start.initial.load.extract.job=false
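# clustering settings: enabled on both services; note that cluster.server.id differs (11 here, 12 in Service 2)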
cluster.lock.enabled=true
cluster.server.id=11
cluster.lock.timeout.ms=600000
cluster.lock.refresh.ms=60000
compression.level=-1
compression.strategy=0
Service 2:
engine.name=<SDS_SERVICE_2>
db.driver=net.sourceforge.jtds.jdbc.Driver
db.url=jdbc:jtds:sqlserver://<HA_connection2>:1433/<DB>;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880
db.user=***********
db.password=***********
registration.url=http://<IP>:7004/sync/<SDS_MAIN>
sync.url=http://<IP>:7004/sync/<SDS_SERVICE_2>
group.id=<GID>
external.id=100
auto.registration=true
initial.load.create.first=true
sync.table.prefix=sym
start.initial.load.extract.job=false
cluster.lock.enabled=true
cluster.server.id=12
cluster.lock.timeout.ms=600000
cluster.lock.refresh.ms=60000
compression.level=-1
compression.strategy=0
I have installed Cassandra on two individual nodes, both on Amazon. When I try to configure the nodes to form a cluster, I receive the following error:
ERROR [main] 2016-05-12 11:01:26,402 CassandraDaemon.java:381 - Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Cannot change the number of tokens from 1 to 256.
I am using these settings in the cassandra.yaml file:
listen_address and rpc_address: private IP address
seeds: public IP [Elastic IP address]
num_tokens: 256
This message usually appears when num_tokens is changed after the node has been bootstrapped.
The solution is:
Stop Cassandra on all nodes
Delete the data directory (inc. datafiles, commitlog and saved_caches)
Double check that num_tokens is set to 256, initial_token is commented out and auto_bootstrap is set to true in cassandra.yaml
Start Cassandra on all nodes
This will wipe your existing cluster and cause the nodes to bootstrap from scratch again.
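For reference, the cassandra.yaml lines being double-checked in the steps above would read:
num_tokens: 256
# initial_token is left commented out
auto_bootstrap: true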
Cassandra doesn't support changing between vnodes and static tokens after a datacenter is bootstrapped. If you need to change from vnodes to static tokens or vice versa in an already running cluster, you'll need to create a second datacenter using the new configuration, stream your data across, and then decommission the original nodes.
I ran a Spark cluster of 12 nodes (8G memory and 8 cores for each) for some tests.
I'm trying to figure out why the data localities of a simple wordcount app are all "Any" in the "map" stage. The 14GB dataset is stored in HDFS.
I have run into the same problem, and in my case it was a problem with the configuration. I was running on EC2 and I had a name mismatch. Maybe the same thing happened to you.
When you check how HDFS sees your cluster, it should look something along these lines:
hdfs dfsadmin -printTopology
Rack: /default-rack
172.31.xx.xx:50010 (ip-172-31-xx-xxx.eu-central-1.compute.internal)
172.31.xx.xx:50010 (ip-172-31-xx-xxx.eu-central-1.compute.internal)
The same should be seen in the executors' addresses in the UI (by default it's http://your-cluster-public-dns:8080/).
In my case I was using the public hostname for the Spark slaves. I changed SPARK_LOCAL_IP in $SPARK/conf/spark-env.sh to use the private name as well, and after that change I get NODE_LOCAL most of the time.
I encountered the same problem today. This is my situation:
My cluster has 9 workers (each running one executor by default). When I set --total-executor-cores 9, the locality level is NODE_LOCAL, but when I set total-executor-cores below 9, such as --total-executor-cores 7, the locality level becomes ANY and the total time cost is 10x that of the NODE_LOCAL level. You can give it a try.
I'm running my cluster on EC2s, and I fixed my problem by adding the following to spark-env.sh on the name node
SPARK_MASTER_HOST=<name node hostname>
and then adding the following to spark-env.sh on the data nodes
SPARK_LOCAL_HOSTNAME=<data node hostname>
Don't start the slaves with start-all.sh; you should start each slave individually:
$SPARK_HOME/sbin/start-slave.sh -h <hostname> <masterURI>
We are trying to run a Cassandra cluster on AWS/EC2 within a standard VPC footprint (Cassandra nodes on private subnets). Because this is AWS, there is always a chance that an EC2 instance will terminate or reboot with no warning. I have been simulating this case on a test cluster, and I am seeing things that I thought a cluster was supposed to prevent. Specifically, if a node reboots, some data goes temporarily missing until the node completes its reboot. If a node terminates, it appears that some data is lost forever.
For my test I just did a bunch of writes (using QUORUM consistency) to some keyspaces, then interrogated the contents of those keyspaces as I brought down nodes (either through reboot or terminate). I'm just using cqlsh SELECT at consistency level ONE to do the keyspace/column-family interrogation of the cluster.
Note that even though I am performing no writes to the cluster while doing the SELECTs, rows temporarily disappear when rebooting and can go missing permanently during termination.
I thought Netflix Priam might be able to help, but sadly it doesn't work in a VPC the last time I checked.
Also, because we are using ephemeral-storage instances, there is no equivalent of 'shutdown', so I cannot run any scripts during the reboot/terminate of an instance to perform a nodetool decommission or nodetool removenode before the instance goes away. Terminate is the equivalent of kicking the plug out of the wall.
Since I am using a replication factor of 3 and QUORUM writes, all data should be written to at least 2 nodes. So, unless I am totally misunderstanding things (which is possible), losing one node should not mean that I lose any data for any period of time when I am reading at consistency level ONE.
Questions
Why wouldn't a 6 node cluster with a replication factor of 3 work?
Do I need to run something like a 12 node cluster with a replication factor of 7? Don't bother telling me that will fix the problem, because it doesn't.
Do I need to use consistency level of ALL on the writes then use ONE or QUORUM on the reads?
Is there something not quite right with virtual nodes? (unlikely)
Are there nodetool commands besides removenode that I need to run when a node terminates to recover missing data? As mentioned earlier, when a reboot occurs, eventually the missing data reappears.
Is there some cassandra savant who can look at my cassandra.yaml file below and send me on the path to salvation?
More Info added 7/19
I don't think QUORUM vs ONE vs ALL is the issue. The test I set up performs no writes to the keyspaces after the initial population of the column families, so the data has had plenty of time (hours) to make it to all the nodes, as required by the replication factor. Plus, the test dataset is REALLY small (2 column families with about 300-1000 values each). In other words, the data is completely static.
The behavior I am seeing seems to be tied to the fact that the EC2 instance is no longer on the network. The reason I say this is that if I log on to a node and just do a 'cassandra stop', I see no loss of data. But if I do a reboot or terminate, I start getting the following in a stack trace:
CassandraHostRetryService - Downed Host Retry service started with queue size -1 and retry delay 10s
CassandraHostRetryService - Downed Host retry shutdown complete
CassandraHostRetryService - Downed Host retry shutdown hook called
Caused by: TimedOutException()
Caused by: TimedOutException()
So it seems to be more of a network-communication issue, in that the cluster expects, for example, 10.0.12.74 to be on the network after it has joined the cluster. If that IP suddenly becomes unreachable, due to either reboot or termination, the timeouts start happening.
When I do a nodetool status under all three scenarios (cassandra stop, reboot, or terminate), the status of the node shows up as DN, which is what you would expect. Eventually nodetool status returns to UN after a cassandra start or a reboot, but obviously after termination it stays DN.
Details of my Configuration
Here are some details of my configuration (cassandra.yaml is at the bottom of this posting):
Nodes are running in private subnets of a VPC.
Cassandra 1.2.5 with num_tokens: 256 (virtual nodes). initial_token: (blank). I am really hoping this works, because all of our nodes run in autoscaling groups, so the thought that redistribution could be handled dynamically is appealing.
EC2 m1.large one seed and one non-seed node in each availability zone. (so 6 total nodes in the cluster).
Ephemeral storage, not EBS.
Ec2Snitch with NetworkTopologyStrategy and all keyspaces have replication factor of 3.
Non-seed nodes are auto_bootstrapped, seed nodes are not.
sample cassandra.yaml file
cluster_name: 'TestCluster'
num_tokens: 256
initial_token:
hinted_handoff_enabled: true
max_hint_window_in_ms: 10800000
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
authenticator: org.apache.cassandra.auth.AllowAllAuthenticator
authorizer: org.apache.cassandra.auth.AllowAllAuthorizer
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
disk_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
row_cache_provider: SerializingCacheProvider
saved_caches_directory: /opt/company/dbserver/caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "SEED_IP_LIST"
flush_largest_memtables_at: 0.75
reduce_cache_sizes_at: 0.85
reduce_cache_capacity_to: 0.6
concurrent_reads: 32
concurrent_writes: 8
memtable_flush_queue_size: 4
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: LISTEN_ADDRESS
start_native_transport: false
native_transport_port: 9042
start_rpc: true
rpc_address: 0.0.0.0
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16
incremental_backups: true
snapshot_before_compaction: false
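# AUTO_BOOTSTRAP is filled in per node: true for non-seed nodes, false for seeds (per the configuration details above)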
auto_bootstrap: AUTO_BOOTSTRAP
column_index_size_in_kb: 64
in_memory_compaction_limit_in_mb: 64
multithreaded_compaction: false
compaction_throughput_mb_per_sec: 16
compaction_preheat_key_cache: true
read_request_timeout_in_ms: 10000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 10000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
cross_node_timeout: false
endpoint_snitch: Ec2Snitch
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
index_interval: 128
server_encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/.truststore
    truststore_password: cassandra
client_encryption_options:
    enabled: false
    keystore: conf/.keystore
    keystore_password: cassandra
internode_compression: all
I think http://www.datastax.com/documentation/cassandra/1.2/cassandra/dml/dml_config_consistency_c.html will clear up a lot of this. In particular, QUORUM/ONE is not guaranteed to return the most recent data. QUORUM/QUORUM is. So is ALL/ONE, but that will be intolerant to failure on write.
Edit to go with the new information:
CassandraHostRetryService is part of Hector. I assumed you were testing with cqlsh like a sane person would. Lessons:
Use cqlsh for testing
Use the DataStax Java Driver for building your application, which is faster, easier to use, and has more insight into the cluster state than Hector thanks to the native protocol it's built on.