CentOS 8 Unable to create Cluster name - cluster-computing

I have run many commands trying to create the cluster name with the pcs command, but I keep hitting the error below. I am unable to create the cluster name using pcs; my goal is to set up HA with Pacemaker.
Error: At least 1 and at most 8 addresses must be specified for a node, 0 addresses specified for node 'OTRS_TESTING_2'
Error: All nodes must have the same number of addresses; node 'OTRS_TESTING_1' has 1 address; node 'OTRS_TESTING_2' has 0 addresses
Error: Errors have occurred, therefore pcs is unable to continue
Please help me out; how can I get out of this situation?
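The error suggests pcs could not determine an address for node 'OTRS_TESTING_2'. As a rough sketch (assuming the pcs 0.10 syntax that ships with CentOS 8, a hypothetical cluster name my_cluster, and hypothetical node IPs 192.168.1.11 and 192.168.1.12), each node can be given an explicit addr=, or both hostnames made resolvable via /etc/hosts, before running setup:

pcs host auth OTRS_TESTING_1 addr=192.168.1.11 OTRS_TESTING_2 addr=192.168.1.12
pcs cluster setup my_cluster OTRS_TESTING_1 addr=192.168.1.11 OTRS_TESTING_2 addr=192.168.1.12

Every node has to end up with the same number of addresses, which is what the second error message is checking.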

Related

how to solve error CRS-0223 resource has placement error in oracle RAC?

I have an error in Oracle Database 11g Real Application Clusters; I have 2 nodes, node1 and node2.
When I checked the services, I found the instance on node2 is not running:
> srvctl status database -d db
instance inst1 is running on node node1
instance inst2 is not running on node2
When I checked the resources, some of them were offline:
>crs_stat -t
ora.node2.gsd target=offline state=offline
ora.node2.ASM2.asm state=offline
ora.node2.inst2 state=offline
I tried to start the services by using the following command:
>crs_start ora.node2.gsd
but I always get this error:
crs-0223 : resource has placement error
How can I solve this error and start up the instance on node 2?
I restarted the servers and everything is now working fine without errors.
I think the cluster connection between the nodes was lost and the nodes needed to be restarted.
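For a similar single-node outage, a minimal check-and-restart sequence on the affected node (a sketch assuming Oracle Clusterware 11g and root access) would be something like:

crsctl check crs    # confirm whether the CRS, CSS and EVM daemons are up
crsctl stop crs     # stop Clusterware on node2 only
crsctl start crs    # start it again so the ora.node2.* resources can come back online
crs_stat -t         # verify the resources are ONLINE again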

Hadoop Data node IP isn't a real VM

I'm currently running a Hadoop setup with a NameNode (master-node - 10.0.1.86) and a DataNode (node1 - 10.0.1.85) using two CentOS VMs.
When I run a hive query that starts a mapReduce job, I get the following error:
"Application application_1515705541639_0001 failed 2 times due to
Error launching appattempt_1515705541639_0001_000002. Got exception:
java.net.NoRouteToHostException: No Route to Host from
localhost.localdomain/127.0.0.1 to 10.0.2.62:48955 failed on socket
timeout exception: java.net.NoRouteToHostException: No route to host;
For more details see: http://wiki.apache.org/hadoop/NoRouteToHost"
Where on earth is this IP 10.0.2.62 coming from? It does not exist on my network; you cannot reach it by ping or telnet.
I have gone through all my config files on both master-node and node1 and I cannot find where it is picking up this IP. I've stopped and started both HDFS and YARN and rebooted both VMs. Both /etc/hosts files are as they should be. Any general direction on where to look next would be appreciated; I am stumped!
I didn't have any luck discovering where this rogue IP was coming from. I ended up assigning the VM the IP address that the master-node was looking for, and sure enough everything works fine.
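For anyone else chasing a stray address like this, a quick diagnostic sketch (assuming a stock Hadoop layout; the config directory below is the usual one and may differ per install) is to grep the whole configuration tree for the address and ask the daemons which nodes they have registered:

grep -r "10.0.2.62" /etc/hosts $HADOOP_HOME/etc/hadoop
yarn node -list -all    # node manager addresses known to the ResourceManager
hdfs dfsadmin -report   # datanode addresses known to the NameNode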

Err: Node 127.0.0.1:6379 is not configured as a cluster node while creating Redis Cluster

I am creating Redis cluster by running following command
redis-trib.rb create --replicas 1 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
I have already created six Redis instances running on the same server on different ports, i.e. 6379, 6380, 6381, 6382, 6383 and 6384 respectively.
Now, while executing the above command, I am getting the error that Node 127.0.0.1:6379 is not configured as a cluster node.
I have also changed the configuration in the redis.windows-service.conf file for the following keys:
cluster-enabled yes
appendonly yes
The Windows service for all six nodes is also up and running.
I found some discussion here https://groups.google.com/forum/#!topic/redis-db/7PCu4-pnt9s regarding a similar issue, but with no luck.
Does anyone have an idea what the issue is?
Finally, I found the solution myself through some troubleshooting. The problem was that not all of the Redis hash slots were covered. Deleting the log files and executing the command to cover the slots resolved the problem.
Helpful link : http://redis.io/topics/cluster-tutorial
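A hedged sketch of the check-and-fix step (using the same redis-trib.rb as above; on newer Redis versions the equivalent lives under redis-cli --cluster):

redis-trib.rb check 127.0.0.1:6379   # reports any of the 16384 hash slots that are not covered
redis-trib.rb fix 127.0.0.1:6379     # reassigns uncovered slots so every slot is served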

Percona Xtradb Cluster nodes won't start

I set up percona_xtradb_cluster-56 with three nodes in the cluster. To bootstrap the cluster on the first node, I use the following command and it starts just fine:
#/etc/init.d/mysql bootstrap-pxc
The other two nodes, however, fail to start when I start them normally using the command:
#/etc/init.d/mysql start
The error I am getting is "The server quit without updating the PID file". The error log contains this message:
Error in my_thread_global_end(): 1 threads didn't exit
150605 22:10:29 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended.
The cluster nodes are all running Ubuntu 14.04. When I use percona-xtradb-cluster-5.5, the cluster and all the nodes run just fine as expected. But I need to use version 5.6 because I am also using GTID, which is only available in version 5.6 and not supported in earlier versions.
I was following these two percona documentation to setup the cluster:
https://www.percona.com/doc/percona-xtradb-cluster/5.6/installation.html#installation
https://www.percona.com/doc/percona-xtradb-cluster/5.6/howtos/ubuntu_howto.html
Any insight or suggestions on how to resolve this issue would be highly appreciated.
The problem is related to memory, as "The Georgia" writes. There should be at least 500 MB available for the default setup and bootstrapping. See here: http://sysadm.pp.ua/linux/px-cluster.html
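To confirm the memory theory on a failing node, a minimal sketch with standard Linux tools (the 1 GB swap file size is just an illustrative value) is to check free memory and, if it is tight, add swap before retrying /etc/init.d/mysql start:

free -m
dd if=/dev/zero of=/swapfile bs=1M count=1024   # create a 1 GB file to use as swap
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile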

Can't replace dead Cassandra node because it doesn't exist in gossip

One of the nodes in a cassandra cluster has died.
I'm using cassandra 2.0.7 throughout.
When I do a nodetool status this is what I see (real addresses have been replaced with fake 10 nets)
[root@beta-new:/opt] #nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.10.1.94 171.02 KB 256 49.4% fd2f76ae-8dcf-4e93-a37f-bf1e9088696e rack1
DN 10.10.1.98 ? 256 50.6% f2a48fc7-a362-43f5-9061-4bb3739fdeaf rack1
I tried to get the token ID for the down node by doing a nodetool ring command, grepping for the IP and doing a head -1 to get the initial one.
[root@beta-new:/opt] #nodetool ring | grep 10.10.1.98 | head -1
10.10.1.98 rack1 Down Normal ? 50.59% -9042969066862165996
I then started following this documentation on how to replace the node:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_replace_node_t.html?scroll=task_ds_aks_15q_gk
So I installed cassandra on a new node but did not start it.
Set the following options:
cluster_name: 'Jokefire Cluster'
seed_provider:
- seeds: "10.10.1.94"
listen_address: 10.10.1.94
endpoint_snitch: SimpleSnitch
And set the initial token of the new install in cassandra.yaml to the token -1 of the node I'm trying to replace:
initial_token: -9042969066862165995
And after making sure there was no data yet in:
/var/lib/cassandra
I started up the database:
[root@web2:/etc/alternatives/cassandrahome] #./bin/cassandra -f -Dcassandra.replace_address=10.10.1.98
The documentation I link to above says to use the replace_address directive on the command line rather than cassandra-env.sh if you have a tarball install (which we do) as opposed to a package install.
After I start it up, cassandra fails with the following message:
Exception encountered during startup: Cannot replace_address /10.10.10.98 because it doesn't exist in gossip
So I'm wondering at this point if I've missed any steps or if there is anything else I can try to replace this dead cassandra node?
Has the rest of your cluster been restarted since the node failure, by chance? Most gossip information does not survive a full restart, so you may genuinely not have gossip information for the down node.
This issue was reported as a bug CASSANDRA-8138, and the answer was:
I think I'd much rather say that the edge case of a node dying, and then a full cluster restart (rolling would still work) is just not supported, rather than make such invasive changes to support replacement under such strange and rare conditions. If that happens, it's time to assassinate the node and bootstrap another one.
So rather than replacing your node, you need to remove the failed node from the cluster and start up a new one. If using vnodes, it's quite straightforward.
Discover the node ID of the failed node (from another node in the cluster)
nodetool status | grep DN
And remove it from the cluster:
nodetool removenode (node ID)
Now you can clear out the data directory of the failed node, and bootstrap it as a brand-new one.
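Putting those steps together, a rough sketch (the Host ID is the one shown for the DN node in the nodetool status output above, and the paths assume the default /var/lib/cassandra layout of this tarball install; adjust for your environment):

nodetool removenode f2a48fc7-a362-43f5-9061-4bb3739fdeaf   # run on any live node, using the dead node's Host ID
rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches   # on the replacement host
./bin/cassandra   # start the replacement and let it bootstrap as a brand-new node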
Some lesser-known issues with Cassandra dead-node replacement are captured in the link below, based on my experience:
https://github.com/laxmikant99/cassandra-single-node-disater-recovery-lessons
