Issue with adding a new member to an etcd cluster - etcd

I have a 3-node etcd cluster running on Docker.
Node1:
etcd-advertise-client-urls: "http://sensu-backend1:2379"
etcd-initial-advertise-peer-urls: "http://sensu-backend3:2380"
etcd-initial-cluster: "sensu-backend1=http://sensu-backend1:2380,sensu-backend2=http://sensu-backend2:2380,sensu-backend3=http://sensu-backend3:2380"
etcd-initial-cluster-state: "new" # new or existing
etcd-listen-client-urls: "http://0.0.0.0:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-name: "sensu-backend1"
Node2:
etcd-advertise-client-urls: "http://sensu-backend2:2379"
etcd-initial-advertise-peer-urls: "http://sensu-backend3:2380"
etcd-initial-cluster: "sensu-backend1=http://sensu-backend1:2380,sensu-backend2=http://sensu-backend2:2380,sensu-backend3=http://sensu-backend3:2380"
etcd-initial-cluster-state: "new" # new or existing
etcd-listen-client-urls: "http://0.0.0.0:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-name: "sensu-backend2"```
Node3:
etcd-advertise-client-urls: "http://sensu-backend3:2379"
etcd-initial-advertise-peer-urls: "http://sensu-backend3:2380"
etcd-initial-cluster: "sensu-backend1=http://sensu-backend1:2380,sensu-backend2=http://sensu-backend2:2380,sensu-backend3=http://sensu-backend3:2380"
etcd-initial-cluster-state: "new" # new or existing
etcd-listen-client-urls: "http://0.0.0.0:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-name: "sensu-backend3"
I am running each node as a Docker service without persisting the etcd data directory.
When I start all the nodes together, etcd forms the cluster.
If I delete one node and try to add it back with etcd-initial-cluster-state: "existing", I get the following error:
{"component":"etcd","level":"fatal","msg":"tocommit(6264) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?","pkg":"raft","time":"2020-12-09T11:32:55Z"}
After stopping etcd, I deleted the node from the cluster using etcdctl member remove. When I restart the container with an empty etcd data directory, I get a cluster ID mismatch error:
{"component":"backend","error":"error starting etcd: error validating peerURLs {ClusterID:4bccd6f485bb66f5 Members:[\u0026{ID:2ea5b7e4c09185e2 RaftAttributes:{PeerURLs:[http://sensu-backend1:2380]} Attributes:{Name:sensu-backend1 ClientURLs:[http://sensu-backend1:2379]}} \u0026{ID:9e83e7f64749072d RaftAttributes:{PeerURLs:[http://sensu-backend2:2380]} Attributes:{Name:sensu-backend2 ClientURLs:[http://sensu-backend2:2379]}}] RemovedMemberIDs:[]}: member count is unequal"}
Please help me fix this issue.

If you delete a node that was part of a cluster, you must also remove it from the etcd cluster manually, i.e. with etcdctl member remove <member-id>.
The "member count is unequal" error occurs because etcd-initial-cluster still contains entries for all 3 nodes; you also need to remove the deleted node's entry from that field in all containers.
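A hedged sketch of one common remove-and-re-add sequence, assuming the v3 etcdctl is available and pointed at a surviving member; the member ID below is a placeholder:
# 1. remove the dead member from the cluster metadata
etcdctl --endpoints=http://sensu-backend1:2379 member list
etcdctl --endpoints=http://sensu-backend1:2379 member remove <id-of-sensu-backend3>
# 2. register the replacement before starting it
etcdctl --endpoints=http://sensu-backend1:2379 member add sensu-backend3 --peer-urls=http://sensu-backend3:2380
# 3. start sensu-backend3 with an empty data directory, etcd-initial-cluster-state: "existing",
#    and an etcd-initial-cluster that matches the membership reported by `member list`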

Related

etcd v2: etcd-server is healthy but etcd-events is not joining ("cluster ID mismatch" and "unmatched member while checking PeerURLs" errors)

I have a legacy Kubernetes cluster running etcd v2 with 3 masters (etcd-a, etcd-b, etcd-c). We attempted an upgrade to etcd v3, but this broke the first master (etcd-a) and it was no longer able to join the cluster. After some time I was able to restore it:
removed etcd-a from the etcd cluster with etcdctl member rm
added a new etcd-a1 with a clean state and added it to the cluster with etcdctl member add
started kubelet with ETCD_INITIAL_CLUSTER_STATE set to existing, then started protokube (a sketch of these commands is shown below). At this point the master is able to join the cluster.
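A hedged sketch of that replacement sequence, using the etcd v2 etcdctl syntax; the member ID is a placeholder and the peer URL mirrors the naming used in this question:
# on a healthy master: drop the broken member, then register the replacement
etcdctl member remove <id-of-etcd-a>
etcdctl member add etcd-a1 http://etcd-a1.internal.mydomain.com:2380
# on the new master: start etcd with a clean data dir, ETCD_INITIAL_CLUSTER_STATE=existing,
# and ETCD_INITIAL_CLUSTER listing etcd-a1, etcd-b and etcd-c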
At the beginning I thought the cluster was healthy:
/ # etcdctl member list
a4***b2: name=etcd-c peerURLs=http://etcd-c.internal.mydomain.com:2380 clientURLs=http://etcd-c.internal.mydomain.com:4001
cf***97: name=etcd-a1 peerURLs=http://etcd-a1.internal.mydomain.com:2380 clientURLs=http://etcd-a1.internal.mydomain.com:4001
d3***59: name=etcd-b peerURLs=http://etcd-b.internal.mydomain.com:2380 clientURLs=http://etcd-b.internal.mydomain.com:4001
/ # etcdctl cluster-health
member a4***b2 is healthy: got healthy result from http://etcd-c.internal.mydomain.com:4001
member cf***97 is healthy: got healthy result from http://etcd-a1.internal.mydomain.com:4001
member d3***59 is healthy: got healthy result from http://etcd-b.internal.mydomain.com:4001
cluster is healthy
Yet the status of etcd-events is not great; etcd-events for a1 is not running:
etcd-server-events-ip-a1 0/1 CrashLoopBackOff 430
etcd-server-events-ip-b 1/1 Running 3
etcd-server-events-ip-c 1/1 Running 0
Logs from etcd-events-a1:
flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=http://etcd-events-a1.internal.mydomain.com:4002
flags: recognized and used environment variable ETCD_DATA_DIR=/var/etcd/data-events
flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=http://etcd-events-a1.internal.mydomain.com:2381
flags: recognized and used environment variable ETCD_INITIAL_CLUSTER=etcd-events-a1=http://etcd-events-a1.internal.mydomain.com:2381,etcd-events-b=http://etcd-events-b.internal.mydomain.com:2381,etcd-events-c=http://etcd-events-c.internal.mydomain.com:2381
flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_STATE=existing
flags: recognized and used environment variable ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-token-etcd-events
flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:4002
flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2381
flags: recognized and used environment variable ETCD_NAME=etcd-events-a1
etcdmain: etcd Version: 2.2.1
etcdmain: Git SHA: 75f8282
etcdmain: Go Version: go1.5.1
etcdmain: Go OS/Arch: linux/amd64
etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
etcdmain: the server is already initialized as member before, starting as etcd member...
etcdmain: listening for peers on http://0.0.0.0:2381
etcdmain: listening for client requests on http://0.0.0.0:4002
netutil: resolving etcd-events-b.internal.mydomain.com:2381 to 10.15.***:2381
netutil: resolving etcd-events-a1.internal.mydomain.com:2381 to 10.15.***:2381
etcdmain: stopping listening for client requests on http://0.0.0.0:4002
etcdmain: stopping listening for peers on http://0.0.0.0:2381
etcdmain: error validating peerURLs {ClusterID:5a***b3 Members:[&{ID:a7***32 RaftAttributes:{PeerURLs:[http://etcd-events-b.internal.mydomain.com:2381]} Attributes:{Name:etcd-events-b ClientURLs:[http://etcd-events-b.internal.mydomain.com:4002]}} &{ID:cc***b3 RaftAttributes:{PeerURLs:[https://etcd-events-a.internal.mydomain.com:2381]} Attributes:{Name:etcd-events-a ClientURLs:[https://etcd-events-a.internal.mydomain.com:4002]}} &{ID:7f***2ca RaftAttributes:{PeerURLs:[http://etcd-events-c.internal.mydomain.com:2381]} Attributes:{Name:etcd-events-c ClientURLs:[http://etcd-events-c.internal.mydomain.com:4002]}}] RemovedMemberIDs:[]}: unmatched member while checking PeerURLs
# restart
...
etcdserver: restarting member eb***3a in cluster 96***07 at commit index 3
raft: eb***a3a became follower at term 12407
raft: newRaft eb***3a [peers: [], term: 12407, commit: 3, applied: 0, lastindex: 3, lastterm: 1]
etcdserver: starting server... [version: 2.2.1, cluster version: to_be_decided]
etcdserver: added local member eb***3a [http://etcd-events-a1.internal.mydomain.com:2381] to cluster 96***07
etcdserver: added member 7f***ca [http://etcd-events-c.internal.mydomain.com:2381] to cluster 96***07
rafthttp: request sent was ignored (cluster ID mismatch: remote[7f***ca]=5a***b3, local=96***07)
rafthttp: request sent was ignored (cluster ID mismatch: remote[7f***ca]=5a***3, local=96***07)
rafthttp: failed to dial 7f***ca on stream Message (cluster ID mismatch)
rafthttp: failed to dial 7f***ca on stream MsgApp v2 (cluster ID mismatch)
etcdserver: added member a7***32 [http://etcd-events-b.internal.mydomain.com:2381] to cluster 96***07
rafthttp: request sent was ignored (cluster ID mismatch: remote[a7***32]=5a***b3, local=96***07)
rafthttp: failed to dial a7***32 on stream MsgApp v2 (cluster ID mismatch)
...
rafthttp: request sent was ignored (cluster ID mismatch: remote[a7***32]=5a***b3, local=96***07)
osutil: received terminated signal, shutting down...
etcdserver: aborting publish because server is stopped
Logs from etcd-events-b:
rafthttp: streaming request ignored (cluster ID mismatch got 96***07 want 5a***b3)
rafthttp: the connection to peer cc***b3 is unhealthy
Logs from etcd-events-c:
etcdserver: failed to reach the peerURL(https://etcd-events-a.internal.mydomain.com:2381) of member cc***b3 (Get https://etcd-events-a.internal.mydomain.com:2381/version: dial tcp 10.15.131.7:2381: i/o timeout)
etcdserver: cannot get the version of member cc***b3 (Get https://etcd-events-a.internal.mydomain.com:2381/version: dial tcp 10.15.131.7:2381: i/o timeout)
From the log I saw two problems:
etcd-events on a1 seems to ignore the existing cluster (hence the cluster IDs don't match).
the other nodes (b and c) still somehow remember the removed old node a.
I'm short of ideas on how to fix this. Any suggestion?
Thanks!
If you tried to upgrade from etcd2 and did not restart all the masters at the same time, the upgrade will fail.
Make sure you read through https://kops.sigs.k8s.io/etcd3-migration/
I also strongly recommend using the latest possible version of kOps as there are quite a few migration bugs fixed along the way.
There may be multiple reasons why the cluster ID changed, but if I remember correctly, replacing members like that was never really supported and with etcd2 your options are limited. Trying to get to etcd-manager and etcdv3 may be the best way to get your cluster in a working state again.
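For the second symptom (b and c still listing the removed etcd-events-a), the etcd v2 members API can show what each events node currently believes the membership is. This is only a hedged diagnostic sketch; the member ID is a placeholder:
# list members as seen by a healthy etcd-events node (v2 API, client port 4002)
curl http://etcd-events-b.internal.mydomain.com:4002/v2/members
# a stale entry can be removed by ID, after which a1's data dir (/var/etcd/data-events)
# should be wiped before rejoining with ETCD_INITIAL_CLUSTER_STATE=existing
curl -X DELETE http://etcd-events-b.internal.mydomain.com:4002/v2/members/<stale-member-id>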

Exception : No alive nodes found in your cluster

I have an issue with Elasticsearch: when I run the command php artisan index:ambassadors inside Docker, it gives me this exception.
Exception : No alive nodes found in your cluster
Here is my output.
Exception : No alive nodes found in your cluster
412/4119 [▓▓░░░░░░░░░░░░░░░░░░░░░░░░░░] 10%Exception : No alive nodes found in your cluster
824/4119 [▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░░░░] 20%Exception : No alive nodes found in your cluster
1236/4119 [▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░] 30%Exception : No alive nodes found in your cluster
1648/4119 [▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░] 40%Exception : No alive nodes found in your cluster
2472/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░] 60%Exception : No alive nodes found in your cluster
2884/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░] 70%Exception : No alive nodes found in your cluster
3296/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░] 80%Exception : No alive nodes found in your cluster
3997/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░] 97%Exception : No alive nodes found in your cluster
4119/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓] 100%Exception : No alive nodes found in your cluster
I also have this message in my Elasticsearch container logs:
Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
/usr/share/elasticsearch/config/log4j2.properties.
Has anyone faced this issue before?
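This exception comes from the Elasticsearch PHP client when none of its configured hosts can be reached. A hedged first check from inside the application container, assuming the containers share a Docker network and the Elasticsearch service is named elasticsearch (both assumptions; pointing the client at "localhost" inside a container is a common cause of this error):
# run on the Docker host; <app-container> is a placeholder for the Laravel container name
docker exec -it <app-container> curl -s http://elasticsearch:9200
# if this fails, point the client at the Elasticsearch service name (or container IP)
# instead of localhost, and confirm both containers are on the same Docker network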

Recovering from Consul "No Cluster leader" state

I have:
one mesos-master in which I configured a consul server;
one mesos-slave in which I configured a consul client, and
one bootstrap server for consul.
When I hit start, I see the following errors:
2016/04/21 19:31:31 [ERR] agent: failed to sync remote state: rpc error: No cluster leader
2016/04/21 19:31:44 [ERR] agent: coordinate update error: rpc error: No cluster leader
How do I recover from this state?
Did you look at the Consul docs?
It looks like you performed an ungraceful stop and now need to clean your raft/peers.json file by removing all entries there to perform an outage recovery. See the Consul docs for more details.
As of Consul 0.7 things work differently from Keyan P's answer. raft/peers.json (in the Consul data dir) has become a manual recovery mechanism. It doesn't exist unless you create it, and then when Consul starts it loads the file and deletes it from the filesystem so it won't be read on future starts. There are instructions in raft/peers.info. Note that if you delete raft/peers.info it won't read raft/peers.json but it will delete it anyway, and it will recreate raft/peers.info. The log will indicate when it's reading and deleting the file separately.
Assuming you've already tried the bootstrap or bootstrap_expect settings, that file might help. The Outage Recovery guide in Keyan P's answer is a helpful link. You create raft/peers.json in the data dir and start Consul, and the log should indicate that it's reading/deleting the file and then it should say something like "cluster leadership acquired". The file contents are:
[ { "id": "<node-id>", "address": "<node-ip>:8300", "non_voter": false } ]
where <node-id> can be found in the node-id file in the data dir.
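A minimal sketch of that recovery, assuming the Consul data dir is /opt/consul/data (the path used later in this thread) and a single-server outage; the 10.0.0.5 address is illustrative:
sudo service consul stop
NODE_ID=$(cat /opt/consul/data/node-id)
cat > /opt/consul/data/raft/peers.json <<EOF
[ { "id": "${NODE_ID}", "address": "10.0.0.5:8300", "non_voter": false } ]
EOF
sudo service consul start
# the log should show the file being read and deleted, then "cluster leadership acquired"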
If your raft protocol version is greater than 2, the format is:
[
  {
    "id": "e3a30829-9849-bad7-32bc-11be85a49200",
    "address": "10.88.0.59:8300",
    "non_voter": false
  },
  {
    "id": "326d7d5c-1c78-7d38-a306-e65988d5e9a3",
    "address": "10.88.0.45:8300",
    "non_voter": false
  },
  {
    "id": "a8d60750-4b33-99d7-1185-b3c6d7458d4f",
    "address": "10.233.103.119",
    "non_voter": false
  }
]
In my case I had 2 worker nodes in the k8s cluster; after adding another node, the Consul servers could elect a leader and everything was up and running.
I will describe what I did:
A little background: we scaled down the AWS Auto Scaling group and so lost the leader, but one server was still running, just without a leader.
What I did was:
I scaled up to 3 servers (don't use 2 or 4)
stopped Consul on all 3 servers: sudo service consul stop (you can do status/stop/start)
created the peers.json file and put it on the old server (/opt/consul/data/raft)
started the 3 servers (peers.json should be placed on 1 server only)
joined the other 2 servers to the leader using consul join 10.201.8.XXX
checked that the peers are connected to the leader using consul operator raft list-peers
Sample peers.json file:
[
  {
    "id": "306efa34-1c9c-acff-1226-538vvvvvv",
    "address": "10.201.n.vvv:8300",
    "non_voter": false
  },
  {
    "id": "dbeeffce-c93e-8678-de97-b7",
    "address": "10.201.X.XXX:8300",
    "non_voter": false
  },
  {
    "id": "62d77513-e016-946b-e9bf-0149",
    "address": "10.201.X.XXX:8300",
    "non_voter": false
  }
]
You can get these IDs from each server in /opt/consul/data/:
[root@ip-10-20 data]# ls
checkpoint-signature node-id raft serf
[root@ip-10-1 data]# cat node-id
Some useful commands:
consul members
curl http://ip:8500/v1/status/peers
curl http://ip:8500/v1/status/leader
consul operator raft list-peers
cd /opt/consul/data/raft/
consul info
sudo service consul status
consul catalog services
You may also ensure that the bootstrap parameter is set in your Consul configuration file config.json on the first node:
# /etc/consul/config.json
{
"bootstrap": true,
...
}
or start the consul agent with the -bootstrap=1 option as described in the official Failure of a single server cluster Consul documentation.
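A minimal sketch of that variant, assuming a single-server setup; the data dir and bind address are illustrative:
consul agent -server -bootstrap -data-dir=/opt/consul/data -bind=10.0.0.5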

PredictionIO elasticsearch demo causing error

I ran "pio app new tapster" and cannot get past this error. My elasticsearch.yml file appears. My network is correctly set. But, don't know how to get based this.
[WARN] [transport] [Portal] node [#transport#-1][Jeremys-MacBook-Pro.local][inet[localhost/127.0.0.1:9300]] not part of the cluster Cluster [elasticsearch], ignoring...
Exception in thread "main" org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:278)
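The WARN line is what the Elasticsearch transport client prints when a node's cluster name does not match the cluster name the client expects ("elasticsearch" here). A hedged check, assuming Elasticsearch is listening locally on port 9200:
# compare "cluster_name" in the response with the cluster name PredictionIO is configured to use
curl -s http://localhost:9200
# then make the names match, e.g. cluster.name in elasticsearch.yml and the corresponding
# Elasticsearch cluster-name setting in pio-env.sh (the exact pio-env.sh key is an assumption)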

Rerun a blacklisted Hadoop node without stopping the running job

Is there any way to unblacklist a Hadoop node while the job is running?
I tried restarting the DataNode but it didn't work.
This happens after four failures at slave1 with this error:
Error initializing attempt_201311231755_0030_m_000000_0:
java.io.IOException: Expecting a line not the end of stream
at org.apache.hadoop.fs.DF.parseExecResult(DF.java:109)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:179)
at org.apache.hadoop.util.Shell.run(Shell.java:134)
at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:306)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:108)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:776)
at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1664)
at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:97)
at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1629)
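The stack trace comes from org.apache.hadoop.fs.DF, which shells out to df to check free space in the task-local directories, and blacklisting in Hadoop 1.x happens per TaskTracker rather than per DataNode. A hedged sketch of first steps on slave1; the local dir path is an assumption, take it from mapred.local.dir in mapred-site.xml:
# verify df itself works and the local dir's disk is mounted and not full
df -k /path/to/mapred/local
# restarting the TaskTracker (not the DataNode) is the usual way to get the node
# scheduling tasks again once the underlying problem is fixed
hadoop-daemon.sh stop tasktracker
hadoop-daemon.sh start tasktracker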
