Exception : No alive nodes found in your cluster - Laravel

I have an issue with Elasticsearch: when I run the command php artisan index:ambassadors inside Docker, it gives me this exception.
**Exception : No alive nodes found in your cluster**
Here is my output.
Exception : No alive nodes found in your cluster
412/4119 [▓▓░░░░░░░░░░░░░░░░░░░░░░░░░░] 10%Exception : No alive nodes found in your cluster
824/4119 [▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░░░░] 20%Exception : No alive nodes found in your cluster
1236/4119 [▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░] 30%Exception : No alive nodes found in your cluster
1648/4119 [▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░] 40%Exception : No alive nodes found in your cluster
2472/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░] 60%Exception : No alive nodes found in your cluster
2884/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░] 70%Exception : No alive nodes found in your cluster
3296/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░] 80%Exception : No alive nodes found in your cluster
3997/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░] 97%Exception : No alive nodes found in your cluster
4119/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓] 100%Exception : No alive nodes found in your cluster
I also have this message in my Elasticsearch container logs.
Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
/usr/share/elasticsearch/config/log4j2.properties.
Has anyone faced this issue before?
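A generic connectivity check is worth doing first (assuming the Elasticsearch container is reachable from the app container under a Compose service name such as elasticsearch; substitute your actual service name): inside a container, localhost refers to the container itself, so the Laravel Elasticsearch host configuration must point at the service name rather than 127.0.0.1.
# run from inside the Laravel/PHP container
curl -s http://elasticsearch:9200
# if this fails while the same URL works from the host, point the
# client's host configuration at the service name instead of localhost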

Related

Issue with adding a new member to an etcd cluster

I have a 3-node etcd cluster running on Docker.
Node1:
etcd-advertise-client-urls: "http://sensu-backend1:2379"
etcd-initial-advertise-peer-urls: "http://sensu-backend3:2380"
etcd-initial-cluster: "sensu-backend1=http://sensu-backend1:2380,sensu-backend2=http://sensu-backend2:2380,sensu-backend3=http://sensu-backend3:2380"
etcd-initial-cluster-state: "new" # new or existing
etcd-listen-client-urls: "http://0.0.0.0:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-name: "sensu-backend1"
Node2:
etcd-advertise-client-urls: "http://sensu-backend2:2379"
etcd-initial-advertise-peer-urls: "http://sensu-backend3:2380"
etcd-initial-cluster: "sensu-backend1=http://sensu-backend1:2380,sensu-backend2=http://sensu-backend2:2380,sensu-backend3=http://sensu-backend3:2380"
etcd-initial-cluster-state: "new" # new or existing
etcd-listen-client-urls: "http://0.0.0.0:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-name: "sensu-backend2"```
Node3:
etcd-advertise-client-urls: "http://sensu-backend3:2379"
etcd-initial-advertise-peer-urls: "http://sensu-backend3:2380"
etcd-initial-cluster: "sensu-backend1=http://sensu-backend1:2380,sensu-backend2=http://sensu-backend2:2380,sensu-backend3=http://sensu-backend3:2380"
etcd-initial-cluster-state: "new" # new or existing
etcd-listen-client-urls: "http://0.0.0.0:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-name: "sensu-backend3"
I am running each node as a Docker service without persisting the etcd data directory.
When I start all the nodes together, etcd forms the cluster.
If I delete one node and try to add it back with etcd-initial-cluster-state: "existing", I get the following error:
{"component":"etcd","level":"fatal","msg":"tocommit(6264) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?","pkg":"raft","time":"2020-12-09T11:32:55Z"}
After stopping etcd, I deleted the node from the cluster using etcdctl member remove . When I restart the container with an empty etcd data directory, I get a cluster ID mismatch error.
{"component":"backend","error":"error starting etcd: error validating peerURLs {ClusterID:4bccd6f485bb66f5 Members:[\u0026{ID:2ea5b7e4c09185e2 RaftAttributes:{PeerURLs:[http://sensu-backend1:2380]} Attributes:{Name:sensu-backend1 ClientURLs:[http://sensu-backend1:2379]}} \u0026{ID:9e83e7f64749072d RaftAttributes:{PeerURLs:[http://sensu-backend2:2380]} Attributes:{Name:sensu-backend2 ClientURLs:[http://sensu-backend2:2379]}}] RemovedMemberIDs:[]}: member count is unequal"}
Please help me fix this issue.
If you delete a node that was part of a cluster, you should also manually remove it from the etcd cluster, i.e. with 'etcdctl member remove '.
The member count mismatch error is because 'etcd-initial-cluster' still lists all 3 nodes; you need to remove the deleted node's entry from this field in all containers as well.
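For reference, a rough sketch of the remove-and-rejoin flow with etcdctl v3 (the member ID placeholder and the sensu-backend3 URLs are only illustrative, taken from the configs above; adjust the endpoints to your setup):
# list members and note the hex ID of the node being replaced
etcdctl --endpoints=http://sensu-backend1:2379 member list
# remove the dead member from the cluster
etcdctl --endpoints=http://sensu-backend1:2379 member remove <member-id>
# announce the replacement before starting it
etcdctl --endpoints=http://sensu-backend1:2379 member add sensu-backend3 --peer-urls=http://sensu-backend3:2380
# then start the replacement with an empty data dir and
# etcd-initial-cluster-state: "existing" so it joins instead of bootstrapping a new cluster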

How can I know which nodes in a cluster are actual master nodes?

I use ES 2.2.0 and have a cluster of nodes. I would like to know which node or nodes are the actual masters. How can I do that?
I tried the following ways:
curl http://my_computer:9200/_cluster/state?pretty
curl http://my_computer:9200/_nodes?pretty
and I was unable to find which node is master.
There is only ever a single master in a cluster, chosen from among the set of master-eligible nodes.
You can either run the /_cat/master command or the /_cat/nodes command.
The former will yield something like this
% curl 'localhost:9200/_cat/master?v'
id ip node
Ntgn2DcuTjGuXlhKDUD4vA 192.168.56.30 Solarr
and the latter command will yield the list of nodes with the master column (m for short). Nodes with m are master-eligible nodes and the one with the * is the current master.
% curl '192.168.56.10:9200/_cat/nodes?v&h=id,ip,port,v,m'
id ip port version m
pLSN 192.168.56.30 9300 2.2.0 m
k0zy 192.168.56.10 9300 2.2.0 m
6Tyi 192.168.56.20 9300 2.2.0 *
It isn't nodes that are primary, but shards; see https://www.elastic.co/guide/en/elasticsearch/reference/2.2/cat-shards.html
You can try something like: http://my_computer:9200/_cat/shards?v
As of Elasticsearch 6.6, this is how you can get the id of the master_node:
curl -X GET "192.168.0.1:9200/_cluster/state/master_node?pretty"
{
"cluster_name" : "logbox",
"compressed_size_in_bytes" : 11150,
"cluster_uuid" : "eSpyTgXbTJirTjWtPW_HYQ",
"master_node" : "R8Gn9Km0T92H9D7TXGpX4k"
}
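If you want the node's name and address rather than just the id, that id can be fed to the nodes info API (the id below is simply the one from the example output above; substitute your own):
curl -X GET "192.168.0.1:9200/_nodes/R8Gn9Km0T92H9D7TXGpX4k?pretty"
The response includes the name, host, and roles of that node.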

PredictionIO Elasticsearch demo causing error

I ran "pio app new tapster" and cannot get past this error. My elasticsearch.yml file appears. My network is correctly set. But, don't know how to get based this.
[WARN] [transport] [Portal] node [#transport#-1][Jeremys-MacBook-Pro.local][inet[localhost/127.0.0.1:9300]] not part of the cluster Cluster [elasticsearch], ignoring...
Exception in thread "main" org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:278)
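The warning itself is the clue: the transport client is told the node at 127.0.0.1:9300 does not belong to the cluster it expects, which usually means the cluster name the client is configured with does not match cluster.name in elasticsearch.yml. As a generic check (not specific to PredictionIO's wiring), compare what the server actually reports:
curl 'localhost:9200/?pretty'
# compare the "cluster_name" field in the response with the cluster name
# the transport client is configured to join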

Cassandra adding node: OpsCenter agent not connected

I am using DataStax Community Edition on two Windows PCs (64-bit and 32-bit respectively). After setting the initial configuration in cassandra.yaml, the OpsCenter web interface shows "1 of 2 agents connected" and recommends installing the OpsCenter agent.
Node 1 (ip: X.X.X.X) configuration:
Cluster name: Test Center
seeds: Y.Y.Y.Y
listen address:
rpc_address: 0.0.0.0
endpoint_snitch: SimpleSnitch
num_tokens: 256
Node 2 (ip: Y.Y.Y.Y) configuration:
Cluster name: Test Center
seeds: X.X.X.X
listen address:
rpc_address: 0.0.0.0
endpoint_snitch: SimpleSnitch
num_tokens: 256
By default the auto_bootstrap attribute was absent, so I didn't add it. As per the instructions I first stopped the services, changed these settings, and then started them again.
Q1. Are there any settings I'm missing?
Thanks for your kind help.
Edited: From the X.X.X.X node, the status of the Y.Y.Y.Y node
You need to configure the datastax-agents so they know what machine OpsCenter is running on.
To do this you will need to edit the following line in address.yaml located in C:\Program Files\DataStax Community\opscenter\agent\conf.
stomp_interface:
If X.X.X.X is your opscenterd machine:
set stomp_interface: X.X.X.X for all nodes.
You have made a mistake with the seeds. If these 2 nodes are part of the same cluster (and you've indicated that they both have the same name, "Test Center"), then the seeds should be the same, not different. Set seeds: Y.Y.Y.Y on both nodes. Shut down both nodes, start the seed node first, and once it is up start the other node; the second node will get its settings from the seed.
listen_address shouldn't be blank. Set it to the IP address of the interface that the node will be listening on. I am assuming these are physical machines.
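Putting that together, a rough sketch of the relevant settings (assuming X.X.X.X is the machine running opscenterd and Y.Y.Y.Y stays the seed; note that in the real cassandra.yaml the seeds value lives under seed_provider > parameters rather than at the top level):
# cassandra.yaml on Node 1 (X.X.X.X)
cluster_name: 'Test Center'
seeds: "Y.Y.Y.Y"
listen_address: X.X.X.X
rpc_address: 0.0.0.0
# cassandra.yaml on Node 2 (Y.Y.Y.Y)
cluster_name: 'Test Center'
seeds: "Y.Y.Y.Y"
listen_address: Y.Y.Y.Y
rpc_address: 0.0.0.0
# address.yaml on every node, in C:\Program Files\DataStax Community\opscenter\agent\conf
stomp_interface: X.X.X.X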

Rerun a blacklisted Hadoop node without stopping the running job

Is there any way to un-blacklist a Hadoop node while the job is running?
I tried restarting the data node but it didn't work.
This happens after four failures on slave1 with this error:
Error initializing attempt_201311231755_0030_m_000000_0:
java.io.IOException: Expecting a line not the end of stream
at org.apache.hadoop.fs.DF.parseExecResult(DF.java:109)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:179)
at org.apache.hadoop.util.Shell.run(Shell.java:134)
at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:306)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:108)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:776)
at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1664)
at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:97)
at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1629)
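Two things are worth checking here, as a sketch rather than a definitive fix (assuming classic MR1 with a TaskTracker on slave1; /data/mapred/local below is only a placeholder for your actual mapred.local.dir). The DF.parseExecResult failure means the TaskTracker got no usable output from df for one of its local dirs, which usually points at a bad or unmounted disk, and restarting the TaskTracker on that node, not the DataNode, is the usual way to get it accepting tasks again without stopping the job:
# on slave1: verify df actually produces output for the local dir
df -k /data/mapred/local
# restart only the TaskTracker on the blacklisted node
hadoop-daemon.sh stop tasktracker
hadoop-daemon.sh start tasktracker
Whether an already-running job picks the node up again depends on whether it was blacklisted per-job or cluster-wide, but a healthy, restarted TaskTracker should start receiving new tasks.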
