I ran "pio app new tapster" and cannot get past this error. My elasticsearch.yml file appears. My network is correctly set. But, don't know how to get based this.
[WARN] [transport] [Portal] node [#transport#-1][Jeremys-MacBook-Pro.local][inet[localhost/127.0.0.1:9300]] not part of the cluster Cluster [elasticsearch], ignoring...
Exception in thread "main" org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:278)
I have an issue with Elasticsearch: when I run the command `php artisan index:ambassadors` inside Docker, it gives me this exception.
**Exception : No alive nodes found in your cluster**
Here is my output.
Exception : No alive nodes found in your cluster
412/4119 [▓▓░░░░░░░░░░░░░░░░░░░░░░░░░░] 10%Exception : No alive nodes found in your cluster
824/4119 [▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░░░░] 20%Exception : No alive nodes found in your cluster
1236/4119 [▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░░░░] 30%Exception : No alive nodes found in your cluster
1648/4119 [▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░░░░░░] 40%Exception : No alive nodes found in your cluster
2472/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░░░░] 60%Exception : No alive nodes found in your cluster
2884/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░░░░] 70%Exception : No alive nodes found in your cluster
3296/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░] 80%Exception : No alive nodes found in your cluster
3997/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░] 97%Exception : No alive nodes found in your cluster
4119/4119 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓] 100%Exception : No alive nodes found in your cluster
I also have an error message in my Elasticsearch container logs:
Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
/usr/share/elasticsearch/config/log4j2.properties.
Has anyone faced this issue before?
I have a 3-node etcd cluster running on Docker.
Node1:
etcd-advertise-client-urls: "http://sensu-backend1:2379"
etcd-initial-advertise-peer-urls: "http://sensu-backend3:2380"
etcd-initial-cluster: "sensu-backend1=http://sensu-backend1:2380,sensu-backend2=http://sensu-backend2:2380,sensu-backend3=http://sensu-backend3:2380"
etcd-initial-cluster-state: "new" # new or existing
etcd-listen-client-urls: "http://0.0.0.0:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-name: "sensu-backend1"
Node2:
etcd-advertise-client-urls: "http://sensu-backend2:2379"
etcd-initial-advertise-peer-urls: "http://sensu-backend3:2380"
etcd-initial-cluster: "sensu-backend1=http://sensu-backend1:2380,sensu-backend2=http://sensu-backend2:2380,sensu-backend3=http://sensu-backend3:2380"
etcd-initial-cluster-state: "new" # new or existing
etcd-listen-client-urls: "http://0.0.0.0:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-name: "sensu-backend2"```
Node3:
etcd-advertise-client-urls: "http://sensu-backend3:2379"
etcd-initial-advertise-peer-urls: "http://sensu-backend3:2380"
etcd-initial-cluster: "sensu-backend1=http://sensu-backend1:2380,sensu-backend2=http://sensu-backend2:2380,sensu-backend3=http://sensu-backend3:2380"
etcd-initial-cluster-state: "new" # new or existing
etcd-listen-client-urls: "http://0.0.0.0:2379"
etcd-listen-peer-urls: "http://0.0.0.0:2380"
etcd-name: "sensu-backend3"
I am running each node as a docker service without persisting the etcd data directory.
When I start all the nodes together, etcd forms the cluster.
If I delete one node and try to add it back with etcd-initial-cluster-state: "existing", I get the following error:
{"component":"etcd","level":"fatal","msg":"tocommit(6264) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?","pkg":"raft","time":"2020-12-09T11:32:55Z"}
After stopping etcd, I deleted the node from the cluster using `etcdctl member remove`. When I restart the container with an empty etcd data directory, I get a cluster ID mismatch error:
{"component":"backend","error":"error starting etcd: error validating peerURLs {ClusterID:4bccd6f485bb66f5 Members:[\u0026{ID:2ea5b7e4c09185e2 RaftAttributes:{PeerURLs:[http://sensu-backend1:2380]} Attributes:{Name:sensu-backend1 ClientURLs:[http://sensu-backend1:2379]}} \u0026{ID:9e83e7f64749072d RaftAttributes:{PeerURLs:[http://sensu-backend2:2380]} Attributes:{Name:sensu-backend2 ClientURLs:[http://sensu-backend2:2379]}}] RemovedMemberIDs:[]}: member count is unequal"}
Please help me fix this issue.
If you delete a node that was part of a cluster, you should also manually remove it from the etcd cluster, i.e. with `etcdctl member remove`.
The member count mismatch error occurs because 'etcd-initial-cluster' still contains all 3 node entries; you need to remove the deleted node's entry from this field in all the remaining containers as well.
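A minimal sketch of that clean-up, assuming etcdctl v3 run against one of the healthy backends; the member ID is a placeholder, and sensu-backend3 is only an example of the node being re-added:

```sh
# Find the ID of the member that was deleted, then drop it from the cluster
# (replace the placeholder with the ID reported by `member list`).
etcdctl --endpoints=http://sensu-backend1:2379 member list
etcdctl --endpoints=http://sensu-backend1:2379 member remove <member-id>

# If you later bring the node back, register it with the existing cluster
# first, then start it with etcd-initial-cluster-state: "existing".
etcdctl --endpoints=http://sensu-backend1:2379 member add sensu-backend3 \
  --peer-urls=http://sensu-backend3:2380
```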
I have a YARN cluster with Spark (1.6.1), HDFS, and Hive (2.1). My workflows worked fine for a few months up to this day, without any changes in the code or the environments. Then I started to get errors like this:
org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 21
Serialization trace:
outputFileFormatClass (org.apache.hadoop.hive.ql.plan.PartitionDesc)
aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
invertedWorkGraph (org.apache.hadoop.hive.ql.plan.SparkWork)
at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:119)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:656)
at org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:238)
at org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:226)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:745)
at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:113)
at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:139)
at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:776)
at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:131)
at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:694)
at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:672)
at org.apache.hadoop.hive.ql.exec.spark.KryoSerializer.deserialize(KryoSerializer.java:49)
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient$JobStatusJob.call(RemoteHiveSparkClient.java:318)
at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:366)
at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:335)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Using Hive I can do simple selects, but every other operation that needs Spark fails with Error: Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=3) in the console, and the error above in the YARN logs.
Now every one of my Hive databases is paralyzed (I have a few). I spent the whole day trying to solve this, but nothing helped (restarting Hive, restarting the YARN nodes, changing the YARN master).
What do you think is causing the problem, and how can it be solved?
I figured it out.
After restarting hive-server2, for a short period of time, instead of the error org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 26 I got the error org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: org.apache.hadoop.hive.ql.io.RCFileOutputFormat. With the second form it was obvious that the Spark executors running on the nodes were missing some jars on their classpath. I don't know why Spark was suddenly unable to load these jars, but after copying them manually into its lib folder on every node and restarting the nodes, everything went back to normal.
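For completeness, the manual fix amounted to something like the sketch below; the jar name and directory paths are assumptions and depend on where Hive and Spark are installed on your nodes:

```sh
# On every worker node: copy the Hive jar that contains the missing classes
# (hive-exec holds org.apache.hadoop.hive.ql.io.RCFileOutputFormat and the
# ql.plan classes from the stack trace) into Spark's lib directory, then
# restart the node. Paths here are illustrative only.
cp /opt/hive/lib/hive-exec-*.jar /opt/spark/lib/
```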
I'm trying to set up PredictionIO locally following these instructions.
Unfortunately, I'm unable to get it to work. When I try to install a template or run "pio status", I get an error saying that PredictionIO is unable to connect to Elasticsearch:
[ERROR] [Console$] Unable to connect to all storage backends successfully. The following shows the error message from the storage backend.
[ERROR] [Console$] None of the configured nodes are available: [] (org.elasticsearch.client.transport.NoNodeAvailableException)
[ERROR] [Console$] Dumping configuration of initialized storage backend sources. Please make sure they are correct.
[ERROR] [Console$] Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /Users/tomasz/Documents/PredictionIO/apache-predictionio-0.10.0-incubating/PredictionIO-0.10.0-incubating/vendors/elasticsearch-1.4.4, HOSTS -> localhost, PORTS -> 9200, TYPE -> elasticsearch
My pio-env.sh file is set up like so:
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH
PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9200
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=$PIO_HOME/vendors/elasticsearch-1.4.4
PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
PIO_STORAGE_SOURCES_LOCALFS_PATH=$PIO_FS_BASEDIR/models
PIO_STORAGE_SOURCES_HBASE_TYPE=hbase
PIO_STORAGE_SOURCES_HBASE_HOME=$PIO_HOME/vendors/hbase-1.0.0
Any help would be appreciated.
I encountered this when I was using Elasticsearch 5.4.x with PredictionIO 0.11.0-incubating. It worked when I switched to Elasticsearch 1.7.x.
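If you drop back to the 1.x line, the relevant pio-env.sh entries would look roughly like this; the version number and path are examples only, and note that for ES 1.x PredictionIO talks to the transport port (typically 9300) rather than the REST port 9200:

```sh
# pio-env.sh (excerpt) - example values for an Elasticsearch 1.7.x install;
# adjust the version and path to whatever release you actually unpack.
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=$PIO_HOME/vendors/elasticsearch-1.7.6
```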
Is there any way to un-blacklist a Hadoop node while the job is running?
I tried restarting the data node, but it didn't work.
This happens after four failures on slave1 with this error:
Error initializing attempt_201311231755_0030_m_000000_0:
java.io.IOException: Expecting a line not the end of stream
at org.apache.hadoop.fs.DF.parseExecResult(DF.java:109)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:179)
at org.apache.hadoop.util.Shell.run(Shell.java:134)
at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:306)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:108)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:776)
at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:1664)
at org.apache.hadoop.mapred.TaskTracker.access$1200(TaskTracker.java:97)
at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:1629)