I need your help. I have a problem starting my Hadoop cluster. The error is:
Command 'ZkStartPreservingDatastore' failed for service 'zookeeper1'
Please help me, thank you.
I've encountered the same issue on our production CDH4 cluster. I tried to restart the service manually, and what I noticed in the logs was the following error message:
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#193] - Too many connections from /10.0.2.133 - max is 50
So I edited /etc/zookeeper/conf/zoo.cfg and changed maxClientCnxns to 500. After that I managed to restart ZooKeeper and the cluster came back. Hope this helps.
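For reference, the change is a single line in zoo.cfg (the path above is where the Cloudera packages keep it; other installs may differ). Restart the ZooKeeper service afterwards for it to take effect:
# /etc/zookeeper/conf/zoo.cfg
# per-IP connection limit; the log above shows it was capped at 50, and 0 disables the limit entirely
maxClientCnxns=500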
I am working on an OpenStack project on Ubuntu 20.04 Linux. I want to create a Hadoop cluster with one master node and three worker nodes, and I have a problem: the cluster doesn't work.
Status ERROR:
Creating cluster failed for the following reason(s): Failed to create trust Error ID: ef5e8b0a-8e6d-4878-bebb-f37f4fa50a88, Failed to create trust Error ID: 43157255-86af-4773-96c1-a07ca7ac66ed
Link: https://docs.openstack.org/devstack/latest/
Can you advise me about these errors? Is there anything to worry about?
I've been trying to run both ZooKeeper and Kafka 2.13 on my local Windows machine. I have modified the server properties to point to c:/kafka/kafka-logs and the ZooKeeper data directory to point to c:/kafka/zookeeper-data.
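For reference, the changed lines look roughly like this (standard property names; the paths are the ones mentioned above):
# config/server.properties
log.dirs=c:/kafka/kafka-logs
# config/zookeeper.properties
dataDir=c:/kafka/zookeeper-data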
ZooKeeper starts without any issues, but when I attempt to start Kafka with
.\bin\windows\kafka-server-start.bat .\config\server.properties
I get an AccessDeniedException. I have already tried the following:
Deleting the kafka-logs and zookeeper-data folders and running both ZooKeeper and Kafka again - I still run into the error
Creating the kafka-logs folder before running Kafka - I still get the AccessDeniedException
Running the command prompt as administrator before typing the commands - does not work
Could anyone give some suggestions on how to fix this?
Thank you
What Kafka version are you using?
I had this issue with 3.0.0. I downgraded to 2.8.1 and the issue was resolved.
I think it is something related to Kafka itself.
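If you are not sure which version you are running, one quick way to check on Windows (assuming the standard distribution layout used in the question) is:
.\bin\windows\kafka-topics.bat --version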
I reproduced the same issue with Kafka 3.0. Downgrading to 2.8.1 will help.
I have a Hadoop cluster (one master that acts as both namenode and datanode, and two slaves). I saw these error messages in the log files:
hadoop-hduser-datanode-master.log file:
2017-05-15 13:02:55,303 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: Tubería rota (Broken pipe)
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:570)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:739)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:527)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:239)
at java.lang.Thread.run(Thread.java:748)
That happened only on the master node, after a period of inactivity. Fifteen minutes earlier, I had run a wordcount example successfully.
The OS on each node is Ubuntu 16.04. The cluster was created using VirtualBox.
Could you help me, please?
I followed this link:
https://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
to configure some memory-related parameters, and my problem was resolved!
Note: in some posts I read that this error could be caused by a lack of disk space (not my case), and in others the reason was the OS version (they recommended downgrading from Ubuntu 16.04 to 14.04).
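For anyone hitting the same problem: the guide essentially walks you through sizing a few memory properties in yarn-site.xml (plus matching container sizes in mapred-site.xml). A minimal sketch for a small VirtualBox node, with values that are only illustrative and need to be adjusted to your VM's RAM:
<!-- yarn-site.xml (illustrative values for a node with roughly 4 GB of RAM) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>3072</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>3072</value>
</property>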
I am running Hadoop 2.6.0 on CDH 5.4.2 in a VM. After an unexpected power cut I started my VM and found that Hue is not working: it does not start properly and gives an error.
I restarted Hue using the commands below:
sudo service hue stop
sudo service hue start
But it didn't help. I am not able to run Hive/Pig/Sqoop. Please help me figure out how to fix this error.
Thanks in advance.
Launching a cluster using the spark-ec2 script results in:
Setting up ganglia RSYNC'ing /etc/ganglia to slaves... <...>
Shutting down GANGLIA gmond: [FAILED]
Starting GANGLIA gmond: [ OK ]
Shutting down GANGLIA gmond: [FAILED]
Starting GANGLIA gmond: [ OK ]
Connection to <...> closed. <...> Stopping httpd:
[FAILED] Starting httpd: httpd: Syntax error on line 199 of
/etc/httpd/conf/httpd.conf: Cannot load modules/libphp-5.5.so into
server: /etc/httpd/modules/libphp-5.5.so: cannot open shared object
file: No such file or directory
[FAILED] [timing]
ganglia setup: 00h 00m 03s Connection to <...> closed.
Spark standalone cluster started at <...>:8080 Ganglia started at
<...>:5080/ganglia
Done!
However, when I run netstat, nothing is listening on port 5080.
Is this related to the httpd error above, or is it something else?
EDIT:
So the issue has been found (see the answer below), and the fix can be applied locally on the instance, after which Ganglia works fine. However, the question is how to fix this at the root, so that the spark-ec2 script can start Ganglia normally without manual intervention.
The fact that Ganglia is not available is related to these errors: Ganglia is a PHP application, and it won't run without the PHP module for Apache.
Which version of Spark are you using to start the cluster?
It is a weird error: that file should be present in the AMI image.
Just traced the error: /etc/httpd/conf/httpd.conf is trying to load the libphp-5.5 library, while modules/ contains the libphp-5.6 version...
Changing httpd.conf fixes the issue; however, it would be good to know a permanent fix within the spark-ec2 script.
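For anyone applying the same local fix, it amounts to pointing httpd.conf at the library that actually exists under modules/ (filenames taken from the error output above; double-check with ls /etc/httpd/modules/):
# one-off local fix on the EC2 master
sudo sed -i 's|modules/libphp-5\.5\.so|modules/libphp-5.6.so|' /etc/httpd/conf/httpd.conf
sudo service httpd restart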
This is because httpd fails to launch. As you have noted, httpd.conf is trying to load modules and failing. You can reproduce the problem via apachectl start and examine exactly which modules fail to load.
In my case there was one involving "auth" and one involving "core". The last four (maybe five) modules listed will also fail to load. I did not encounter anything related to PHP, so maybe our cases are different. Anyway, the hacky solution is to comment out the problematic lines. I did so and am running Ganglia without issue.
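In concrete terms (the module name below is only an example of the directive format; yours will be whatever apachectl reports as failing to load):
# see which LoadModule lines fail
sudo apachectl start
# then comment out the offending lines in /etc/httpd/conf/httpd.conf, e.g.:
#LoadModule auth_basic_module modules/mod_auth_basic.so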