In Cloudera 5.13.0 services are not starting - hadoop

I mistakenly deleted the /var/log/* folders, and because of that the services installed on that specific node are not starting in Cloudera, and the log files are no longer being generated. There is no clear error message in Cloudera Manager. Can someone please suggest how to proceed?
Please see the image below for reference.
Thanks in advance.

You need to create the missing log folders, for example:
sudo mkdir -p /var/log/cloudera-scm-agent
sudo mkdir -p /var/log/hadoop-hdfs
sudo mkdir -p /var/log/cloudera-scm-server
sudo mkdir -p /var/log/hadoop-mapreduce
etc., depending on the services selected in Cloudera Manager, because Cloudera doesn't create the log directories (it only creates the log files inside them).
You can verify this by doing a global search in Cloudera Manager for /var/log; you will find a lot of log directory names. Create them all and it should work.
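A minimal sketch of the recovery, assuming CDH-style service accounts such as cloudera-scm, hdfs, mapred and yarn (adjust the directory list and owners to the roles actually running on the node):
# Recreate the log directories Cloudera Manager expects on this node
# (hypothetical list; match it to the /var/log paths found in the global search)
sudo mkdir -p /var/log/cloudera-scm-agent /var/log/cloudera-scm-server
sudo mkdir -p /var/log/hadoop-hdfs /var/log/hadoop-mapreduce /var/log/hadoop-yarn
# Hand each directory back to the account assumed to write into it
sudo chown cloudera-scm:cloudera-scm /var/log/cloudera-scm-agent /var/log/cloudera-scm-server
sudo chown hdfs:hadoop /var/log/hadoop-hdfs
sudo chown mapred:hadoop /var/log/hadoop-mapreduce
sudo chown yarn:hadoop /var/log/hadoop-yarn
# Restart the agent so the roles on this node come back up
sudo service cloudera-scm-agent restart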

Related

Hadoop returns permission denied

I am trying to install Hadoop (2.7) on a cluster (two machines, hmaster and hslave1). I installed Hadoop in the folder /opt/hadoop/.
I followed this tutorial, but when I run the command start-dfs.sh, I get the following error:
hmaster: starting namenode, logging to /opt/hadoop/logs/hadoop-hadoop-namenode-hmaster.out
hmaster: starting datanode, logging to /opt/hadoop/logs/hadoop-hadoop-datanode-hmaster.out
hslave1: mkdir: cannot create directory '/opt/hadoop\r': Permission denied
hslave1: chown: cannot access '/opt/hadoop\r/logs': No such file or directory
hslave1: starting datanode, logging to /opt/hadoop\r/logs/hadoop-hadoop-datanode-localhost.localdomain.out
I used chmod 777 on the hadoop folder on hslave1, but I still get this error.
Instead of using /opt/, use /usr/local/. If you get that permission issue again, grant the permissions using chmod; I have already configured Hadoop 2.7 on 5 machines this way. Alternatively, use sudo chown user:user /your/log/files/directory.
It seems you have already given the master passwordless access to log in to the slave.
Make sure you are logged in with a username that exists on both servers
(hadoop in your case, as the tutorial you are following uses the 'hadoop' user).
You can edit the /etc/sudoers file using sudo, or simply type visudo in the terminal, and add the following entry for the newly created 'hadoop' user:
hadoop ALL = NOPASSWD: ALL
This might resolve your issue. A sketch of the ownership fix is shown below.
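A minimal sketch of that fix on the slave, assuming the daemons run as the hadoop user and the install lives under /opt/hadoop as in the question (adjust both to your layout):
# On hslave1: make the hadoop user own the install so start-dfs.sh can create logs/
sudo chown -R hadoop:hadoop /opt/hadoop
sudo chmod -R 755 /opt/hadoop
# Verify the datanode can now create its log directory
sudo -u hadoop mkdir -p /opt/hadoop/logs
ls -ld /opt/hadoop/logs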

Uninstall Vertica on multi-node cluster

I have a multi-node Vertica 7.0 cluster. I did some research on how to remove it. Based on the documentation, the steps are very simple and straight forward.
I just need to log in to each host in the cluster and remove the package: rpm -e package
Also if I want to delete the configuration file used with the installation, I can remove the directory: rm -rf /opt/vertica/
My question is: if I have 20 nodes in the cluster, do I really need to do that on each node? I know the installation of a multi-node cluster is much easier because we can install it without having to go to each node and install the rpm file.
How about uninstallation? What is the best practice to uninstall a multi-node cluster?
The way you found out is indeed the only way. Note that you would need to remove the data and catalog directories as well.
To make your life easier, Ansible is an amazing tool. Once you define an Ansible host file (basically the list of your Vertica servers in a [vertica] section, INI-style), you can just run one command per step:
ansible vertica -m shell -a 'yum remove -y vertica'
ansible vertica -m shell -a 'rm -rf /opt/vertica'
and so on. It will automagically run on all servers of your cluster. You will probably need to play with the -s, -S or -k options to run as the root user, but Ansible will definitely make your life a lot easier.
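A minimal sketch of such an inventory and the removal commands, assuming a hypothetical hosts.ini file and node names, and a newer Ansible where --become replaces the old -s/-S sudo flags:
# hosts.ini -- hypothetical inventory listing the Vertica nodes
[vertica]
vnode01.example.com
vnode02.example.com
vnode03.example.com
# Remove the package, the install directory, and (if desired) the data/catalog location
# (the /home/dbadmin path is an assumption; use the data/catalog path chosen at database creation)
ansible vertica -i hosts.ini -m shell -a 'yum remove -y vertica' --become
ansible vertica -i hosts.ini -m shell -a 'rm -rf /opt/vertica' --become
ansible vertica -i hosts.ini -m shell -a 'rm -rf /home/dbadmin/*' --become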

Unable to set up a pseudo-distributed Hadoop cluster

I am using CentOS 7. I downloaded and untarred Hadoop 2.4.0 and followed the instructions in the link Hadoop 2.4.0 setup.
I ran the following command:
./hdfs namenode -format
and got this error:
Error: Could not find or load main class org.apache.hadoop.hdfs.server.namenode.NameNode
I have seen a number of posts with the same error but no accepted answers, and I have tried them all without any luck.
This error can occur if the necessary jar files are not readable by the user running the ./hdfs command, or are misplaced so that they can't be found by hadoop/libexec/hadoop-config.sh.
Check the permissions on the jar files under hadoop-install/share/hadoop/*:
ls -l share/hadoop/*/*.jar
and if necessary, chmod them as the owner of the respective files to ensure they're readable. Something like chmod 644 should be sufficient to at least check whether that fixes the initial problem. For a more permanent fix, you'll likely want to run the Hadoop commands as the same user that owns all the files.
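A short sketch of that check and fix, assuming the untarred install lives at /usr/local/hadoop (a hypothetical path; use wherever you extracted the tarball):
# From the Hadoop install root
cd /usr/local/hadoop
# Every jar should be at least world-readable (644); list the current permissions
ls -l share/hadoop/*/*.jar
# Make any unreadable jars readable again (run as the files' owner, or via sudo)
sudo find share/hadoop -name '*.jar' -exec chmod 644 {} +
# Re-run the format as the user that owns the installation
bin/hdfs namenode -format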
I followed the link Setup hadoop 2.4.0 and was able to get past the error message.
It seems the documentation on the Hadoop site is incomplete.

org.apache.hadoop.hbase.NotServingRegionException: Region is not online: -ROOT-,,0 - what is the reason for this error?

Thanks for taking an interest in my question :) Whenever I fire a query like scan, put, or create for any table in the HBase shell, I get the following error, although the HBase shell still returns the listing of tables and the description of tables. Could you please help me get out of this?
And also, can you please tell me the meaning of the structure -ROOT-,,0?
About versions, I am using:
HBase 0.92.1-cdh4.1.2
Hadoop 2.0.0-cdh4.1.2
ERROR: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region is not online: -ROOT-,,0
I had the same error. ZooKeeper was managed by HBase (it wasn't standalone!), so a quick fix is:
$ hbase zkcli
zookeeper_cli> rmr /hbase/root-region-server
By clearing the ZooKeeper nodes, HBase started working fine :) What I actually followed was this (it is not recommended, and you should have your HBase and ZK shut down first):
### shut down ZK and HBase
1) for each ZK node:
su                   # log in as root
cd $ZOOKEEPER_HOME
cp data/myid myid    # back up the existing myid file to ZooKeeper's home folder
rm -rf data/*
rm -rf datalog/*
mkdir -p data
mkdir -p datalog
cp myid data/myid    # restore the myid backup so there is no need to recreate it
2) for each ZK node:
(start ZK)
3) finally:
(start HBase)
By clearing data and datalog, you get a very clean ZooKeeper.
Hope this helps and good luck. Thanks.
My error was:
ERROR: org.apache.hadoop.hbase.NotServingRegionException: Region ROLE,,1457743249518.221f6f7fdacacbe179674267f8d06575. is not online on ddtmwutelc3ml01.azure-dev.us164.corpintra.net,16020,1459486618702
at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2898)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:947)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2235)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
The resolution is to run the commands below:
$ hbase zkcli
zookeeper_cli> rmr /hbase/root-region-server
then:
stop HBase and ZooKeeper
back up the myid file from /hadoop/zookeeper
delete everything from /hadoop/zookeeper (keeping the myid backup)
restart ZooKeeper, then HBase
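A hedged sketch of those steps; the /hadoop/zookeeper path comes from the answer above, while the service names and the version-2 subdirectory are assumptions based on a CDH-style packaged install:
# 1) Remove the stale root-region-server znode
#    (one-shot form; if your version does not accept it, use the interactive zkcli shown above)
hbase zkcli rmr /hbase/root-region-server
# 2) Stop HBase, then ZooKeeper
sudo service hbase-regionserver stop && sudo service hbase-master stop
sudo service zookeeper-server stop
# 3) Preserve myid, wipe the ZooKeeper data, restore myid
cp /hadoop/zookeeper/myid /tmp/myid.bak
rm -rf /hadoop/zookeeper/version-2
cp /tmp/myid.bak /hadoop/zookeeper/myid
# 4) Restart ZooKeeper, then HBase
sudo service zookeeper-server start
sudo service hbase-master start && sudo service hbase-regionserver start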

Hadoop Single Node : Permission Denied

I just installed a single-node Hadoop setup, but when I run it after logging in on localhost, it gives an error that it cannot make changes to files because permission is denied.
Have you followed all the steps as suggested in: http://hadoop.apache.org/common/docs/current/single_node_setup.html ?
You may want to look at this : http://getsatisfaction.com/cloudera/topics/permission_denied_error_in_desktop
Also, some more information would definitely help.
You have not given the necessary permissions. Create a different user other than root and follow this tutorial step by step: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
It seems the user is missing permissions on the directory containing the files.
Make sure that the user you are logged in as is the owner of the Hadoop directory by running the ls -la command.
If it is not the owner, run chown -R user:group /path/to/hadoop-directory and it will work fine (see the sketch below).
You can also follow Michael Noll's tutorial:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
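A minimal sketch of that check and fix, assuming the install lives at /usr/local/hadoop and the hduser:hadoop account from the linked tutorial (both are assumptions; substitute your own path and user):
# Check who owns the Hadoop directory
ls -la /usr/local/hadoop
# If the owner is not the user you run Hadoop as, hand the directory over
sudo chown -R hduser:hadoop /usr/local/hadoop
# Re-run the failing Hadoop command as that user
sudo -u hduser /usr/local/hadoop/bin/start-all.sh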
