Hue configuration error - /etc/hue/conf.empty - Potential misconfiguration detected - hadoop

Hi Experts,
I'm new to Hadoop, Linux, and Cloudera. I installed the Cloudera VM 5.7 on my machine and imported MySQL data into HDFS using Sqoop. I'm trying to execute some queries against this data using Impala, so I tried launching Hue. When I launched it, I saw a misconfiguration error.
Error:
Potential misconfiguration detected. Fix and restart Hue.
Steps I have taken to troubleshoot this issue:
1) I restarted Hue using the commands below:
sudo service hue stop
sudo service hue start
2) I looked at the /etc/hue directory. I could see there are two config folders, conf and conf.empty, but I couldn't figure out the problem.
I'm still facing the same issue.

Check your internet access from Docker/VM. After a lot of messing around trying to figure out why the VMware Bridge adapter wasn't working, I found my problem was Docker. You have to increase Docker's memory from the UI or the command line; mine was 2 GB and I increased it to 8 GB, but 4 GB is enough.

Stop Hue:
sudo service hue stop
Restart HBase Thrift:
sudo service hbase-thrift stop
sudo service hbase-thrift start
Restart Hive:
sudo service hive-server2 stop
sudo service hive-server2 start
Start Hue:
sudo service hue start
Open http://quickstart.cloudera:8888/about/ and it should work like a charm 💫
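The stop/restart sequence above can be combined into one small script. This is a sketch that assumes the Cloudera QuickStart VM service names; with the RUN variable unset it only prints what it would do, so you can inspect the order before running it for real.

```shell
# Restart the services Hue depends on, then Hue itself.
# Service names assume the Cloudera QuickStart VM; with RUN unset,
# this is a dry run that only prints the commands.
run() {
  if [ -n "$RUN" ]; then sudo service "$@"; else echo "would run: service $*"; fi
}
for svc in hue hbase-thrift hive-server2; do run "$svc" stop; done
for svc in hbase-thrift hive-server2 hue; do run "$svc" start; done
```

Set RUN=1 in the environment to execute the commands instead of printing them.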

Related

quickstart hue ui potential misconfiguration detected

I need help with the Hue quickstart. I'm a beginner and I'm facing an issue opening the Hue UI.
Configuration files located in /etc/hue/conf.empty
Potential misconfiguration detected. Fix and restart Hue.
hadoop.hdfs_clusters.default.webhdfs_url Current value: http://localhost:50070/webhdfs/v1
Failed to create temporary file "/tmp/hue_config_validation.9845984781315522608"
Hive Editor The application won't work without a running HiveServer2.
Here's a snapshot of the error.
Note
I tried the following commands to restart the Hue service, but with no luck:
sudo service hue stop
sudo service hbase-thrift stop
sudo service hbase-thrift start
sudo service hive-server2 stop
sudo service hive-server2 start
sudo service hue start
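The "Failed to create temporary file" line in the error above usually means Hue cannot write to /tmp. A quick generic check (not Hue-specific, safe to run anywhere): /tmp should be mode 1777 (world-writable with the sticky bit), and creating a temp file there the way the validator does should succeed.

```shell
# /tmp should report mode 1777; anything more restrictive can break
# Hue's config validation, which writes a temp file there.
stat -c '%a %U:%G' /tmp
# Reproduce what the validator does: create and remove a temp file.
tmpfile=$(mktemp /tmp/hue_config_validation.XXXXXXXX) \
  && echo "ok: created $tmpfile" \
  && rm -f "$tmpfile"
```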

Unable to restart Hue in EMR

I am unable to restart Hue in AWS EMR Hadoop cluster.
I modified the hue.ini file and wanted to restart Hue for the changes to apply. When I ran "service hue restart", it gave a "command not found" error. I understand this must be because hue is not on the PATH. However, when I run bin/hue, it doesn't accept restart as an input. Is there a way to restart Hue?
I am using Hue 3.7.1-amzn-7, emr-4.8.4 and Amazon 2.7.3 Hadoop distribution.
Thanks in Advance.
The restart process depends on the EMR AMI version you are using.
On EMR 4.x.x and 5.x.x AMIs, service management is handled by Upstart, not the traditional SysVInit scripts, so the "command not found" error is expected. Services can be queried and controlled with the Upstart commands described in the Upstart cookbook.
List of services on EMR:
grep -ir "env DAEMON=" /etc/init/ | cut -d"\"" -f2
hadoop-yarn-resourcemanager
oozie
hadoop-hdfs-namenode
hive-hcatalog-server
hadoop-mapreduce-historyserver
hue
hadoop-kms
hadoop-yarn-proxyserver
hadoop-httpfs
hive-server2
hadoop-yarn-timelineserver
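The grep/cut pipeline above pulls the service name out of each Upstart job's `env DAEMON="…"` line. A self-contained demonstration against a scratch directory that mimics /etc/init/ (the file contents here are made up for illustration):

```shell
# Recreate a miniature /etc/init/ to show what the pipeline extracts.
dir=$(mktemp -d)
printf 'env DAEMON="hue"\n' > "$dir/hue.conf"
printf 'env DAEMON="hive-server2"\n' > "$dir/hive-server2.conf"
# Splitting on the double quote, field 2 is the daemon name.
grep -ir "env DAEMON=" "$dir" | cut -d'"' -f2 | sort
rm -rf "$dir"
```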
Example commands to stop/start hue:
status hue
sudo stop hue
sudo start hue
sudo reload hue
On EMR 3.x.x AMIs, the SysVInit command you were trying (service hue restart) might work.
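The stop/start pair above can be wrapped in a small helper. This is a sketch assuming an Upstart-based EMR 4.x/5.x master node; the function name is my own, and the error suppression on stop is there because Upstart reports "Unknown instance" when the job is already stopped.

```shell
# Stop, pause, start, and report an Upstart-managed EMR service.
# Upstart's own "restart" does not pick up job configuration changes,
# so the explicit stop/start pair is preferred.
restart_emr_service() {
  svc="$1"
  sudo stop "$svc" 2>/dev/null || true  # ignore "Unknown instance" if already stopped
  sleep 3
  sudo start "$svc"
  sudo status "$svc"
}
# Usage: restart_emr_service hue
```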

Hue misconfiguration error

I am running Hadoop 2.6.0 on cdh5.4.2 in VM. After unexpected power cut I started my VM and found hue is not working, its not started properly and given the error
I restarted HUE using below command:
sudo service hue stop
sudo service hue start
But no use. I was not able to run hive/pig/sqoop. Please help me what to do to fix this error.
Thanks in advance.

How to restart yarn on AWS EMR

I am using Hadoop 2.6.0 (emr-4.2.0 image). I have made some changes in yarn-site.xml and want to restart yarn to bring the changes into effect.
Is there a command I can use to do this?
Edit (10/26/2017): AWS has since published a more detailed official Knowledge Center article on how to do this:
https://aws.amazon.com/premiumsupport/knowledge-center/restart-service-emr/.
You can SSH into the master node of your EMR cluster and run
sudo /sbin/stop hadoop-yarn-resourcemanager
sudo /sbin/start hadoop-yarn-resourcemanager
to restart the YARN resource manager. EMR AMI 4.x.x uses Upstart: /sbin/start, /sbin/stop, and /sbin/restart are all symlinks to /sbin/initctl, which is part of Upstart. See the initctl man page for more information.
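You can verify the symlink claim yourself. This check is harmless to run anywhere: on a non-Upstart system it simply reports the commands as absent.

```shell
# On Upstart systems, /sbin/{start,stop,restart} all resolve to /sbin/initctl.
for cmd in start stop restart; do
  target=$(readlink -e "/sbin/$cmd" 2>/dev/null)
  echo "/sbin/$cmd -> ${target:-not present}"
done
```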
Alternatively, you can follow the instructions here to propagate your changes to yarn-site.xml - yarn-change-configuration-on-yarn-site-xml
For those who come here from Google:
In order to restart a service in EMR, perform the following actions:
Find the name of the service by running the following command:
initctl list
For example, the YARN Resource Manager service is named hadoop-yarn-resourcemanager.
Stop the service by running the following command:
sudo stop hadoop-yarn-resourcemanager
Wait a few seconds, then start the service by running the following command:
sudo start hadoop-yarn-resourcemanager
Note: Stop/start is required; do not use the restart command.
Verify that the process is running by running the following command:
sudo status hadoop-yarn-resourcemanager
Check for the process using ps, and then check the log file for any errors in the log directory /var/log/.
Source : https://aws.amazon.com/premiumsupport/knowledge-center/restart-service-emr/
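The process check in the last step can be made precise with pgrep instead of ps plus grep. A sketch; the log directory below is the EMR default and is an assumption about your setup.

```shell
# pgrep -fa matches full command lines, so the resource manager's Java
# process is found even though the binary name is just "java".
pgrep -fa "hadoop-yarn-resourcemanager" || echo "resource manager not running"
# Then scan the most recent YARN logs for errors (EMR default log dir).
grep -i error /var/log/hadoop-yarn/*.log 2>/dev/null | tail -n 20
```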
If your goal is to enable log aggregation, it is actually easier to create the cluster with log aggregation already enabled, as described in the documentation:
http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-plan-debugging.html
(It is actually enabled by default if you are using emr-4.3.0).
Try restarting this service as well:
hadoop-yarn-nodemanager

Hadoop CDH3 ERROR. Could not start Hadoop datanode daemon

I'm deploying Hadoop CDH3 in pseudo-distributed mode on a VPS.
I installed CDH3, then executed
sudo apt-get install hadoop-0.20-conf-pseudo
but when I try to start all the daemons with
for service in /etc/init.d/hadoop-0.20-*; do sudo $service start; done
it throws
ERROR. Could not start Hadoop datanode daemon
The same installation and start commands work on my notebook.
I don't understand the cause; the log file is empty. The available RAM is about 900 MB, with 98 GB of available disk space.
What could be the cause, and how can I find it? I have ruled out the configuration files.
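An empty .log file does not mean nothing was written: Hadoop's daemon scripts send startup stdout/stderr to .out files, which is where this kind of failure usually lands. A quick triage sketch, assuming the CDH3 default log directory /var/log/hadoop-0.20 (adjust the path for your install):

```shell
# Startup errors from the datanode often land in the .out file, not the .log.
ls -lt /var/log/hadoop-0.20/ 2>/dev/null | head
tail -n 50 /var/log/hadoop-0.20/*datanode*.out 2>/dev/null
# Rule out the usual silent killers: low memory and a full disk.
free -m
df -h /
```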
Consider using Cloudera Manager; it could save you some time (especially if you use multiple nodes). There is a nice video on YouTube that shows the deployment process.
