ConnectionRefused when trying to connect to hadoop using cloudera - hadoop

When I run a command that connects to Hadoop I get an exception.
For example:
command: hdfs dfs -ls /user/cloudera
result: ls: Call From quickstart.cloudera/10.0.2.15 to quickstart.cloudera:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
In addition, when I used these commands before, I did not have this problem.
Thanks

You need to go to Cloudera Manager in the browser; it is usually bookmarked in Firefox as "Cloudera Manager". Log in with cloudera/cloudera, then start the "HDFS" service. The "hadoop fs" commands should start working now.
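If you prefer the command line, on a CDH-style install you can usually start the HDFS daemons with the packaged init scripts instead. This is a rough sketch; the exact service names are assumptions that depend on your CDH version, and if HDFS is managed by Cloudera Manager you should use the Manager UI as described above:
sudo service hadoop-hdfs-namenode start   # service name assumed for a packaged CDH install
sudo service hadoop-hdfs-datanode start
sudo netstat -tlnp | grep 8020            # confirm the NameNode RPC port is now listening
hdfs dfs -ls /user/cloudera               # retry the original command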

Related

hortonworks sandbox connection refused

I just started to learn Hadoop with the Hortonworks sandbox.
I have HDP 2.3 on VirtualBox, and in the settings I have a bridged adapter network and a NAT.
When I start the machine everything is OK: I can run some Hadoop commands and I can connect to Ambari at 127.0.0.1:8080.
But when I run the script /etc/lib/hue/tools/start_scripts/gen_hosts.sh
to generate hosts with a different IP address, everything goes wrong and I can't execute a simple Hadoop command like hadoop fs -ls /user.
I get this error:
ls: Call From sandbox.hortonworks.com/10.24.244.85 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
As I said, I just started to learn Hadoop and I am not a network expert, so I will appreciate any help.
Thank you.
I found that you have to restart the services (HDFS, MapReduce, HBase, ...) from
Ambari when you regenerate the hosts.
Hope this will help someone.
Turn on the NameNode / HDFS service and it will work.
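As a quick sanity check after restarting the services from Ambari, you can verify from the sandbox shell that the NameNode is up and actually listening on port 8020 (the port in the error message). A rough sketch; jps ships with the JDK, and on newer images you may need ss instead of netstat:
jps                        # NameNode should appear in the list of running JVMs
netstat -tlnp | grep 8020  # the NameNode RPC port from the error
hadoop fs -ls /user        # retry once the port is open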

Hadoop on a single node vagrant VM - Connection refused when starting start-all.sh

I have created a Vagrant virtual machine and installed Hadoop on it.
It is only a single-node cluster.
But when I try to start Hadoop on the machine it gives the following error:
mkdir: Call From master/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Any idea? The machine is named master. The server runs Ubuntu.
Thanks!
This is because the HDFS daemons are not running. Go to:
cd $HADOOP_HOME/sbin
./start-all.sh
This will start all processes.
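Note that on recent Hadoop versions start-all.sh is deprecated. A rough equivalent, assuming HADOOP_HOME is set, is to start HDFS and YARN separately and then confirm with jps:
cd $HADOOP_HOME/sbin
./start-dfs.sh    # starts NameNode, DataNode, SecondaryNameNode
./start-yarn.sh   # starts ResourceManager, NodeManager
jps               # verify the daemons are actually running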

Hadoop standalone installation - java.net.ConnectException: Connection refused error while running jar

I am new to Hadoop and I was trying to install a single-node standalone Hadoop on Ubuntu 14.04. I was following the Apache Hadoop documentation and, as given there, when I tried to run
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar grep input output 'dfs[a-z.]+'
I got the java.net.ConnectException message:
Call From a1409User/127.0.0.1 to localhost:9000 failed on connection
exception: java.net.ConnectException: Connection refused; For more
details see: http://wiki.apache.org/hadoop/ConnectionRefused
I checked http://wiki.apache.org/hadoop/ConnectionRefused, where it asks you to verify that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts. Though this point is not entirely clear to me, I tried changing the given IP and specifying the port number, but no luck. I also checked with telnet:
$ telnet localhost 9000
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
Please help me to solve the issue.
Try formatting the NameNode (commands sketched at the end of this answer). Also, the input and output directories must be given in your command. For example:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar grep /user/hadoop/hadoop-config /user/hadoop/output 'dfs[a-z.]+'
After that you can check the content in the output directory by:
hdfs dfs -ls /user/hadoop/output/
It should print output as follows:
Found 2 items
-rw-r--r-- 3 hadoop supergroup 0 2014-09-05 07:55 /user/hadoop/output/_SUCCESS
-rw-r--r-- 3 hadoop supergroup 179 2014-09-05 07:55 /user/hadoop/output/part-r-00000
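For the NameNode formatting step mentioned above, a minimal sketch (this wipes the HDFS metadata, so only do it on a fresh or disposable installation):
$HADOOP_HOME/bin/hdfs namenode -format   # initialize the NameNode metadata
$HADOOP_HOME/sbin/start-dfs.sh           # then bring HDFS up
jps                                      # NameNode and DataNode should now appear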
Confirm you are in Local (Standalone) mode; I think you are not in Standalone mode.
You may have followed the steps for another mode. Make sure you have not configured etc/hadoop/core-site.xml and etc/hadoop/hdfs-site.xml.
If you want to try Pseudo-Distributed mode,
configure etc/hadoop/core-site.xml and etc/hadoop/hdfs-site.xml again (see the sketch below).
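For reference, the minimal pseudo-distributed configuration from the Apache single-node setup guide looks roughly like this, written here as shell heredocs run from the Hadoop installation directory (a sketch; adjust paths to your install):
cat > etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
cat > etc/hadoop/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF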
Make sure HDFS is online. Start it by $HADOOP_HOME/sbin/start-dfs.sh

java.net.ConnectException: Connection refused error when running Hive

I'm trying to work through a Hive tutorial in which I enter the following:
load data local inpath '/usr/local/Cellar/hive/0.11.0/libexec/examples/files/kv1.txt' overwrite into table pokes;
This results in the following error:
FAILED: RuntimeException java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
I see that there are some replies on SA having to do with configuring my IP address and localhost, but I'm not familiar with the concepts in those answers. I'd appreciate anything you can tell me about the fundamentals of what causes this kind of error and how to fix it. Thanks!
This is because Hive is not able to contact your NameNode.
Check whether your Hadoop services have started properly.
Run the command jps to see which services are running.
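For reference, on a healthy pseudo-distributed node jps typically prints something like the following (the process IDs will differ, and a RunJar entry only shows up while Hive or another jar is running):
$ jps
4690 NameNode
4797 DataNode
4965 SecondaryNameNode
5120 ResourceManager
5231 NodeManager
5678 Jps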
The reason you get this error is that Hive needs Hadoop as its base, so you need to start Hadoop first.
Here are some steps.
Step 1: download Hadoop and unzip it
Step 2: cd #your_hadoop_path
Step 3: ./bin/hadoop namenode -format
Step 4: ./sbin/start-all.sh
Then go back to #your_hive_path and start Hive again.
An easy way I found is to edit the /etc/hosts file. By default it looks like:
127.0.0.1 localhost
127.0.1.1 user_user_name
Just change 127.0.1.1 to 127.0.0.1, that's it. Restart your shell and restart your cluster with start-all.sh.
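If you want to script that edit, a one-liner sketch (back up the file first; the user_user_name entry keeps its hostname, only the address changes):
sudo cp /etc/hosts /etc/hosts.bak
sudo sed -i 's/^127\.0\.1\.1/127.0.0.1/' /etc/hosts
cat /etc/hosts   # the second entry should now also point to 127.0.0.1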
I had the same question when setting up Hive.
I solved it by changing my /etc/hostname.
Formerly it contained my user_machine_name;
after I changed it to localhost, everything went well.
I guess this is because Hadoop wants to resolve your hostname using this /etc/hostname file, but it directed it to your user_machine_name while the Hadoop service is running on localhost.
I was able to resolve the issue by executing the command below:
start-all.sh
This ensures that the Hadoop services have started.
Starting Hive was then straightforward.
I had a similar problem with a connection timeout:
WARN DFSClient: Failed to connect to /10.165.0.27:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection timed out: no further information
The DFSClient was resolving the DataNodes by their internal IPs. Here's the fix for a Spark application:
.config("spark.hadoop.dfs.client.use.datanode.hostname", "true")
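If you launch through spark-submit instead of building the session in code, the same setting can be passed on the command line; the spark.hadoop. prefix forwards the property to Spark's Hadoop configuration (the application name below is hypothetical):
spark-submit \
  --conf spark.hadoop.dfs.client.use.datanode.hostname=true \
  your_app.py   # hypothetical application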

FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to localhost/127.0.0.1:54310 failed

I am using Ubuntu 12.04, hadoop-0.23.5, hive-0.9.0.
I pointed my metastore_db to a separate location, $HIVE_HOME/my_db/metastore_db, in hive-site.xml.
Hadoop runs fine; jps shows ResourceManager, NameNode, DataNode, NodeManager, SecondaryNameNode.
Hive starts perfectly, metastore_db and derby.log are also created, and all Hive commands run successfully; I can create databases, tables, etc. But a few days later, when I run show databases or show tables, I get the error below:
FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused) FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I had this problem too and the accepted answer did not help me, so I will add my solution here for others:
My problem was that I had a single machine with a pseudo-distributed setup and Hive installed on it. It was working fine with localhost as the host name. However, when we decided to add multiple machines to the cluster, we also decided to give the machines proper names: "machine01, machine 02 etc etc".
I changed all the hadoop conf/*-site.xml files and the hive-site.xml file too, but still had the error. After exhaustive research I realized that Hive was picking up the URIs not from the *-site files, but from the metastore tables in MySQL. All the Hive table metadata is saved in two tables, SDS and DBS. After changing the DB_LOCATION_URI column and LOCATION in the tables DBS and SDS respectively to point to the latest NameNode URI, I was back in business.
Hope this helps others.
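For anyone in the same situation, the kind of update I mean looks roughly like this against a MySQL-backed metastore. This is only a sketch: the SDS/DBS tables and DB_LOCATION_URI/LOCATION columns are the ones named above, while the metastore database name, credentials, and the old/new URIs are assumptions for your setup; back up the metastore before touching it:
mysqldump -u root -p metastore > metastore_backup.sql   # back up the metastore first
mysql -u root -p metastore -e "
UPDATE DBS SET DB_LOCATION_URI = REPLACE(DB_LOCATION_URI, 'hdfs://localhost:54310', 'hdfs://machine01:54310');
UPDATE SDS SET LOCATION = REPLACE(LOCATION, 'hdfs://localhost:54310', 'hdfs://machine01:54310');
"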
Possible reasons for this:
If you changed your Hadoop/Hive version, you may be specifying the previous Hadoop version (which has fs.default.name=hdfs://localhost:54310 in core-site.xml) in your hive-0.9.0/conf/hive-env.sh file.
$HADOOP_HOME may point to some other location.
The specified version of Hadoop is not working.
Your NameNode may be in safe mode; run bin/hdfs dfsadmin -safemode leave or bin/hadoop dfsadmin -safemode leave.
In the case of a fresh installation,
the above problem can be the effect of a NameNode issue.
Try formatting the NameNode using the command:
hadoop namenode -format
1. Take your NameNode out of safe mode. Try the command below:
hadoop dfsadmin -safemode leave
2. Restart your Hadoop daemons:
sudo service hadoop-master stop
sudo service hadoop-master start
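If your Hadoop was installed from the Apache tarballs rather than as a packaged service, the equivalent restart is roughly this, assuming $HADOOP_HOME/sbin is on your PATH:
stop-dfs.sh && stop-yarn.sh
start-dfs.sh && start-yarn.sh
hdfs dfsadmin -safemode get   # confirm safe mode is OFF after the restart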
