I have installed Ambari with Hadoop on my cluster.
I'm behind a proxy, but I had no problems during the installation.
When I start all services in Ambari, I get these errors:
HDFS DataNode Process -> Connection failed: [Errno 111] Connection refused to 0.0.0.0:50010
HDFS DataNode Web UI -> Connection failed to http://testslave3.ingc.com:50075
I have tried many things, but I can't figure out where the problem is.
Any suggestions would be appreciated, thanks a lot :)
Are you using MySQL as the DB repository? If so, after I upgraded to the latest mysql-connector-java and registered it with ambari-server, I no longer had this issue:
Get the mysql-connector-java RPM:
wget ftp://rpmfind.net/linux/fedora/linux/development/rawhide/x86_64/os/Packages/m/mysql-connector-java-5.1.36-1.fc23.noarch.rpm
Install the RPM:
rpm -ivh mysql-connector-java-5.1.36-1.fc23.noarch.rpm
Confirm that the .jar is in the Java share directory:
ls /usr/share/java/mysql-connector-java.jar
Modify ambari-server to use this connector:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
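After registering the driver, the Ambari server usually needs to be restarted so the new connector is actually picked up:
ambari-server restart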
In a normal Docker environment, my clustered HDFS images (hadoop-master and hadoop-slave) work fine. But when I try to run these images in swarm mode, I run into connectivity issues. Is clustered HDFS compatible with Docker Swarm?
The service I deployed keeps restarting and exiting every 2-3 seconds.
Can someone help me, in detail, to implement HDFS clustering in swarm mode?
When I run docker logs <container_id>, I get:
start sshd...
/bin/sh: 0: Can't open /bin/which
/etc/init.d/ssh: 424: .: Can't open /lib/lsb/init-functions.d/20-left-info-blocks
start serf...
Error connecting to Serf agent: dial tcp 127.0.0.1:7373: connection refused
Obviously, the image has neither /bin/which nor LSB support installed.
Install all the prerequisites first.
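Assuming the image is Debian/Ubuntu-based (the /etc/init.d/ssh path suggests it is), installing the missing pieces inside the image could look something like this:
apt-get update && apt-get install -y debianutils lsb-base
# debianutils provides /bin/which; lsb-base provides /lib/lsb/init-functions
Bake this into the Dockerfile as a RUN step and rebuild the image before redeploying the service.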
When I restarted my cluster, Ambari didn't start because a database consistency check failed:
sudo service ambari-server restart --skip-database-check
Using python /usr/bin/python
Restarting ambari-server
Waiting for server stop...
Ambari Server stopped
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari Server is starting with the database consistency check skipped. Do not make any changes to your cluster topology or perform a cluster upgrade until you correct the database consistency issues. See "/var/log/ambari-server/ambari-server-check-database.log" for more details on the consistency issues.
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.....................
DB configs consistency check failed. Run "ambari-server start --skip-database-check" to skip. You may try --auto-fix-database flag to attempt to fix issues automatically. If you use this "--skip-database-check" option, do not make any changes to your cluster topology or perform a cluster upgrade until you correct the database consistency issues. See /var/log/ambari-server/ambari-server-check-database.log for more details on the consistency issues.
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process has stopped. Please check the logs for more information.
I looked at "/var/log/ambari-server/ambari-server-check-database.log" and saw:
2017-08-23 08:16:13,445 INFO - Checking Topology tables
2017-08-23 08:16:13,447 ERROR - Your topology request hierarchy is not complete for each row in topology_request should exist at least one raw in topology_logical_request, topology_host_request, topology_host_task, topology_logical_task.
I tried both the --auto-fix-database and --skip-database-check options, but neither worked.
It turned out that PostgreSQL had not started correctly. Even though the Ambari log never said that PostgreSQL was down or unavailable, it was odd that Ambari couldn't access the topology configuration stored in it.
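A quick way to confirm that hunch before restarting anything is to check whether PostgreSQL is actually up and accepting connections, for example:
sudo service postgresql status
sudo -u postgres psql -c 'SELECT 1;'   # fails if the server is not accepting connections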
sudo service postgresql restart
Stopping postgresql service: [ OK ]
Starting postgresql service: [ OK ]
It did the trick:
sudo service ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Ambari Server is not running
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.........
Server started listening on 8080
I just started to learn Hadoop with the Hortonworks sandbox.
I have HDP 2.3 on VirtualBox and, in the VM settings, I have a Bridged Adapter network and a NAT.
When I start the machine everything is OK: I can run some Hadoop commands and I can connect to Ambari at 127.0.0.1:8080.
But when I run the script /etc/lib/hue/tools/start_scripts/gen_hosts.sh
to generate hosts with a different IP address, everything goes wrong and I can't execute a simple Hadoop command like hadoop fs -ls /user.
I get this error:
ls: Call From sandbox.hortonworks.com/10.24.244.85 to sandbox.hortonworks.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
As I said, I just started learning Hadoop and I am not a network expert, so I will appreciate any help.
Thank you.
I found that you have to restart the services (HDFS, MapReduce, HBase, ...) from Ambari when you generate a host.
Hope this helps someone.
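If you would rather script the restart than click through the Ambari UI, the Ambari REST API can stop and start a service. A rough sketch, assuming default admin/admin credentials, Ambari on localhost:8080 and a cluster named Sandbox (adjust all three to your environment):
# stop HDFS
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop HDFS"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services/HDFS
# start HDFS again
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start HDFS"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services/HDFS
The same pattern works for the other services (MAPREDUCE2, HBASE, ...) by changing the service name at the end of the URL.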
Turn on the NameNode / HDFS service and it will work.
I have created a Vagrant virtual machine and installed Hadoop on it.
It is only a single-server cluster.
But when I try to start Hadoop on the machine, it gives the following error:
mkdir: Call From master/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Any idea? The machine is named master and runs Ubuntu.
Thanks!
This is because the HDFS daemons are not running. Go to the sbin directory and start them:
cd $HADOOP_HOME/sbin
./start-all.sh
This will start all the processes.
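To verify it worked, jps should now show the HDFS daemons (NameNode, DataNode and SecondaryNameNode on a single-node setup), and a simple HDFS command should no longer throw a ConnectException:
jps
$HADOOP_HOME/bin/hdfs dfs -ls /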
I'm trying to work through a Hive tutorial in which I enter the following:
load data local inpath '/usr/local/Cellar/hive/0.11.0/libexec/examples/files/kv1.txt' overwrite into table pokes;
This results in the following error:
FAILED: RuntimeException java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
I see that there are some existing answers having to do with configuring my IP address and localhost, but I'm not familiar with the concepts in those answers. I'd appreciate anything you can tell me about the fundamentals of what causes this kind of error and how to fix it. Thanks!
This is because Hive is not able to contact your NameNode.
Check whether your Hadoop services have started properly.
Run the command jps to see which services are running.
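For reference, on a pseudo-distributed Hadoop 2.x setup with YARN (an assumption about this environment), a healthy jps listing includes roughly these daemons:
jps
# expect: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
# if NameNode or DataNode is missing, start HDFS with $HADOOP_HOME/sbin/start-dfs.sh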
The reason you get this error is that Hive needs Hadoop as its base, so you need to start Hadoop first.
Here are the steps:
Step 1: download Hadoop and unzip it
Step 2: cd #your_hadoop_path
Step 3: ./bin/hadoop namenode -format
Step 4: ./sbin/start-all.sh
Then go back to #your_hive_path and start Hive again.
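Once the daemons are up, a quick sanity check against the NameNode (which, per the error above, is expected at localhost:9000) is:
./bin/hadoop fs -ls /
If that lists the HDFS root without a ConnectException, Hive should be able to connect as well.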
An easy way I found is to edit the /etc/hosts file. By default it looks like:
127.0.0.1 localhost
127.0.1.1 user_user_name
Just edit it and change 127.0.1.1 to 127.0.0.1. That's it: restart your shell and restart your cluster with start-all.sh.
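After the edit the file should look like this (user_user_name stands for whatever hostname your machine uses):
127.0.0.1 localhost
127.0.0.1 user_user_name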
I had the same problem when setting up Hive.
It was solved by changing my /etc/hostname.
It was formerly my user_machine_name;
after I changed it to localhost, everything went well.
I guess this is because Hadoop wants to resolve your hostname using the /etc/hostname file, but that pointed it at your user_machine_name while the Hadoop service was running on localhost.
I was able to resolve the issue by executing the command below:
start-all.sh
This ensures that the Hadoop daemons have started.
After that, starting Hive was straightforward.
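Note that on recent Hadoop releases start-all.sh is deprecated; the equivalent with the non-deprecated scripts is:
start-dfs.sh
start-yarn.sh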
I had a similar problem with a connection timeout:
WARN DFSClient: Failed to connect to /10.165.0.27:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection timed out: no further information
The DFSClient was resolving the DataNodes by their internal IPs. Here's the solution for this:
.config("spark.hadoop.dfs.client.use.datanode.hostname", "true")