I am trying to configure a Hadoop cluster on CentOS 6.x virtual machines. I have set up a single-node Hadoop cluster first, planning to clone it later to form the full cluster, but I am confused about configuring a static IP address for my virtual Hadoop cluster. My ifcfg-eth0 currently looks like this:
DEVICE=eth0
TYPE=Ethernet
UUID=892c57f5-17db-486d-b1b9-97efa8799bf0
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=dhcp
HWADDR=00:0C:29:5C:04:D0
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
Can anyone help me configure a static address for my virtual Hadoop cluster? Also, I am not able to ping any hostname other than localhost, although I can ping hosts by IP address. Please help me resolve these ping and static-address issues.
A bridged network will help. In VirtualBox, select the machine, open Settings > Network, and attach the adapter to "Bridged Adapter".
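If you prefer the command line on the host, the same setting can be applied with VBoxManage; this is only a sketch, and the VM name "HadoopNode1" and the host interface "eth0" are placeholder values you would replace with your own:
# run on the host while the VM is powered off: switch the VM's first NIC to bridged mode
VBoxManage modifyvm "HadoopNode1" --nic1 bridged --bridgeadapter1 eth0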
Also, the following is an example of the file you are editing:
/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.1.200
NETMASK=255.255.255.0
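After editing the file, restart the network service on the CentOS 6 guest so the new address takes effect, and verify it. The commands below are a sketch of the usual CentOS 6 workflow:
# apply the new settings
service network restart
# check the address actually assigned to eth0 (inet addr should show 10.0.1.200)
ifconfig eth0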
There are other requirements. The link below might help:
http://blog.cloudera.com/blog/2014/01/how-to-create-a-simple-hadoop-cluster-with-virtualbox/
For internet access, the files listed below need to be modified.
/etc/resolv.conf
#Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
Also, the file you modified for the static IP, /etc/sysconfig/network-scripts/ifcfg-eth0, needs the following additions:
DNS1=8.8.8.8
DNS2=8.8.4.4
ONBOOT=yes
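Putting the pieces together, a complete ifcfg-eth0 for a static address with working DNS could look roughly like the sketch below. The IP, netmask, and gateway are example values you must adapt to your own bridged network, and setting NM_CONTROLLED=no is an assumption here so that NetworkManager does not overwrite the static settings:
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example values)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=10.0.1.200
NETMASK=255.255.255.0
GATEWAY=10.0.1.1
DNS1=8.8.8.8
DNS2=8.8.4.4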
Related
I am using Ubuntu 11.10 and have set up different virtual hosts on different IPs, like
127.0.0.2 www.example.local
127.0.0.3 www.wordpress.local
... etc
I want to test these websites in IE9. For this I have installed Windows 7 in Oracle VirtualBox, modified the hosts file of Windows 7 by adding the line 10.0.2.2 localhost at the end of the file, and also created a new bridged adapter.
After all this setup I can access Ubuntu's localhost from Windows 7, but I have problems accessing Ubuntu's virtual hosts.
Please help me access these virtual hosts from Windows 7. I have already searched Google, the Ubuntu forums and Stack Overflow for this but didn't find the right answer.
If I understand you correctly, you want the virtual adapter's IP, i.e. the gateway IP.
Just add the lines you added to Ubuntu's hosts file to the hosts file in your Windows 7 VM, replacing 127.0.0.x with 10.0.2.2; that should make your Windows VM connect to your Ubuntu host for requests to those hosts. This is usually sufficient, unless your HTTP server is configured to serve www.example.local only for connections to 127.0.0.2, and similarly for www.wordpress.local.
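For reference, this is roughly what the added entries in the Windows 7 VM's hosts file (C:\Windows\System32\drivers\etc\hosts) would look like, assuming 10.0.2.2 really is the host address as seen from the guest:
10.0.2.2 www.example.local
10.0.2.2 www.wordpress.local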
Can anyone provide Cloudera Manager 4.1 Free Version help with instructions on resolving hosts in EC2?
I have installed Cloudera Manager 4.1 Free Version on an EC2 m1.large instance. When I search for hosts using the external host name (dn1.example.com), it comes up correctly and installs the packages correctly. But, upon inspection, it does not come up. The only server that comes up is the server where Cloudera Manager is installed (ip-#-#-#-136.ec2.internal). I even tried to use the other host names for dn1 (ec2-#-#-#-47.compute-1.amazonaws.com, ip-#-#-#-152.ec2.internal) in the host searches. Both install successfully but don't show up in inspection. I'm at a loss.
Our admin has toyed around with /etc/hosts, /etc/resolv.conf, /etc/sysconfig/network. No combinations seem to work.
If there is an expert out there who can help, please can you explain what to do?
Very grateful,
Ben
I was able to solve the issue of Cloudera successfully adding a host but not showing it in the host list with the following changes.
On all the host machines (including the one running Cloudera Manager), add similar contents to the /etc/hosts file (note that I have removed the default localhost.localdomain entry):
127.0.0.1 localhost
::1 localhost6.localdomain6 localhost6
X.X.X.X master.cloudera.mydomain master
Y.Y.Y.Y slave1.cloudera.mydomain slave1
Add this extra search entry to /etc/resolv.conf (keep the default search entry as it is):
search cloudera.mydomain
Restart the system or the network interface for the changes to take effect:
/etc/rc.d/init.d/network restart
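Before trying to add the hosts again, it is worth verifying name resolution on each machine; a quick sanity check, assuming the example names above, would be:
hostname -f                          # should print the FQDN, e.g. master.cloudera.mydomain
ping -c 1 slave1.cloudera.mydomain   # should resolve to Y.Y.Y.Y and get a reply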
Now try adding both machines. It worked for me.
Source: Cloudera Manager fails to add hosts
I'm an ultra-noob. I have a server machine running CDH3u1 in pseudo-distributed mode, and a client machine with a Java application using the CDH3u1 API.
How do I configure the client to talk to the server? I've been googling for hours and couldn't find where the "client configuration" file is. The "hdfs-default", "core-default" and "mapred-default" files and their "-site" counterparts all look like server (namenode and datanode) config to me.
Is it just "multipurpose client server" config and I should cherry-pick the attributes in these files that are appropriate to the client? which are they? probably missing something big here...
Thanks, Ido
Make sure that the client machine can reach the Hadoop server machine's IP. If you use VirtualBox for the Hadoop server (CDH3 VM), then add a "host-only" network interface (see details here: host-only networking with VirtualBox). I'm assuming that the static IP of your Hadoop server is 192.168.56.101 and that you're able to ping it from your client.
Configure a hostname for your Hadoop server machine on both the server and client machines. If you want to name your Hadoop server "local-elephant", add the following line to /etc/hosts on both machines: 192.168.56.101 local-elephant.
On the server machine, go to /etc/hadoop/conf and change the values of the following properties from "localhost" to "local-elephant": in core-site.xml the value of fs.default.name, and in mapred-site.xml the value of mapred.job.tracker.
On the client machine, create core-site.xml and mapred-site.xml on the classpath of your Java application. In those files, put only the fs.default.name and mapred.job.tracker properties.
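For example, the client-side files could look like the sketch below. The ports 8020 and 8021 are the usual CDH defaults and are an assumption here, so use whatever ports your namenode and jobtracker actually listen on:
core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://local-elephant:8020</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>local-elephant:8021</value>
  </property>
</configuration>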
I have a network with some weird (as I understand) DNS server which causes Hadoop or HBase to malfunction.
It resolves my hostname to some address my machine doesn't know about (i.e. there is no such interface).
Hadoop does work if I have following entries in /etc/hosts:
127.0.0.1 localhost
127.0.1.1 myhostname
If entry "127.0.1.1 myhostname" is not present uploading file to HDFS fails and complains that it can replicate the file only to 0 datanodes instead of 1.
But in this case HBase does not work: creating a table from the HBase shell causes a NotAllMetaRegionsOnlineException (actually caused by the HMaster trying to bind to the wrong address returned by the DNS server for myhostname).
On another network, I am using the following /etc/hosts:
127.0.0.1 localhost
192.168.1.1 myhostname
And both Hadoop and HBase work.
The problem is that on the second network the address is dynamic, so I can't list it in /etc/hosts to override the result returned by the weird DNS.
Hadoop is run in pseudo-distributed mode. HBase also runs on single node.
Changing behavior of DNS server is not an option.
Changing "localhost" to 127.0.0.1 in hbase/conf/regionservers doesn't change anything.
Can somebody suggest a way to override its behavior while retaining the internet connection (I actually work on the client's machine through TeamViewer)? Or some way to configure HBase (or the ZooKeeper it is managing) not to use the hostname to determine the address to bind to?
Luckily, I've found a workaround for this DNS server problem.
The DNS server returned an invalid address when queried with the local hostname.
By default, HBase does a DNS lookup on the local hostname to determine where to bind.
Because the address returned by the DNS server was invalid, the HMaster wasn't able to bind.
Workaround:
In hbase/conf/hbase-site.xml explicitly specify interfaces that will be used for master and regionserver:
<configuration>
  <property>
    <name>hbase.master.dns.interface</name>
    <value>lo</value>
  </property>
  <property>
    <name>hbase.regionserver.dns.interface</name>
    <value>lo</value>
  </property>
</configuration>
In this case, I specified loopback interface (lo) to be used for both master and regionserver.
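To confirm the workaround took effect, you can check which address the master actually bound to after restarting HBase. Assuming the default HBase master port of 60000, something like the following should show the loopback address rather than the bogus one returned by DNS:
netstat -ltn | grep 60000    # expect 127.0.0.1:60000 in the listening sockets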
A simple tool I wrote to check for DNS issues:
https://github.com/sujee/hadoop-dns-checker
I'm setting up Hadoop (0.20.2). For starters, I just want it to run on a single machine - I'll probably need a cluster at some point, but I'll worry about that when I get there. I got it to the point where my client code can connect to the job tracker and start jobs, but there's one problem: the job tracker is only accessible from the same machine that it's running on. I actually did a port scan with nmap, and it shows port 9001 open when scanning from the Hadoop machine, and closed when it's from somewhere else.
I tried this on three machines (one Mac, one Ubuntu, and an Ubuntu VM running in VirtualBox), and it's the same on all of them. None of them have any firewalls set up, so I'm pretty sure it's a Hadoop problem. Any suggestions?
In your Hadoop configuration files, do fs.default.name and mapred.job.tracker refer to localhost?
If so, then Hadoop will only listen on ports 9000 and 9001 on the loopback interface, which is inaccessible from any other host. Make sure fs.default.name and mapred.job.tracker refer to your machine's externally accessible hostname.
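For instance, using "your-hostname" as a placeholder for your machine's externally reachable name, the relevant entries would look roughly like this instead of pointing at localhost:
core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://your-hostname:9000</value>   <!-- instead of hdfs://localhost:9000 -->
</property>
mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>your-hostname:9001</value>          <!-- instead of localhost:9001 -->
</property>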
Make sure that you have not double listed your master in the /etc/hosts file.
I had the following, which only allowed the master to listen on 127.0.1.1:
127.0.1.1 hostname master
192.168.x.x hostname master
192.168.x.x slave-1
192.168.x.x slave-2
The configuration above caused the problem. I changed my /etc/hosts file to the following to make it work:
127.0.1.1 hostname
192.168.x.x hostname master
192.168.x.x slave-1
192.168.x.x slave-2
Use the command netstat -an | grep :9000 to verify your connections are working!
In addition to the above answer, I found that /etc/hosts on the master (running Ubuntu) had the line:
127.0.1.1 master
This meant that running nslookup master on the master returned a local address, so despite using master in mapred-site.xml I suffered the same problem. My solution (there are probably better ones) was to create an alias in my DNS server and use that instead. I guess you could probably also change the IP address in /etc/hosts to the external one, but I haven't tried this; I'm not sure what implications it would have for other services.
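For completeness, the hosts-file variant of the fix (which, as noted, I have not tested) would look roughly like this on the master, with 192.168.x.x standing in for its external address:
# replace the 127.0.1.1 line with the machine's external address
192.168.x.x master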