How to resolve hosts using Cloudera Manager 4.1 installed on EC2 - hadoop

Can anyone provide help with instructions for resolving hosts in EC2 with Cloudera Manager 4.1 Free Version?
I have installed Cloudera Manager 4.1 Free Version on an EC2 m1.large instance. When I search for hosts using the external hostname (dn1.example.com), the host comes up correctly and the packages install correctly. But on the host inspection step, it does not show up; the only server that shows up is the one where Cloudera Manager is installed (ip-#-#-#-136.ec2.internal). I even tried using the other hostnames for dn1 (ec2-#-#-#-47.compute-1.amazonaws.com, ip-#-#-#-152.ec2.internal) in the host searches. Both install successfully but don't show up in inspection. I'm at a loss.
Our admin has toyed around with /etc/hosts, /etc/resolv.conf, /etc/sysconfig/network. No combinations seem to work.
If there is an expert out there who can help, could you please explain what to do?
Very grateful,
Ben

I was able to solve the issue of Cloudera Manager successfully adding a host but not showing it in the host list with the following changes.
On all of the host machines (including the one running Cloudera Manager), add similar contents to the /etc/hosts file (note that I removed the default localhost.localdomain entry):
127.0.0.1 localhost
::1 localhost6.localdomain6 localhost6
X.X.X.X master.cloudera.mydomain master
Y.Y.Y.Y slave1.cloudera.mydomain slave1
Add this extra search entry to /etc/resolv.conf (keep the default search entry as it is):
search cloudera.mydomain
Restart the system or the network interface for the changes to take effect:
/etc/rc.d/init.d/network restart
Now, try adding both machines. It worked for me.
Source: Cloudera Manager fails to add hosts
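A quick way to verify the name setup on each machine before re-adding the hosts (the hostnames here are the example ones from above):
hostname -f                  # should print the FQDN, e.g. master.cloudera.mydomain
getent hosts master slave1   # should resolve the short names via /etc/hosts
ping -c 1 slave1             # short names resolve thanks to the search entry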

Related

No access to web or mysql on vagrant after upgrading to macos monterey

Last week I decided to upgrade my Mac to the latest version, Monterey. Most things work, except for Vagrant. Well, it works, except there is almost no connection to the server.
vagrant ssh works.
I have been able to launch VirtualBox, but access over HTTP or to MySQL is not happening.
I know the MySQL server is running. The same goes for the Apache server.
I have checked the logs and I cannot see any traffic going to the server.
Ping is not working.
I have updated VirtualBox. I have destroyed the box and upgraded Vagrant / Homestead. Still no luck.
MORE INFO:
When I run traceroute, I see that the first hop is the correct IP I have set in the hosts file. Then it goes to 192.168.0.1, which doesn't lead anywhere.
I guess the 192.168.0.1 comes from the Mac that VirtualBox / Vagrant is running on.
Any pointers on what to do next are welcome.
Probably the same problem as mine (I could no longer use the IP 192.168.10.10). VirtualBox made some changes recently (starting with VirtualBox 6.1.28, I think), and new configuration is needed to use your preferred IP address (192.168.0.1):
On Linux, macOS, and Solaris, Oracle VM VirtualBox will only allow IP addresses in the 192.168.56.0/21 range to be assigned to host-only adapters. For IPv6, only link-local addresses are allowed. If other ranges are desired, they can be enabled by creating /etc/vbox/networks.conf and specifying the allowed ranges there. For example, to allow the 10.0.0.0/8 and 192.168.0.0/16 IPv4 ranges as well as the 2001::/64 range, put the following lines into /etc/vbox/networks.conf:
* 10.0.0.0/8 192.168.0.0/16
* 2001::/64
You can find the full documentation here.
Alternatively (skipping the networks.conf configuration), you can use any IP from the range supported by default, for instance 192.168.56.10.
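A minimal sketch of the networks.conf route on macOS, assuming you want to keep the 192.168.0.0/16 range from the example above:
sudo mkdir -p /etc/vbox
printf '* 192.168.0.0/16\n' | sudo tee /etc/vbox/networks.conf
# Quit and restart VirtualBox, then `vagrant reload`, so the new range is picked up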

installation failed on all hosts cloudera manager on Debian 8.4

I tried to follow the instructions mentioned here, but I still have the same problem: the hosts are still unknown to CDH and the error is:
Installation failed. Failed to receive heartbeat from agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager server (check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check agent logs in /var/log/cloudera-scm-agent/ on the host being added (some of the logs can be found in the installation details).
By the way, I'm using the Debian 8.4 (jessie) distribution as the operating system.
Can you help me resolve this problem, please?
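One way to work through that checklist from a shell, assuming the default ports and log paths named in the error message (cm-server.example.com is a placeholder for your Cloudera Manager host):
# On the host being added: the hostname must resolve to a routable address
hostname -f
getent hosts $(hostname -f)
# Check that the Cloudera Manager server is reachable on port 7182
nc -zv cm-server.example.com 7182
# Check that ports 9000 and 9001 are free on this host
netstat -tlnp | grep -E ':900[01]'
# Inspect the agent logs (the exact filename can vary by version)
tail -n 50 /var/log/cloudera-scm-agent/cloudera-scm-agent.log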

Can't ping Hadoop hostname

I am trying to configure a Hadoop cluster installed on CentOS 6.x virtual machines. I configured a single-node Hadoop cluster first, planning to replicate it later to form the full cluster, but I am confused about configuring a static IP address for my virtual Hadoop cluster. My ifcfg-eth0 currently looks like this:
DEVICE=eth0
TYPE=Ethernet
UUID=892c57f5-17db-486d-b1b9-97efa8799bf0
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=dhcp
HWADDR=00:0C:29:5C:04:D0
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
Can anyone help me configure a static address for my virtual Hadoop cluster? Also, I am not able to ping any hostname other than localhost, although I can ping the host's IP address. Please help me resolve these ping and static-address issues.
A bridged network will help. In VirtualBox, select the machine, then Settings > Network.
Also, here is an example of the file you are editing:
/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.1.200
NETMASK=255.255.255.0
There are other requirements. The link below might help:
http://blog.cloudera.com/blog/2014/01/how-to-create-a-simple-hadoop-cluster-with-virtualbox/
For internet access, the files listed below need to be modified.
/etc/resolv.conf
#Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
The file you modified for the static IP, /etc/sysconfig/network-scripts/ifcfg-eth0, also needs the following additions:
DNS1=8.8.8.8
DNS2=8.8.4.4
ONBOOT=yes
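Putting the two edits together, a complete static-IP ifcfg-eth0 might look like the sketch below (IPADDR, NETMASK, and GATEWAY are example values for an assumed 10.0.1.0/24 network; keep your own HWADDR and UUID lines), followed by a network restart:
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.1.200
NETMASK=255.255.255.0
GATEWAY=10.0.1.1
DNS1=8.8.8.8
DNS2=8.8.4.4

service network restart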

How to set up Virtual Hosting on a Mac (local)

How can I set up virtual hosting on my Mac? It should be as easy as adding a 127.0.0.1 line to the hosts file and editing the Apache configuration, but for some reason it doesn't work for me...
And no, MAMP is not an option, since you can't set up vhosts in MAMP (unless you buy MAMP Pro...).
Any advice?
It works just fine; I have several vhosts set up on my OS X box under Apache. Make sure that you are using name-based virtual hosting, and, if you are using the Apache distributed with OS X with the vhosts config in a separate file (e.g. /etc/apache2/other/httpd-vhosts.conf), make sure that file is actually getting included. You can also run apachectl -t -D DUMP_VHOSTS to see whether the virtual hosts are actually getting defined.
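For reference, a minimal name-based setup along those lines (mysite.local and the DocumentRoot path are placeholders). Add the name to /etc/hosts:
127.0.0.1 mysite.local
And a matching block in the included vhosts file (e.g. /etc/apache2/other/httpd-vhosts.conf; the NameVirtualHost line is needed on Apache 2.2):
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot "/Users/you/Sites/mysite"
</VirtualHost>
Then restart Apache with sudo apachectl restart and re-run the DUMP_VHOSTS check.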

Hadoop job tracker only accessible from localhost

I'm setting up Hadoop (0.20.2). For starters, I just want it to run on a single machine - I'll probably need a cluster at some point, but I'll worry about that when I get there. I got it to the point where my client code can connect to the job tracker and start jobs, but there's one problem: the job tracker is only accessible from the same machine that it's running on. I actually did a port scan with nmap, and it shows port 9001 open when scanning from the Hadoop machine, and closed when it's from somewhere else.
I tried this on three machines (one Mac, one Ubuntu, and an Ubuntu VM running in VirtualBox), and it's the same on all of them. None of them have any firewalls set up, so I'm pretty sure it's a Hadoop problem. Any suggestions?
In your Hadoop configuration files, do fs.default.name and mapred.job.tracker refer to localhost?
If so, then Hadoop will only listen on ports 9000 and 9001 on the loopback interface, which is inaccessible from any other host. Make sure fs.default.name and mapred.job.tracker refer to your machine's externally accessible hostname.
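For example, assuming the machine's externally resolvable name is master.example.com (a placeholder), the relevant entries in core-site.xml and mapred-site.xml for Hadoop 0.20.x would be:
<!-- core-site.xml -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://master.example.com:9000</value>
</property>
<!-- mapred-site.xml -->
<property>
  <name>mapred.job.tracker</name>
  <value>master.example.com:9001</value>
</property>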
Make sure that you have not double listed your master in the /etc/hosts file.
I had the following, which only allowed the master to listen on 127.0.1.1:
127.0.1.1 hostname master
192.168.x.x hostname master
192.168.x.x slave-1
192.168.x.x slave-2
The double listing above caused the problem. I changed my /etc/hosts file to the following to make it work:
127.0.1.1 hostname
192.168.x.x hostname master
192.168.x.x slave-1
192.168.x.x slave-2
Use the command netstat -an | grep :9000 to verify your connections are working!
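When the binding is correct, the listener should show your machine's address (or 0.0.0.0) rather than a loopback address, roughly like this:
netstat -an | grep :9000
# tcp   0   0 192.168.x.x:9000   0.0.0.0:*   LISTEN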
In addition to the above answer, I found that /etc/hosts on the master (running Ubuntu) had the line:
127.0.1.1 master
This meant that running nslookup master on the master returned a local address, so in spite of using master in mapred-site.xml I suffered the same problem. My solution (there are probably better ones) was to create an alias in my DNS server and use that instead. I guess you could also change the IP address in /etc/hosts to the external one, but I haven't tried this; I'm not sure what implications it would have for other services.
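For illustration, the alias approach could be a plain A record in the DNS server's zone file that always points at the external address (the name and the address are placeholders):
; alias that bypasses the 127.0.1.1 /etc/hosts entry on the master
hadoop-master    IN    A    192.168.1.10
Then use hadoop-master instead of master in mapred-site.xml.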
