Failed to resolve interface public in JBoss on EC2 - amazon-ec2

I am working with JBoss AS 7.1.1.Final on an Amazon EC2 (Red Hat) server. I changed the IP 127.0.0.1 to 52.32.0.197 (the public EC2 server IP), and whenever I run JBoss it throws:
Services which failed to start:service jboss.network.public:org.jboss.msc.service.StartException in service jboss.network.public: JBAS015810: failed to resolve interface public
After googling, I changed the entries in /etc/hosts, which currently looks like:
52.32.0.197 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Then I found this link and changed my "/etc/sysconfig/network-scripts/ifcfg-lo" to:
DEVICE=lo
IPADDR=52.32.0.197
NETMASK=255.0.0.0
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback-1
but I am still getting the same error. Please help me resolve this.
My standalone.xml contains
<interfaces>
<interface name="management">
<inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
</interface>
<interface name="public">
<inet-address value="${jboss.bind.address:52.32.0.197}"/>
</interface>
<!-- TODO - only show this if the jacorb subsystem is added -->
<interface name="unsecure">
<!--
~ Used for IIOP sockets in the standard configuration.
~ To secure JacORB you need to setup SSL
-->
<inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/>
</interface>
</interfaces>

I am not familiar with JBoss, but this is clearly a bad IP binding problem.
First, you must have a valid IP address; I am surprised you didn't mention an error thrown by the OS. Your public IP address cannot sit on the wrong network:
DEVICE=lo
IPADDR=52.32.0.197
NETMASK=255.255.255.0
NETWORK=52.32.0.0
Then comes the binding. As pointed out in the link JBAS015810: failed to resolve interface public:
This kind of error can occur if you happen to have specified bind
addresses for JAVA_OPTS in your configs in standalone.conf:
-Djboss.bind.address=192.168.xxx.xxx -Djboss.bind.address.management=192.168.xxx.xxx -Djboss.bind.address.unsecure=192.168.xxx.xxx
Open standalone.conf and change the IP addresses you see there (they should currently be 127.0.0.1) to 52.32.0.197, then restart.
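For example, the relevant lines in standalone.conf would end up looking something like this (a sketch; the exact shape and existing values of your file may differ):
# standalone.conf: bind every JBoss interface to the EC2 address
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address=52.32.0.197"
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address.management=52.32.0.197"
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address.unsecure=52.32.0.197"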

I had the same issue. I resolved it by updating my firewall settings, which had been blocking public IP access to the application.

Related

Hazelcast: unable to connect to any address in config

I have a hazelcast-client.xml and have configured a custom port, as I have a limitation on using the 5701 port:
<hazelcast-client>
<group>
<name>dev</name>
<password>dev-pass</password>
</group>
<network>
<cluster-members>
<address>135.46.61.34:28019</address>
</cluster-members>
<smart-routing>true</smart-routing>
<redo-operation>true</redo-operation>
<connection-attempt-limit>10</connection-attempt-limit>
</network>
</hazelcast-client>
Also, the server-side configuration in hazelcast.xml is:
<hazelcast>
<group>
<name>dev</name>
<password>dev-pass</password>
</group>
<instance-name>hzpunInstance1</instance-name>
<network>
<port auto-increment="true">28019</port>
</network>
<partition-group enabled="false" />
<executor-service name="default">
<pool-size>16</pool-size>
<!--Queue capacity. 0 means Integer.MAX_VALUE. -->
<queue-capacity>0</queue-capacity>
</executor-service>
</hazelcast>
The server is running in the cloud whereas the client is on another VM,
so when the client tries to connect to the Hazelcast server I get this error:
[8/18/16 10:36:23:982 GMT] 00000022 ServletWrappe E com.ibm.ws.webcontainer.servlet.ServletWrapper service SRVE0014E: Uncaught service() exception root cause appServlet: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.IllegalStateException: Unable to connect to any address in the config! The following addresses were tried:[/135.46.61.34:28019]
.........
Caused by: java.lang.IllegalStateException: Unable to connect to any address in the config! The following addresses were tried:[/135.46.61.34:28019]....
and so on.
Can anyone suggest what the fix could be, or where I am going wrong?
What I understand from your config is that the Hazelcast nodes (server side) are configured to use port 28019 with the auto-increment option activated. So the port actually in use is potentially anywhere between 28019 and 28119 (the default value of port-count is 100).
However, your client is only configured to try port 28019. There is no auto-increment option for the client; it only attempts to connect to the addresses specified in the client configuration (135.46.61.34:28019 in your case)... and fails.
If you are using auto-increment for your cluster, then you must explicitly add all possible addresses in the client conf. For example:
Server-side config
<port port-count="10" auto-increment="true">28019</port>
Client-side config
<cluster-members>
<address>135.46.61.34:28019</address>
<address>135.46.61.34:28020</address>
<address>135.46.61.34:28021</address>
<address>135.46.61.34:28022</address>
<address>135.46.61.34:28023</address>
<address>135.46.61.34:28024</address>
<address>135.46.61.34:28025</address>
<address>135.46.61.34:28026</address>
<address>135.46.61.34:28027</address>
<address>135.46.61.34:28028</address>
</cluster-members>
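Alternatively, if you build the client configuration in Java rather than XML, the same idea applies. This is a minimal sketch against the Hazelcast 3.x client API (the class name HzClientExample and the exact port range are illustrative):
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class HzClientExample {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();
        // Same group credentials as in the XML config above
        config.getGroupConfig().setName("dev").setPassword("dev-pass");
        // List every port the cluster may have auto-incremented to
        for (int port = 28019; port <= 28028; port++) {
            config.getNetworkConfig().addAddress("135.46.61.34:" + port);
        }
        config.getNetworkConfig()
              .setSmartRouting(true)
              .setRedoOperation(true)
              .setConnectionAttemptLimit(10);
        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
        System.out.println("Connected as " + client.getName());
    }
}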

Not able to connect to remote HiveServer2 using IP address

I am running the hive service hiveserver2 with the following config in hive-site.xml:
<property>
<name>hive.server2.thrift.bind.host</name>
<value>ip_address</value>
</property>
<property>
<name>hive.server2.authentication</name>
<value>NONE</value>
</property>
From my local (same machine) JDBC client I am able to connect to HiveServer2:
jdbc:hive2://ip_address:10000/default
However, when I run my JDBC client from another machine, I get a connection timeout.
Note that I am able to ping the hiveserver host at ip_address.
What could be the issue? Since I am using the IP address, do I still need to check the entries in /etc/hosts?
Issue resolved.
The issue was with the firewall; the hive configs were correct.
I disabled iptables for the time being:
# service iptables stop
The answer at the following link was helpful for generic connection issues:
java.net.ConnectException: connection timed out: connect?
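For this class of "connection timed out" problem, a bare TCP probe from the client machine can separate a firewall issue from a service issue before touching any Hive config. A small sketch (PortProbe is a made-up name; ip_address stands in for the real host, as in the question):
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class PortProbe {
    public static void main(String[] args) throws Exception {
        String host = "ip_address"; // replace with the HiveServer2 host
        int port = 10000;           // default HiveServer2 thrift port
        try (Socket socket = new Socket()) {
            // A timeout usually means a firewall is silently dropping packets;
            // a refusal means the host answered but nothing listens on the port.
            socket.connect(new InetSocketAddress(host, port), 3000);
            System.out.println("Port is reachable");
        } catch (SocketTimeoutException e) {
            System.out.println("Timed out: likely a firewall in the way");
        } catch (ConnectException e) {
            System.out.println("Refused: host reachable but no listener");
        }
    }
}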

JMX connection to tomcat on VirtualBox

I have enabled JMX on my Tomcat server with
-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost and I have a Spring app that exposes JMX beans with a JmxRemoteLifecycleListener:
<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener" rmiRegistryPortPlatform="10000" rmiServerPortPlatform="10001" />
When I run this Tomcat instance in VirtualBox (using Vagrant), I forward ports 10000 and 10001, but when I try to connect to the JMX service (tried with VisualVM and JRockit Mission Control), I am unable to connect. Is there any special configuration needed to connect, since it is running on VirtualBox?
You need to do the port forwarding with iptables. Just check whether the port is enabled in iptables.
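For what it's worth, once both ports are forwarded and open, you can sanity-check the JMX endpoint from plain Java before reaching for VisualVM. A sketch, assuming the fixed ports 10000 (RMI registry) and 10001 (RMI server) from the listener above:
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxProbe {
    public static void main(String[] args) throws Exception {
        // With JmxRemoteLifecycleListener both RMI ports are fixed, so the
        // URL can name the server port and the registry port explicitly.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi://localhost:10001/jndi/rmi://localhost:10000/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            System.out.println("Connected; MBeans visible: " + mbsc.getMBeanCount());
        }
    }
}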

HBase region server is not able to communicate with HMaster

I am not able to set up HBase in distributed mode. It works fine when I set it up on one machine (standalone mode). My ZooKeeper, HMaster and region server all start properly.
But when I go to the HBase shell and look at the status, it shows 0 region servers. I am attaching the logs of my region server, plus the hosts files of my master (namenode) and slave (datanode). I have tried every permutation and combination given on Stack Overflow for changing the hosts file, but nothing worked for me.
2013-06-24 15:03:45,844 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server naresh-pc/192.168.0.108:2181. Will not attempt to authenticate using SASL (unknown error)
2013-06-24 15:03:45,845 WARN org.apache.zookeeper.ClientCnxn: Session 0x13f75807d960001 for server null, unexpected error, closing socket connection and attempting to reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
Slave /etc/hosts :
127.0.0.1 localhost
127.0.1.1 ubuntu-pc
#ip for hadoop
192.168.0.108 master
192.168.0.126 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Master /etc/hosts :
127.0.0.1 localhost
127.0.1.1 naresh-pc
#ip for hadoop
192.168.0.108 master
192.168.0.126 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
hbase-site.xml :
<configuration>
<property>
<name>hbase.master</name>
<value>master:60000</value>
<description>The host and port that the HBase master runs at.
A value of 'local' runs the master and a regionserver
in a single process.
</description>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:54310/hbase</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed
Zookeeper true: fully-distributed with unmanaged Zookeeper
Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example,
"host1.mydomain.com,host2.mydomain.com".
By default this is set to localhost for local and
pseudo-distributed modes of operation. For a
fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If
HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop
ZooKeeper on.
</description>
</property>
</configuration>
Zookeeper log:
2013-06-28 18:22:26,781 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x13f8ac0b91b0002, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:722)
2013-06-28 18:22:26,858 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /192.168.0.108:57447 which had sessionid 0x13f8ac0b91b0002
2013-06-28 18:25:21,001 INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session 0x13f8ac0b91b0002, timeout of 180000ms exceeded
2013-06-28 18:25:21,002 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x13f8ac0b91b0002
Master Log:
2013-06-28 18:22:41,932 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 1502022 ms
2013-06-28 18:22:43,457 INFO org.apache.hadoop.hbase.master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 1503547 ms
Remove 127.0.1.1 from the hosts file and turn off IPv6. That should fix the problem.
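As a concrete way to keep the HBase JVMs off IPv6 (one interpretation of "turn off IPv6"; disabling it at the OS level varies by distro), the commonly recommended JVM flag can go in hbase-env.sh:
# hbase-env.sh: force the HBase daemons to use the IPv4 stack
export HBASE_OPTS="$HBASE_OPTS -Djava.net.preferIPv4Stack=true"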
Your region server is looking for HMaster at naresh-pc, but you do not have any such entry in the slave's /etc/hosts file. Please make sure your configuration is correct.
Can you try all of this:
Make sure your /conf/regionservers file has just one entry: slave
Not sure what HBase version you are using, but instead of using port 54310 for the hbase.rootdir property in your hbase-site.xml, use port 9000.
Your /etc/hosts file, on BOTH master and slave, should only have these custom entries:
127.0.0.1 localhost
192.168.0.108 master
192.168.0.126 slave
I am concerned that your logs state: Opening socket connection to server naresh-pc/192.168.0.108:2181
Clearly the system thinks that ZooKeeper is on host naresh-pc, but in your config you are setting the ZooKeeper quorum to host master, which HBase will bind to. That's a problem right there. In my experience, HBase is EXTREMELY fussy about host names, so make sure they are all in sync in all your configs and in your /etc/hosts file.
Also, this may be a minor issue, but it wouldn't hurt to specify the ZooKeeper data directory (hbase.zookeeper.property.dataDir) in your .xml file, to have a minimal set of settings that should make the cluster work.
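For example, a minimal entry could look like this (the /var/zookeeper path is just an illustration; point it at a directory that exists on your ZooKeeper host):
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/var/zookeeper</value>
<description>Directory where ZooKeeper stores its data.</description>
</property>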

HBase: How to specify hostname for HBase master

I'm struggling to set up an HBase distributed cluster with 2 nodes: one is my machine and one is a VM, using the "host-only" adapter in VirtualBox.
My problem is that the region server (on the VM) can't connect to the HBase master running on the host machine. Although in the HBase shell I can list, create tables, and so on, the region server log on the VM ('slave') always shows:
org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to connect to master. Retrying. Error was:
java.net.ConnectException: Connection refused
Previously, I successfully set up Hadoop, HDFS and MapReduce on this 2-node cluster, with the nodes named 'master' and 'slave': 'master' is the master node, and both 'master' and 'slave' work as slave nodes. These names are bound to the vboxnet0 interface of VirtualBox (the hostnames in /etc/hostname are different). I also specified the slave.host.name property for each node as 'master' and 'slave'.
It seems that the HBase master on 'master' always runs with the 'localhost' hostname; from the slave machine, I can't telnet to the HBase master using the 'master' hostname. So is there any way to specify the hostname the HBase master uses as 'master'? I've tried specifying some of the DNS interface properties for ZooKeeper, the Master and the RegionServer to use the internal interface between master and slave, but it still does not work.
/etc/hosts on both looks something like:
127.0.0.1 localhost
127.0.0.1 ubuntu.mymachine
# For Hadoop
192.168.56.1 master
192.168.56.101 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
The answer that #Infinity provided seems to apply to version ~0.9.4.
For version 1.1.4, according to the source code of org.apache.hadoop.hbase.master.HMaster, the configuration should be:
<property>
<name>hbase.master.hostname</name>
<value>master.local</value>
<!-- master.local is the DNS name in my network pointing to hbase master -->
</property>
After setting this value, region servers are able to connect to the HBase master;
however, in my environment the region server complained about:
com.google.protobuf.ServiceException: java.net.SocketException: Invalid argument
The problem disappeared after I installed Oracle JDK 8 instead of OpenJDK 7 on all of my nodes.
So in conclusion, here is my solution:
Use a DNS name server instead of /etc/hosts entries, as HBase is very picky about hostnames and seems to require forward as well as reverse DNS lookups.
Upgrade the JDK to Oracle JDK 8.
Use the configuration property mentioned above.
My hosts file is like:
127.0.0.1 localhost
192.168.2.118 shashwat.machine.com shashwat
Make your hosts file as follows:
127.0.0.1 localhost
# For Hadoop
192.168.56.1 master
192.168.56.101 slave
and in the HBase conf put the following entries:
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
</property>
<property>
<name>hbase.master</name>
<value>master:60000</value>
<description>The host and port that the HBase master runs at.</description>
</property>
<property>
<name>hbase.regionserver.port</name>
<value>60020</value>
<description>The port the HBase RegionServer binds to.</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>/home/cluster/Hadoop/hbase-0.90.4/temp</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
If you are using localhost anywhere, remove it and replace it with "master", which is the name of the namenode in your hosts file.
One more thing you can do:
sudo gedit /etc/hostname
This will open the hostname file; by default "ubuntu" will be there, so change it to master and restart your system.
For HBase, put these entries in the "regionservers" file inside the conf dir:
master
slave
and restart everything.
There are two things that fix this class of problem for me:
1) Remove all "localhost" names; have only 127.0.0.1 pointing to the name of the HMaster node.
2) Run "hostname X" on your HBase master node to make sure the hostname matches what is in /etc/hosts.
Not being a networking expert, I can't say why this is important, but it is :)
Most of the time the error comes from ZooKeeper handing out the wrong hostname. You can check what ZooKeeper reports as the HBase master host. From the ZooKeeper bin folder:
bin/zkCli.sh -server 127.0.0.1:2181
get /hbase/master
This should give you the HBase master address that ZooKeeper answers with, and that address must be accessible from the region servers.
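The same check can be scripted from Java with the plain ZooKeeper client. A sketch (ZkMasterCheck is a made-up name; note that recent HBase versions store a protobuf-serialized ServerName in this znode, so the raw bytes may be only partly human-readable):
import org.apache.zookeeper.ZooKeeper;

public class ZkMasterCheck {
    public static void main(String[] args) throws Exception {
        // Connect to the quorum and read the znode that names the active master
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 3000, event -> { });
        try {
            byte[] data = zk.getData("/hbase/master", false, null);
            System.out.println(new String(data));
        } finally {
            zk.close();
        }
    }
}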
