Apache Ignite - setting IP and Port on EC2

I installed Apache Ignite on EC2 and started an Ignite node with:
bin/ignite.sh examples/config/example-cache.xml
That worked fine on EC2, but I could not access the node from a remote host.
Then I changed the part of example-cache.xml under:
<!-- In distributed environment, replace with actual host IP address. -->
and added the IP of that EC2 instance followed by port 80 (which is open for that instance):
<value>x.x.x.x:80</value>
I restarted Ignite, but it was still not accessible from the remote host.
What is the correct way to enable remote access? Where exactly should the IP and port be specified so that the node is accessible from outside EC2?

If you use TcpDiscoveryMulticastIpFinder, you should add the addresses of all nodes that should be in the cluster, for example:
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
  <property name="addresses">
    <list>
      <value>127.0.0.1:47500..47509</value>
      <value>127.0.0.2:47500..47509</value>
    </list>
  </property>
</bean>
But on AWS you could also use TcpDiscoveryS3IpFinder, which was created specifically for Amazon S3; see the Ignite documentation for details.
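For reference, a minimal sketch of an S3-based discovery configuration (the bucket name is a placeholder, and "aws.creds" is assumed to be a com.amazonaws.auth.BasicAWSCredentials bean defined elsewhere in the same file):
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
  <property name="ipFinder">
    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
      <!-- Placeholder: reference to an AWS credentials bean defined elsewhere -->
      <property name="awsCredentials" ref="aws.creds"/>
      <!-- Placeholder: an S3 bucket all nodes can read and write -->
      <property name="bucketName" value="your-bucket-name"/>
    </bean>
  </property>
</bean>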
Also, all nodes in the cluster should have the same discovery configuration.

Related

Difference in telnet of amazon ec2 instance using internal and public IP

I have a 4-node Hadoop cluster on EC2. We have configured Hortonworks Hadoop (HDP version 2.4) through Ambari.
I have opened all traffic for all four instances internally and for the office's external IP.
Whenever I telnet within the cluster using the internal IP:
telnet <internal_ip> 2181
It is able to connect to the specific port my service (ZooKeeper) is running on.
When I use the public IP of the same instance (an Elastic IP) instead of the internal IP, I am not able to telnet either within the cluster or from my office IP:
telnet <elastic_ip> 2181
I have already configured the security group to allow all traffic. I am using Ubuntu 14.04. We are not using any firewall other than the AWS security group.
Please suggest how I can connect on this port using the Elastic IP/public IP of my instance.
Please find attached a screenshot of the EC2 security group.
Do you use the default VPC?
If not, check whether the VPC has an Internet Gateway, and check the route table (you need a route to the Internet Gateway) and the network ACLs.
The route table and network ACLs are applied at the subnet level.
The default VPC is configured to allow outside traffic; a newly created VPC is not.
Also, is the Elastic IP associated with the right network interface? An Elastic IP is attached to a specific network interface of an instance.
EDIT: you can take a look at the AWS documentation for a better explanation:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html
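To verify these from the AWS CLI (a sketch; the VPC ID below is a placeholder), look for a 0.0.0.0/0 route whose target is an igw-* Internet Gateway:
# List the route tables attached to the VPC and inspect their routes
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0abc123
# Confirm an Internet Gateway is attached to the VPC
aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=vpc-0abc123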

How to configure ports for hostname and localhost?

I am running a browser on a single-node Hortonworks Hadoop cluster (HDP 2.3.4) on CentOS 6.7.
With localhost:8000 and <hostname>:8000, I can access Hue. The same works for Ambari on 8080.
However, several other ports I can only reach via the hostname. For example, with <hostname>:50070 I can access the NameNode service, but with localhost:50070 I cannot establish a connection. So I assume localhost is blocked, but the NameNode is not.
How can I set things up so that localhost and <hostname> have the same port configuration?
This likely indicates that the NameNode HTTP server socket is bound to a single network interface, but not the loopback interface. The NameNode HTTP server address is controlled by configuration property dfs.namenode.http-address in hdfs-site.xml. Typically this specifies a host name or IP address, and this maps to a single network interface. You can tell it to bind to all network interfaces by setting property dfs.namenode.http-bind-host to 0.0.0.0 (the wildcard address, matching all network interfaces). The NameNode must be restarted for this change to take effect.
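For example, a minimal hdfs-site.xml entry would look like this:
<!-- Bind the NameNode HTTP server to all network interfaces -->
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>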
There are similar properties for other Hadoop daemons. For example, YARN has a property named yarn.resourcemanager.bind-host for controlling how the ResourceManager binds to a network interface for its RPC server.
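The YARN equivalent in yarn-site.xml:
<!-- Bind the ResourceManager servers to all network interfaces -->
<property>
  <name>yarn.resourcemanager.bind-host</name>
  <value>0.0.0.0</value>
</property>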
More details are in the Apache Hadoop documentation for hdfs-default.xml and yarn-default.xml. There is also full coverage of multi-homed deployments in HDFS Support for Multihomed Networks.

How to configure JBoss 7.1 on windows server 2008 AWS Amazon EC2 to access java application via a domain, elastic IP and Route 53?

How do I access a Java web application hosted on Amazon AWS EC2 (Windows Server 2008, JBoss 7.1) through an internet domain? I can access the application at localhost:8080/webcontent on the server, but I cannot set it up so the application is reachable externally via an internet domain.
I have already created security group rules: 80 (HTTP) open to 0.0.0.0/0, 3389 (RDP) open to 0.0.0.0/0, and 8080 (HTTP).
I've created an Elastic IP and associated it with my Windows Server 2008 instance.
I've already configured Route 53 for my domain and changed the DNS settings at the hosting service that manages the domain.
What else do I need to configure?
Someone help me please.
Thanks!
I've solved the problem.
I was able to access the application from outside the server by placing this configuration in JBoss's standalone.xml file:
Edit standalone/configuration/standalone.xml
<interfaces>
  <interface name="management">
    <inet-address value="127.0.0.1"/>
  </interface>
  <interface name="public">
    <any-address/>
  </interface>
</interfaces>
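Alternatively (a variant of the above, not something this answer used), JBoss AS 7 also accepts a bind address on the command line, which avoids editing standalone.xml:
REM Bind the public interface to all addresses at startup (Windows launcher)
standalone.bat -b 0.0.0.0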

Cannot access web app on 8080 on EC2

I created an instance on EC2 and installed JBoss. I edited the standalone.xml like so:
<interface name="management">
  <inet-address value="0.0.0.0"/>
</interface>
<interface name="public">
  <inet-address value="0.0.0.0"/>
</interface>
Also, I enabled port 8080 for incoming TCP traffic in iptables and added a rule to the EC2 security group via the EC2 management console.
I verified the deployment is working fine by logging in to the server via SSH and running:
lynx http://localhost:8080
I can see my web app running. But when I access the same from a browser using the public DNS Amazon gave me (<my public DNS>:8080), I don't see anything; the browser cannot find the server.
Do I absolutely need to have an EIP on EC2 mapped to my instance so that my web app is accessible via the Internet?
Any pointers in the right direction would be very helpful.
Thanks.
I figured out what the problem was. It was iptables. I stopped the service using:
service iptables stop
It worked!
I realized I don't need iptables running on my EC2 host, as Amazon's security groups already act as a firewall.
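If you would rather keep iptables running, a sketch of an explicit allow rule instead (RHEL/CentOS-style commands, matching the service syntax above):
# Insert an ACCEPT rule for inbound TCP on port 8080
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
# Persist the rule across restarts
service iptables save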
PS: I am not sure if this qualifies as an answer, but I wanted to put it here anyway as it might help others with similar issues.

Configuring a slave's hostname using internal IP - Multiple NICs

In my Hadoop environment, I need to configure my slave nodes so that, when they communicate in the middle of a map/reduce job, they use the internal IP instead of the external IP that they pick up from the hostname.
Is there any way to set up my Hadoop config files to specify that the nodes should communicate using the internal IPs instead of the external IPs? I've already used the internal IPs in my core-site.xml, masters, and slaves files.
I've done some research and have seen people mention the "slave.host.name" parameter, but which config file should I place this parameter in? Are there any other solutions to this problem?
Thanks!
The IP routing tables would have to be changed so that traffic between the Hadoop nodes uses a particular gateway; I don't think Hadoop has any setting to choose which gateway to use.
You can configure slave.host.name in mapred-site.xml on each slave node.
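A minimal sketch (the hostname is a placeholder and should resolve to the slave's internal IP):
<!-- mapred-site.xml on each slave node -->
<property>
  <name>slave.host.name</name>
  <value>slave1.internal.example.com</value>
</property>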
Also remember to use that hostname (instead of the IP) consistently in all other configuration files (core-site.xml, hdfs-site.xml, mapred-site.xml, masters, slaves) and in the /etc/hosts file.
