I've installed 5 nodes on a private segment of an Amazon VPC. I'm receiving the following error when the nodes start:
These notices occurred during the startup of this instance:
[ERROR] 09/23/15-13:48:03 sudo ntpdate pool.ntp.org:
[WARN] publichostname not available as metadata
[WARN] publichostname not available as metadata
I was able to reach out (through our NAT server) on port 80 to perform updates and log in to DataStax. We're not currently using any expiration times in the schemas. I set the machines up without a public hostname, since they were only accessible through an API or by those of us on the VPN. All of the nodes are in the same availability zone, but eventually we will want to have nodes in a different zone in the same region.
My questions are:
Is this a critical error?
Should I have a public hostname on the nodes?
Should they be on a public subnet (I would think not, for security purposes)?
Thanks in advance.
I found this:
https://github.com/riptano/ComboAMI/blob/2.5/ds2_configure.py#L136-L147
It seems to be the source of this message, and if that's the case, it seems harmless -- a lookup of the instance's IP address is used instead of the hostname.
If you aren't familiar with it, http://169.254.169.254/ (as you will see in the code) is a web server inside the EC2 infrastructure that provides an easy way to access metadata about the instance. The metadata is specific to the instance making the request, and the IP address doesn't change.
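For example, from inside an instance you can fetch these values directly. The key names below are standard metadata paths; the calls only succeed on EC2 itself, so this sketch falls back silently elsewhere:

```shell
# Query the EC2 instance metadata service (only reachable from inside EC2).
META=http://169.254.169.254/latest/meta-data

# fetch one metadata key; "|| true" so the sketch degrades gracefully off EC2
meta() { curl -s --max-time 2 "$META/$1" || true; }

meta local-ipv4        # the private IP -- always present
meta public-hostname   # empty when no public DNS is assigned, as in this VPC setup
```

An empty response for public-hostname is exactly what triggers the "publichostname not available as metadata" warning above.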
Related
I had a running instance, and then I became unable to connect to it via HTTP (80) and SSH (22). I tried to reboot the instance, but nothing came back up. This has happened to me twice in the past month.
Why does it happen? Can I do anything to fix it and/or prevent it from happening?
If I launch a new instance in the same region, it works.
Things to check when trying to connect to an Amazon EC2 instance:
Security Group: Make sure the security group allows inbound access on the desired ports (e.g. 80, 22) for the appropriate IP address range (e.g. 0.0.0.0/0). This solves the majority of problems.
Public IP Address: Check that you're using the correct Public IP address for the instance. If the instance is stopped and started, it might receive a new Public IP address (depending on how it has been configured).
VPC Configuration: Accessing an EC2 instance that is launched inside a Virtual Private Cloud (VPC) requires:
An Internet Gateway
A routing table connecting the subnet to the Internet Gateway
NACLs (Network ACLs) that permit through-traffic
If you are able to launch and connect to another instance in the same subnet, then the VPC configuration would appear to be correct.
The other thing to check would be the actual configuration of the operating system on the instance itself. Some software may be affecting the configuration so that the web server / ssh daemon is not working correctly. Of course, that is hard to determine without connecting to the instance.
If you are launching from a standard Amazon Linux AMI, ssh should work at any time. The web server (port 80) requires installation and configuration of software on the instance, which is your responsibility to maintain.
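As a sketch, the first two checks can be done from the AWS CLI, assuming it is installed and configured (both IDs below are placeholders):

```shell
INSTANCE_ID="i-0123456789abcdef0"   # placeholder instance ID
GROUP_ID="sg-0123456789abcdef0"     # placeholder security group ID

# guard so the sketch is a no-op where the AWS CLI isn't installed
if command -v aws >/dev/null; then
  # the instance's public IP and attached security groups
  aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
    --query 'Reservations[0].Instances[0].{IP:PublicIpAddress,SG:SecurityGroups}'
  # the security group's inbound rules
  aws ec2 describe-security-groups --group-ids "$GROUP_ID" \
    --query 'SecurityGroups[0].IpPermissions'
fi
```

If the first query shows no PublicIpAddress, or the second shows no rule for port 22/80, you have found the problem.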
I have started running Pig jobs on Amazon EMR using Hadoop YARN (AMI 3.3.1). However, as there is no longer a JobTracker in YARN, I can't seem to find a web UI that lets me track the number of mappers and reducers for a MapReduce job. When I try to access the ApplicationMaster link provided on the ResourceManager UI page, I am told that the page doesn't exist (picture provided below).
Does anyone know how I can access a UI through my web browser that will show me the current job status in terms of the number of mappers and reducers, the % completed for each, etc.?
Thanks
Once you click the ApplicationMaster link on the ResourceManager web page, you'll be redirected to the ApplicationMaster web UI. As EMR uses EC2 instances, each instance has two IP addresses associated with it: one used for private communication and another for public. EMR uses private IP addresses (private DNS) to set up Hadoop, hence you'll be redirected to a URL like this:
http://10.204.137.136:9046/proxy/application_1423027388806_0003/
which, as you can see, points to the instance's private IP address, so your browser cannot resolve it. You just have to replace the private IP address with the public IP address (or public DNS name) of that instance.
Obtaining the public IP address of an instance
Using the EC2 web interface:
You can log in to the AWS EC2 console and find the instance's IP addresses there.
Using the command line:
If you are logged into the instance and want to know its public IP address, issue the following command, which will give you back the public IP address of that instance:
curl http://169.254.169.254/latest/meta-data/public-ipv4
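Putting the two together, the substitution itself can be sketched with sed (the public IP here is a placeholder; the private URL is the example from above):

```shell
# the private-IP proxy URL the ResourceManager redirects to
PRIVATE_URL="http://10.204.137.136:9046/proxy/application_1423027388806_0003/"
PUBLIC_IP="54.12.34.56"   # placeholder -- use your instance's real public IP

# swap the private IP for the public one, keeping port and path intact
echo "$PRIVATE_URL" | sed "s#//10\.204\.137\.136:#//$PUBLIC_IP:#"
# -> http://54.12.34.56:9046/proxy/application_1423027388806_0003/
```

Open the resulting URL in your browser instead of the private one.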
Also take a look at this AWS documentation page on how to view web interfaces, which covers other options like setting up SSH tunneling and using a SOCKS proxy.
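The SSH-tunneling option mentioned there can be sketched as follows (the key path and master public DNS name are placeholders; "hadoop" is the usual login user on EMR instances):

```shell
SOCKS_PORT=8157
EMR_MASTER="ec2-54-12-34-56.compute-1.amazonaws.com"   # placeholder public DNS

# print the tunnel command; running it opens a SOCKS proxy on localhost:8157,
# which you then configure in the browser's proxy settings so that private
# DNS names resolve through the master node
echo "ssh -i ~/mykey.pem -N -D $SOCKS_PORT hadoop@$EMR_MASTER"
```

With the proxy active, the private-IP URLs in the YARN web UIs resolve without any manual rewriting.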
I have installed a single-node Hadoop cluster using Hortonworks/Ambari on an Amazon EC2 host.
Since I don't want this cluster running 24/7, I stop the instance when done. When I reboot the instance later, I get a new IP address, and then Ambari is no longer able to start the Hadoop-related services.
Is there a way other than completely redeploying to reconfigure the cluster so the services will start?
It looks like the IP address lives in various XML files under /etc, in the Ambari Postgres database, and possibly other places I haven't found yet.
I tried updating the XML files and the Postgres database with updated versions of the IP address and the internal and external DNS names where I could find them, but to no avail. I have not been able to restart the services.
The reason I am doing this is to save the deployment time, the data and configuration on HDFS, and other project-specific setup each time I restart the host.
Any suggestions?
Thanks!
An Elastic IP can be used. Also, since you mentioned it being a single-node cluster, you can use localhost or the private IP.
If you use an Elastic IP, your UIs will always be on the same public IP. However, if you use the private IP or localhost and do not associate your instance with an Elastic IP, you will have to look up the public IP every time you start the instance and then connect to the web UI using that IP.
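If you go the Elastic IP route, the association can be sketched with the AWS CLI (both IDs are placeholders; assumes a configured CLI):

```shell
INSTANCE_ID="i-0123456789abcdef0"           # placeholder instance ID
ALLOCATION_ID="eipalloc-0123456789abcdef0"  # placeholder Elastic IP allocation ID

# associate the Elastic IP with the instance (VPC-style association);
# guarded so the sketch is a no-op where the AWS CLI isn't installed
if command -v aws >/dev/null; then
  aws ec2 associate-address --instance-id "$INSTANCE_ID" \
    --allocation-id "$ALLOCATION_ID"
fi
```

After this, the public IP survives stop/start cycles of the instance.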
Thanks for the help; both Harman and TJ are correct. I haven't used an Elastic IP because I might have more than one of these running at a time, and for now at least, I don't mind looking up the public IP address.
Harman's suggestion of using "localhost" as the FQDN when setting up Ambari in the first place is a really good idea in retrospect. Unless I go through the whole setup again, that's water under the bridge for me, but I recommend it to others who might read this post.
In my case, I figured this out on my own before coming back to the page. The specific step I took was insanely simple after all, thanks to Occam's Razor.
I added the following line in /etc/hosts:
<new internal IP> <old internal dns name>
and then ran
ambari-server restart
from the command line. Then I was able to restart all services after logging into Ambari.
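The whole fix can be sketched as follows (both values are placeholders for the new private IP and the internal DNS name Ambari was originally configured with):

```shell
NEW_IP="10.0.0.42"                  # placeholder: the instance's new private IP
OLD_DNS="ip-10-0-0-7.ec2.internal"  # placeholder: the original internal DNS name

# this is the line to append to /etc/hosts (as root)
echo "$NEW_IP $OLD_DNS"
# then: ambari-server restart
```

Because Ambari keeps resolving the old DNS name, the hosts entry redirects it to the new address without touching the XML files or the Postgres database.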
I am working through this tutorial: http://mesosphere.io/docs/getting-started/cloud-install/
Just learning on an Ubuntu instance on Digital Ocean, I let the master process bind to the public IP, and the Mesos and Marathon web interfaces became publicly accessible. No surprises there.
Do Mesos and Marathon rely on Zookeeper to create private IPs between instances? Could you skip using Zookeeper by manually setting up a private network between instances? Then the proper way to start the master and slave processes is to bind to the secondary, private IPs of each instance?
Digital Ocean can set up private IPs automatically, but this is kind of a learning exercise for me. I am aware of the broad rule that administrator access to a server shouldn't come through a public IP. Another way of phrasing this posting is, does private networking provide the security for Mesos and Marathon?
Only starting with one Ubuntu instance, running both master and slave, for now. Binding to the loopback address would fix this issue for just one machine, I realize.
ZooKeeper is used for a few different things for both Marathon and Mesos:
1. Leader election
2. Storing state
3. Resolving the Mesos masters
At the moment, you can't skip ZooKeeper entirely because of 2 and 3 (although later versions of Mesos have their own registry, which keeps track of state). AFAIK, Mesos doesn't rely on ZooKeeper for the creation of private IPs -- it'll bind to whatever is available (but you can force this via the ip parameter). So you won't be able to forgo ZooKeeper entirely with a private network.
Private networking will provide some security for Mesos and Marathon - assuming you firewall off their access to the external world.
A good (although not necessarily the best) solution for keeping the instances on a private network is to set up an OpenVPN (or similar) network to one of the masters. Then, launch each instance on its private IP and make sure you also set the hostname parameter to that IP. Connect to the Mesos/Marathon web consoles via their private IPs over the VPN and everything should resolve correctly.
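Launching a master bound to a private IP can be sketched as below (the address and ZooKeeper URL are placeholders; --ip, --hostname, --zk, --quorum and --work_dir are standard mesos-master flags):

```shell
IP=10.0.0.1                   # placeholder private IP of this master
ZK=zk://10.0.0.1:2181/mesos   # placeholder ZooKeeper address

# guard so the sketch is a no-op on machines without Mesos installed
if command -v mesos-master >/dev/null; then
  mesos-master --ip="$IP" --hostname="$IP" --zk="$ZK" \
               --quorum=1 --work_dir=/var/lib/mesos
fi
```

Setting both --ip and --hostname to the private address keeps the web UI links resolving inside the VPN rather than advertising a public name.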
Mesos and Marathon don't create private IPs between instances.
For that, I suggest you use tinc, or directly a tinc Docker image.
Using this, I was able to do the configuration you want in 5 minutes. It's easier to configure than OpenVPN, and each host can connect to every other host; there is no need for a VPN server to route all the traffic.
Each node stores a private and public key pair for connecting to the other servers of the private network.
You should set up a private network for using Mesos.
After that, you can add to /etc/hosts all the hosts with their IPs on the internal network.
You will then be able to bind ZooKeeper using the private network:
zk://master-1:2181,master-2:2181,master-3:2181
Then the proper way to start the master and slave processes is to bind each to the secondary, private IP of its instance.
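For example, the /etc/hosts entries backing that zk:// connection string might look like this (the 10.0.0.x addresses are placeholders for your private network):

```shell
# lines to append to /etc/hosts on every node (as root), mapping the
# private-network IPs to the master hostnames used in the zk:// string
HOSTS='10.0.0.1 master-1
10.0.0.2 master-2
10.0.0.3 master-3'
echo "$HOSTS"
```

With these in place, master-1 through master-3 resolve identically on every node.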
Initially my issue was "How do I RDP into an EC2 instance without having to first find its IP address?" To solve that, I wrote a script that executes periodically on each instance. The script reads a particular tag value and updates the corresponding entry in Route53 with the public DNS name of the instance.
This way I can always rdp into web-01.ec2.mydomain.com and be connected to the right instance.
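A sketch of such a script, assuming the AWS CLI is available on the instance (the zone ID and record name are placeholders; the metadata URL and the Route53 change-batch format are real):

```shell
ZONE_ID="Z0PLACEHOLDER"           # placeholder hosted-zone ID
RECORD="web-01.ec2.mydomain.com"  # the record to keep pointed at this instance

# the instance's current public DNS name, from the metadata service
PUBLIC_DNS=$(curl -s --max-time 2 \
  http://169.254.169.254/latest/meta-data/public-hostname || true)

# build an UPSERT change batch for Route53
cat > /tmp/r53.json <<EOF
{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
  "Name": "$RECORD", "Type": "CNAME", "TTL": 60,
  "ResourceRecords": [{"Value": "$PUBLIC_DNS"}]}}]}
EOF

# guarded so the sketch is a no-op where the AWS CLI isn't installed
if command -v aws >/dev/null; then
  aws route53 change-resource-record-sets \
    --hosted-zone-id "$ZONE_ID" --change-batch file:///tmp/r53.json
fi
```

Run periodically (e.g. from cron), the UPSERT keeps the record current whether or not it already exists.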
As I continued with setting up my instances, I realized that to set up MongoDB replication, I will need to somehow refer to three separate instances. I cannot use the internal private IP addresses, as they keep changing (or are prone to change on instance stop/start and when the DHCP lease expires).
Trying to access web-01.ec2.mydomain.com from within my EC2 instance returns the internal IP address of the instance, which seems to be standard behaviour. Thus, by using the Route53 CNAMEs for my three instances, I can ensure that they can always be discovered by each other. I wouldn't be paying any extra data transfer charges, as the CNAMEs will always resolve to internal IPs. I would, however, be paying for all those Route53 queries.
I can run my script every 30 seconds, or even more often, to ensure that the DNS entries are as up to date as possible.
At this point, I realized that what I have in place is very much an Elastic IP alternative. Maybe not completely, but surely for all my use cases. So I am wondering, whether to use Elastic IP or not. There is no charge involved as long as my instances are running. It does seem an easier option.
What do most people do? If someone with experience with this could reply, I would appreciate that.
Secondly, what happens in those few seconds/minutes during which the instance loses its current private IP and gets a new internal IP? I'm assuming all existing connections get dropped. Does that affect the ELB health checks (a ping every 30 seconds)? I'm assuming that if I were using an Elastic IP, the DNS name would immediately resolve to the new IP, as opposed to, say, after my script executes. Assuming my script runs every 30 seconds, will there be only 30 seconds of downtime, or can there possibly be more? Will an Elastic IP always perform better than my scripted solution?
According to the official AWS documentation, a "private IP address is associated exclusively with the instance for its lifetime and is only returned to Amazon EC2 when the instance is stopped or terminated. In Amazon VPC, an instance retains its private IP addresses when the instance is stopped." Therefore checking every 30s whether something has changed nevertheless seems inherently wrong. This leaves you with two obvious options:
Update the DNS once at/after boot time
Use an elastic IP and static DNS
Used Elastic IPs don't cost you anything, and even parked ones cost only a little. If your instances are mostly up, use an Elastic IP. If they are mostly down, go the boot-time update route. If your instance sits in a VPC, not even the boot-time update is strictly needed (but in a VPC you probably have different needs and a more complex network setup anyway).
Another option that you could consider is to use a software defined datacenter solution such as Amazon VPC or Ravello Systems (disclaimer: our company).
Using such a solution will allow you to create a walled off private environment in the public cloud. Inside the environment you have full control, including your own private L2 network on which you manage IP addressing and can use e.g. statically allocated IPs. Communications with the outside (e.g. your app servers) happens via the IPs and ports that you configure.